OpenSea is the first and largest marketplace for non-fungible tokens, or NFTs. Applications for NFTs include collectibles, gaming items, domain names, digital art, and many other items backed by a blockchain. OpenSea is an open, inclusive web3 platform, where individuals can come to explore NFTs and connect with each other to purchase and sell NFTs. At OpenSea, we’re excited about building a platform that supports a brand new economy based on true digital ownership and are proud to be recognized as Y Combinator’s #4 ranked top private company.
When hiring candidates, we look for signals that a candidate will thrive in our culture, where we default to trust, embrace feedback, grow rapidly, and love our work. We also know how critical it is to celebrate and support our differences. Employing a team rich in diverse thoughts, experiences, and opinions enables our employees, our product, and our community to flourish. We are dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. To help facilitate this, we support remote, hybrid, or onsite work in New York City, San Francisco, or Silicon Valley for the majority of our opportunities.
We are looking for experienced engineers – Machine Learning, Backend, Full Stack, and Data Engineers – to join our Trust and Safety Engineering organization. In this role, you will build our foundational risk platform and detection models, and you will deploy risk management features across our product surface to protect the ecosystem. You will collaborate closely with our Product, Policy, Operations, Data, and Search & Discovery teams. If you are enthusiastic about earning users’ trust and have experience in adversarial modeling, we want to hear from you!
What You’ll Do
- Design and build scalable risk data infrastructure and a low-latency detection engine
- Design and build platform solutions for human review workflows, evidence collection, reporting, and more
- Develop rules and ML models to detect policy violations and identify bad actors
- Define success metrics, then design and run A/B tests to measure product impact
- Partner with Product, Policy, Legal, Operations, and Communications to deploy punitive, corrective, and friction-based solutions
- Lead the technical direction to scale our detection, review, and enforcement systems ahead of business growth
What We’re Looking For
If you don’t think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box, and we’re looking for someone who is excited to join the team.
- Bachelor’s degree in Computer Science or related technical field
- 4+ years of experience in adversarial domains such as abuse/fraud/spam prevention, anomaly detection, risk management, and trust and safety
- Proven record of leading engineering design, implementation, and deployment
- Experience with service-oriented architectures, big-data stacks, and experimentation frameworks
- Experience with one or more ML frameworks: TFX, PyTorch, scikit-learn, or equivalent
- Growth-minded, data-driven, and impact-driven
Nice to Have
- Web3 ecosystem experience