Data Scientist, Integrity
OpenAI
Compensation
Salary & market context
314% above the BLS national median of $74,680
Requirements
Top requirements
You might thrive in this role if you:
- Have experience on a highly technical trust and safety team and/or have worked closely with policy, content moderation, or security teams.
- Can use coding languages (Python preferred) to programmatically explore large datasets and generate actionable insights to solve problems.
- Have a proven ability to propose, design, and run rigorous experiments (A/B tests, quasi-experiments, simulations) with clear insights and actionable product recommendations, leveraging SQL and Python.
- Have excellent communication skills and a track record of influencing cross-functional partners, including product managers, engineers, policy leads, and executives.
- Have 5+ years of quantitative experience in ambiguous environments, ideally as a data scientist at a hyper-growth company or research org, with exposure to fraud, abuse, or security problems.
- Bonus: experience deploying scaled detection solutions using large language models, embeddings, or fine-tuning.
- AI is an extremely powerful tool that must be created with safety and human needs at its core. To achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
Responsibilities
What you'll do
- In the data scientist role, you will be responsible for discovering and mitigating new types of misuse, and scaling our detection techniques and processes.
- In this role, you will:
- Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience.
- Work closely with finance, security, product, research, and trust & safety operations to holistically combat fraudulent and abusive actors on our system.
- Stay abreast of the latest techniques and tools to stay several steps ahead of determined and well-resourced adversaries.
- Utilize GPT-5 and future models to more effectively combat fraud and abuse.
Role snapshot
About the role
About the Team
The Applied team safely brings OpenAI's technology to the world. We released ChatGPT, Plugins, DALL·E, and the APIs for GPT-4, GPT-3, embeddings, and fine-tuning. We also operate inference infrastructure at scale. There's a lot more on the immediate horizon.
Our customers build fast-growing businesses around our APIs, which power product features that were never before possible. ChatGPT is a prime example of what is currently possible. We simultaneously ensure that our powerful tools are used responsibly. Safe deployment is more important to us than unfettered growth.
The Scaled Abuse team works within our Applied Engineering organization, identifying and responding to fraudsters on our platform. We are looking for a data scientist with anti-fraud and abuse experience to help architect and build our next-generation anti-abuse systems.