Technical Abuse Investigator
OpenAI
Compensation
Salary & market context: 339% above the BLS national median of $74,680.
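For context, the two figures above imply an approximate salary; a minimal sketch of the arithmetic, assuming "339% above the median" means the median multiplied by (1 + 3.39):

```python
# Implied figure from the two numbers stated above.
# Assumption: "339% above the median" = median * (1 + 3.39).
bls_median = 74_680           # BLS national median (stated in listing)
pct_above = 3.39              # 339%, expressed as a fraction
implied = bls_median * (1 + pct_above)
print(f"${implied:,.0f}")     # roughly $327,845
```

This is an inference from the listing's own numbers, not a figure the employer states directly.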
Requirements
- In this role, you will:
  - Detect, investigate, and disrupt abuse and harm alongside policy, legal, global affairs, security, and engineering teams, working through complex datasets.
  - Develop and iterate on abuse signals and investigative methods, scaling one-off insights to reduce manual effort and expand coverage.
  - Build and maintain lightweight technical solutions (e.g., SQL/Python data pipelines, investigation templates, dashboards, or internal utilities) for investigators focused on specific harm domains.
  - Develop a deep understanding of OpenAI’s products, data systems, and enforcement mechanisms, and collaborate with engineering and data teams to improve investigative tooling, data quality, and workflows.
  - Communicate investigation findings effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries.
  - Rotate (infrequently) into an incident response role that requires rapid threat triage, investigation, mitigation, sound judgment, and concise briefing to senior leadership.
  - Be someone people enjoy working with.
  - Demonstrate a proven ability to quickly learn new processes, systems, and team dynamics while thriving in ambiguous, rapidly changing, high-pressure environments.
- You might thrive in this role if you:
  - Have deep expertise in at least two of the following domains: agentic AI misuse; automation; encryption; terrorism; fraud; violence; child exploitation; data science; dashboarding; API abuse; product exploits; prompt injection; distillation.
  - Have 5+ years of experience investigating and mitigating abuse in a relevant domain.
  - Have 4+ years of relevant technical project work.
  - Are a strong presenter on safety work in public or policy settings.
  - Have experience scaling or automating processes, especially with LLMs or ML techniques.

About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
Responsibilities
- You will scale or automate highly manual, important, and nuanced processes.
- You will design and implement lightweight technical solutions (such as notebook templates, data pipelines, or internal utilities) that enable specialized investigators to identify, track, and action abuse at a greater scale than a single investigator can currently achieve.
About the Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.
The Intelligence and Investigations team supports this mission by detecting, investigating, and disrupting the misuse of our products, particularly critical or novel harms. Our work enables partner teams to develop data-backed model policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, rewarding applications.
About the Role
As a Technical Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting, investigating, and disrupting malicious use of OpenAI’s platform. You will further scale parts of the investigative process to help our team disrupt harm at scale. This role combines traditional investigative judgment with strong technical fluency: much of the work involves navigating complex datasets to surface actionable abuse signals, not just reviewing individual reports.