OpenAI
Creating safe AGI that benefits all of humanity.
Technical Abuse Investigator
Location
United States
Posted
8 days ago
Salary
Not specified
Job Description
As a Technical Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting, investigating, and disrupting malicious use of OpenAI’s platform. This role combines traditional investigative judgment with strong technical fluency.
Detect, investigate, and disrupt abuse and harm using complex datasets, in partnership with policy, legal, global affairs, security, and engineering teams.
Develop and iterate on abuse signals and investigative methods, scaling one-off insights to reduce manual effort and expand coverage.
Build and maintain lightweight technical solutions (e.g., SQL/Python data pipelines, investigation templates, dashboards, or internal utilities) for investigators focused on specific harm domains.
Develop a deep understanding of OpenAI’s products, data systems, and enforcement mechanisms, and collaborate with engineering and data teams to improve investigative tooling, data quality, and workflows.
Communicate investigation findings effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries.
Rotate (infrequently) into an incident response role that requires rapid threat triage, investigation, mitigation, sound judgment, and concise briefing to senior leadership.
Be someone people enjoy working with.
Proven ability to quickly learn new processes, systems, and team dynamics while thriving in ambiguous, rapidly changing, and high-pressure environments.
Job Requirements
- Deep expertise in at least two of the following domains: agentic AI misuse; automation; encryption; terrorism; fraud; violence; child exploitation; data science; dashboarding; API abuse; product exploits; prompt injection; distillation.
- 5+ years of experience investigating and mitigating abuse in a relevant domain.
- 2+ years of relevant technical projects.
- Strong presenter, with experience discussing safety work in public or policy settings.
- Experience scaling or automating processes, especially with LLMs or ML techniques.
- Willingness to participate in an on-call rotation to handle urgent escalations outside of normal work hours.
- Note: some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material.
- This role will work PST and is open to remote work within the United States, though we heavily prefer candidates based in San Francisco or New York.