AI Red-Teamer — Adversarial AI Testing
Location: United States, United Kingdom, Canada
Posted: 19 days ago
Salary: Not specified
Role Description
At Mercor, we believe the safest AI is the one that’s already been attacked — by us. That’s why we’re building a pod of AI Red-Teamers: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.
This role may include reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.
What You’ll Do
- Red-team AI models and agents: jailbreaks, prompt injections, misuse cases, exploits
- Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
- Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
- Document reproducibly: produce reports, datasets, and attack cases customers can act on
- Flex across projects: support different customers, from LLM jailbreaks to socio-technical abuse testing
Qualifications
- You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
- You’re curious and adversarial: you instinctively push systems to breaking points
- You’re structured: you use frameworks or benchmarks, not just random hacks
- You’re communicative: you explain risks clearly to technical and non-technical stakeholders
- You’re adaptable: you thrive on moving across projects and customers
Requirements
- Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
- Cybersecurity: penetration testing, exploit development, reverse engineering
- Socio-technical risk: harassment/disinformation probing, abuse analysis
- Creative probing: psychology, acting, writing for unconventional adversarial thinking
Benefits
- Build experience in human data-driven AI red-teaming at the frontier of safety
- Play a direct role in making AI systems more robust, safe, and trustworthy
- The pay rate for this role may vary by project, customer, and content category. Compensation will be aligned with the level of expertise required, the sensitivity of the material, and the scope of work for each engagement.