Job Description
TRUST AND SAFETY
What We Do
The Epic Trust and Safety team provides a safer experience for Epic users. We work across multiple products and services to improve technology and craft transparent policies so our players and users can have positive experiences on our platforms.
What You'll Do
As a Trust and Safety AI Red Team Analyst at Epic Games, you will be instrumental in protecting our gaming ecosystem by identifying and mitigating trust and safety risks in AI-driven features. Your work will ensure that our games remain safe, inclusive, and enjoyable for players by proactively addressing potential abuses of our content and community rules. The ideal candidate is a creative, mission-driven investigator who is passionate about safeguarding player experiences and upholding the integrity of AI systems in gaming.
In this role, you will:
• Take a leadership role in developing, prototyping, and teaching novel red teaming techniques and trust and safety methodologies to enhance team capabilities
• Investigate and understand how adversarial attacks, such as prompt injections, data poisoning, or bias exploitation, could manifest in Epic's products
• Lead red teaming efforts to simulate real-world scenarios, identify vulnerabilities, and design forward-looking strategies to mitigate risks and ensure player safety
• Proactively look for and identify undetected vulnerabilities or misuse patterns in AI systems by leveraging diverse investigative resources and methodologies
• Analyze qualitative and quantitative data to identify trends, quantify risks, and provide clear, evidence-based findings to support trust and safety initiatives
• Collaborate with AI researchers, product managers, and trust and safety teams to mitigate identified risks and align AI systems with Epic policies and regulatory standards
• Provide launch support, delivering quick fixes during AI-involved product launches
What We're Looking For
• 5+ years of experience conducting investigations or red teaming in fields such as cybersecurity, AI ethics, trust and safety, or related areas
• Proven ability to develop multi-source, evidence-based findings and communicate them effectively to technical and non-technical stakeholders
• Strong proficiency in open-source research and adversarial testing to uncover hidden vulnerabilities or risks in AI systems
• Experience managing or contributing to projects with organization-wide impact, involving cross-functional collaboration with diverse teams
• Ability to prioritize tasks and execute independently with minimal oversight
• Subject matter expertise or prior work experience with trust and safety challenges in AI systems, particularly generative AI models
• Experience conducting data analysis using Python, SQL, or similar tools to support investigations or red teaming efforts
• Familiarity with AI governance, ethical AI frameworks, or emerging regulatory standards for AI safety
• Experience collaborating with distributed teams across multiple locations or time zones
• MS or equivalent experience in Computer Science, Cybersecurity, AI Ethics, Data Science, or a related field
Note to Recruitment Agencies: Epic does not accept unsolicited resumes or approaches from any unauthorized third party (including recruitment or placement agencies), i.e., a third party with whom we do not have a negotiated and validly executed agreement. We will not pay any fees to any unauthorized third party. Further details on these matters can be found here.
Jobcode: Reference SBJ-6kb849-216-73-216-111-42 in your application.