Fast-paced, dynamic, and proactive, YouTube's Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to belong, create, and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.
As an Enforcement Detection Analyst on the YouTube Trust & Safety team, you will be directly responsible for ensuring that we identify and remove disruptive content and users as quickly and effectively as possible. In this role, you may be exposed to graphic, controversial, and sometimes offensive video content during team escalations, in line with YouTube's Community Guidelines.
This role requires on-call work on weekends and holidays on a rotational basis.
At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun, and we do it all together.
• Reduce undetected abuse on YouTube by developing and implementing strategies and methods to enforce YouTube content policies at scale across entities, leveraging machine learning where appropriate.
• Deliver improvements to our performance metrics based on an understanding of operational drivers. Leverage SQL and programming skills to ensure that solutions fit within the current complex product and backend infrastructure.
• Work closely with Policy and Operations teams to understand the ever-changing policy enforcement landscape, and with Engineering teams to develop and test features that address operational needs.
• Provide rapid incident response for abuse problems, operating with minimal guidance; analyze and draw conclusions on root causes, and define and implement mitigation steps.
• Understand the abuse landscape; cover key abuse vectors, including insight into the drivers and intent behind abuse (why), the methods used to acquire accounts, and gaps in enforcement (how).
• Bachelor's degree in Engineering, Computer Science, Mathematics, Statistics, a related technical field, or equivalent practical experience
• 3 years of experience working with machine learning, programming (e.g., Python, Java), database querying (e.g., SQL), and statistics
• Experience with creating metrics to measure performance of models, building dashboards, and reporting
• Master's degree in Engineering, Computer Science, Mathematics, Statistics, a related technical field, or equivalent practical experience
• Experience or demonstrated interest in content policy and/or anti-abuse operations at scale (including machine learning)
• Experience going above and beyond to make operational improvements and solve tough problems
• Ability to work with a variety of stakeholders to gather requirements, explain models, and iterate to make improvements
• Excellent problem-solving and analytical skills
• Demonstrated written and verbal communication skills
Jobcode: Reference SBJ-d8z33q-44-192-114-32-42 in your application.