
Full Time Job

Sr. Staff Data Engineer

HBO

Culver City, CA | 04-08-2021

  • Paid
  • Full Time
  • Senior (5-10 years) Experience
Job Description

Sr. Staff Data Engineer (Content Metadata) - HBO Max

The Job

WarnerMedia seeks a Sr. Data Engineer for the Data Intel&Op - HBO Max department. We have created a new Data Insights and Operations (DIO) group within the HBO Max Direct-To-Consumer organization to make the streaming products more data-driven. This team is looking to hire a motivated Sr. Data Engineer (Content Metadata) who will work closely with a team of highly motivated data engineers and data scientists to build a state-of-the-art metadata platform. Together, we will build a rich content metadata library for HBO Max by processing content and other available metadata, helping optimize content personalization, marketing, and other areas of the business that use content metadata. On a daily basis, you will process both batch and streaming data, apply techniques ranging from simple processing rules to Natural Language Processing to curate metadata, and build APIs and other methods to distribute the results.

This individual will bring expertise in a wide variety of big data processing frameworks (both open source and proprietary), large-scale database systems, stream data processing, API development, machine learning operationalization, and cloud automation to build and support all the data needs across the HBO Max platform.

The Daily
• Design and develop the content metadata platform, which includes batch and stream data processing and applies techniques ranging from simple rules to machine learning to curate metadata from content.
• Build and maintain content metadata graph.
• Build software across our entire data platform, including event-driven data processing, storage, and serving through scalable, highly available APIs, using cutting-edge technologies.
• Wear multiple hats on a lean team, whether that means analyzing data, testing your code to ensure quality, or supporting the platform you built.
• Change how we think, act, and utilize our data by performing exploratory and quantitative analytics, data mining, and discovery.
• Think of new ways to help make our data platform more scalable, resilient and reliable and then work across our team to put your ideas into action.
• Ensure performance isn't our weakness by implementing and refining robust data processing, REST services, RPC (in and out of HTTP), and caching technologies.
• Help us stay ahead of the curve by working closely with data architects, stream processing specialists, API developers, our DevOps team, and analysts to design systems that can scale elastically in ways that make other groups jealous.
• Work closely with data analysts and business stakeholders to make data easily accessible and understandable to them.
• Ensure data quality by implementing reusable data quality frameworks.
• Work closely with various other data engineering teams to roll out new capabilities.
• Build processes and tools to maintain machine learning pipelines in production.
• Develop and enforce data engineering, security, and data quality standards through automation.
• Participate in supporting the platform 24x7.
• Be passionate about growing the team: hire and mentor engineers and analysts.
• Be responsible for cloud cost and improving efficiency.

The Essentials
• Bachelor's degree in Computer Science or a similar discipline.
• 5+ years of experience in software engineering.
• 2+ years of experience in data engineering.
• A go-above-and-beyond attitude.
• Ability to work in a fast-paced, high-pressure, agile environment.
• Ability and willingness to learn any new technologies and apply them at work in order to stay ahead of the curve.
• Expertise in at least a few programming languages: Java, Scala, Python, or similar.
• Expertise in operationalizing machine learning code, especially NLP methods.
• Expertise in building and managing large-volume data processing platforms (both streaming and batch) is a must.
• Expertise in building microservices and managing containerized deployments, preferably using Kubernetes.
• Expertise in distributed data processing frameworks such as Apache Spark, Flink, or similar.
• Expertise in SQL, Spark SQL, Hive, etc.
• Experience with graph databases is ideal.
• NoSQL experience (Apache Cassandra, DynamoDB, or similar) is a huge plus.
• Experience in operationalizing and scaling machine learning models is a huge plus.
• Cloud (AWS) experience is preferred.
• Ability to learn and teach new languages and frameworks.
• Excellent data analysis skills.
• Direct-to-consumer digital business experience preferred.
• Strong interpersonal, communication and presentation skills.
• Strong team focus with outstanding organizational and resource management skills.

Jobcode: Reference SBJ-g4y5pv-3-149-24-159-42 in your application.