The HBO Max Product Analytics team is looking to hire a motivated Sr. Analytics Engineer who will work closely with a team of data engineers and data analysts to build a state-of-the-art data platform that solves a variety of data-driven use cases for HBO Max products. A strong focus will be placed on building data quality tools and ensuring the delivery of reliable data. The Sr. Analytics Engineer will collaborate with the Product Analytics team to generate reporting and analyses aimed at developing a deep understanding of in-app user behavior.
You are an analytics engineer who will work hand-in-hand with technical data partners to design and build a robust semantic layer that supports self-service Business Intelligence for business users and executive stakeholders. This position requires someone who can exercise deep technical abilities while also working with the team to socialize meaningful, actionable insights across the organization.
This individual will bring expertise in a wide variety of big data processing frameworks (both open source and proprietary), large-scale database systems (OLAP and OLTP), stream data processing, API development, machine learning operationalization, and cloud automation to build and support all the data needs across the HBO Max platform.
• Collaborate closely with Product Analytics, Engineering, and Data Engineering.
• Build data models in Snowflake and Looker that support flexible querying and data visualization.
• Advance automation efforts in data processing & testing that help the team spend less time manipulating & validating data and more time analyzing it.
• Work closely with Data Engineering and other technology teams as needed to provide and iterate on technical data requirements, and put developed data solutions into production.
• Take initiative in translating various business requirements into data quality architecture.
• Wear multiple hats on a lean team: analyzing data, writing requirements to assure quality, or supporting the platform you built.
• Change how we think, act, and utilize our data by performing exploratory and quantitative analytics, data mining, and discovery.
• Think of new ways to help make our data platform more scalable, resilient and reliable and then work across our team to put your ideas into action.
• Ensure data quality by implementing re-usable data quality frameworks.
• Bachelor's degree in computer science or a similar discipline.
• 5+ years of experience in software engineering and/or data engineering
• Expertise in programming and scripting languages (Python, Java, or similar).
• Expertise in writing queries and deriving insights from data.
• Expertise in large-volume, scalable, reliable, high-quality data processing platforms (both streaming [Kafka, Kinesis] and batch [Spark, Flink]) is a must.
• Expertise in different types of databases (Redshift, Snowflake) and query engines (SQL, Spark SQL, Hive).
The Nice to Haves
• Cloud (AWS) experience is preferred
• NoSQL (Apache Cassandra, DynamoDB, or similar) is a huge plus
• Experience in operationalizing and scaling machine learning models is a huge plus
• Experience with a variety of data tools and frameworks (e.g., Apache Airflow, Druid) is a huge plus
• Direct-to-consumer digital business experience preferred
• Experience with analytics and visualization tools such as Looker and Tableau is preferred
• Ability to work in a fast-paced, high-visibility, agile environment and to take initiative
• Strong interpersonal, communication and presentation skills
• Strong team focus with outstanding organizational and resource management skills
Jobcode: Reference SBJ-d891kk-3-236-212-116-42 in your application.