Full Time Job

Principal Data Engineer

Peacock

Universal City, CA | 09-22-2021
 
  • Paid
  • Full Time
  • Senior (5-10 years) Experience
Job Description

Responsibilities
Welcome to Peacock, the dynamic new streaming service from NBCUniversal. Here you'll find more than a job. You'll find a fast-paced, high-flying team for unique birds that want to be at the epicenter of technology, sports, news, TV, movies, and more. Our flock works hard to connect people to what they love, each other, and the world around them by creating shared experiences through culture-defining entertainment.
As a company, we embrace the power of difference. Our team is committed to creating an organization that champions diversity and inclusivity for all by curating content and a workforce that represents the world around us. We continue to challenge ourselves and the industry by being customer-centric, data-driven creatures of innovation. At Peacock, we are determined to forge the next frontier of streaming through creativity, teamwork, and talent.
Here you can fly to new heights!

Role Description:
As part of the Direct-to-Consumer Decision Sciences team, the Principal Data Engineer will be responsible for creating a connected data ecosystem that unleashes the power of our streaming data. We gather data from across all customer and prospect journeys in near real time to enable fast feedback loops across territories; combined with our strategic data platform, this data ecosystem is at the core of our ability to make intelligent customer and business decisions.

In this role, the Principal Data Engineer will share responsibility for the development and maintenance of optimized, highly available data pipelines that facilitate deeper analysis and reporting by the business, as well as support ongoing operations related to the Direct-to-Consumer data ecosystem.

Responsibilities include, but are not limited to:
• Manage a high-performance team of Data Engineers and contractors
• Maintain and enhance solutions combining data blending, profiling, mining, statistical analysis, and machine learning to better define and curate models, test hypotheses, and deliver key insights
• Contribute to and help lead teams in designing, building, testing, scaling, and maintaining data pipelines from a variety of source systems and streams (internal, third-party, cloud-based, etc.), according to business and technical requirements
• Maintain and enhance observable, reliable, and secure software, with a focus on automation and GitOps
• Continually improve the codebase, with active participation and oversight across all aspects of the team, including agile ceremonies
• Take an active role in story definition, assisting business stakeholders with acceptance criteria
• Work with other Principal Engineers and Architects to share and contribute to the broader technical vision
• Develop and champion best practices, striving towards excellence and raising the bar within the department
• Operationalize data processing systems (DevOps)

Qualifications/Requirements
• 8+ years of relevant experience in Data Engineering
• Experience with near real-time and batch data pipeline development in a similar big data engineering role
• Programming skills in one or more of the following: Python, Java, Scala, R, or SQL, and experience writing reusable, efficient code to automate analyses and data processes
• Experience processing structured and unstructured data into a form suitable for analysis and reporting, integrating with a variety of data and metric providers ranging from advertising and web analytics to consumer devices
• Experience implementing scalable, distributed, and highly available systems using Google Cloud
• Hands-on programming experience with the following (or similar) technologies: Apache Beam, Scio, Apache Spark, and Snowflake
• Experience in progressive data application development, working in large-scale/distributed SQL, NoSQL, and/or Hadoop environments
• Experience building and maintaining dimensional data warehouses in support of BI tools
• Experience developing data catalogs and data-cleanliness processes to ensure clarity and correctness of key business metrics
• Experience building streaming data pipelines using Kafka, Spark, or Flink
• Data modeling experience (operationalizing data science models/products) a plus
• Bachelor's degree with a specialization in Computer Science, Engineering, Physics, or another quantitative field, or equivalent industry experience

Desired Characteristics
• Experience with graph-based data workflows using Apache Airflow
• Experience building and deploying ML pipelines: training models, feature development, regression testing
• Strong test-driven development background, with an understanding of the levels of testing required to continuously deliver value to production
• Experience with large-scale video assets
• Ability to work effectively across functions, disciplines, and levels
• Team-oriented and collaborative approach with a demonstrated aptitude, enthusiasm, and willingness to learn new methods, tools, practices, and skills
• Ability to recognize discordant views and take part in constructive dialogue to resolve them
• Pride and ownership in your work and confident representation of your team to other parts of NBCUniversal

Jobcode: Reference SBJ-r1y90y-3-144-113-30-42 in your application.