Principal Data Engineer

Entercom

Denver, CO | 10-02-2020

  • Paid
  • Full Time
  • Senior (5-10 years of experience)
Job Description

Location: This opportunity is available remotely; however, the majority of the team is based in Denver.

The Data team is an elite team of data, marketing strategy, and technology specialists with expertise in the ever-changing universe of precision marketing, ad technologies, and data architecture and management.

The Principal Engineer owns the definition, execution, and delivery of the technical data architecture for Entercom's data team. The work will improve the quality, reliability, accuracy, and consistency of our data through project-specific data pipelines, validation tools, and the team's overall data model. The candidate will work hands-on with product specialists, translating their data requirements into implemented solutions. As part of the Data team, you will help build data pipelines that empower the organization to make more informed decisions.

The successful candidate will have well-rounded experience developing and delivering solutions that create value for the organization and its clients.

Responsibilities
• Fully own critical portions of Entercom's data model
• Collaborate with product teams, data analysts, and data scientists to help design and build data-forward solutions
• Design, build, and deploy streaming and batch data pipelines capable of processing and storing petabytes of data quickly and reliably
• Integrate with a variety of data and metrics providers, including customer data platforms (CDPs), advertising, mobile/web analytics, and consumer devices
• Build and maintain dimensional data warehouses in support of business intelligence and optimization product tools
• Develop data catalogs and data validations to ensure clarity and correctness of key business metrics
• Drive and maintain a culture of quality, innovation, and experimentation
• Coach data engineers on best practices and technical concepts for building large-scale data platforms

Qualifications
• 7-10 years of experience with object-oriented languages, data design concepts, and large-scale enterprise data architecture deployments
• Experience deploying and running AWS-based data solutions and familiarity with tools such as CloudFormation, IAM, Athena, Neptune, Redshift, and Kinesis
• Experience engineering production-level big-data solutions using technologies such as Databricks or EMR, S3, and Spark, with an in-depth understanding of data partitioning and sharding techniques
• Familiarity with metadata management, data lineage, and principles of data governance
• Experience loading and querying cloud-hosted databases such as Redshift and Snowflake
• Experience building streaming data pipelines using Kafka, Kinesis, Spark, or Flink

Preferred Qualifications
• Familiarity with binary data serialization formats such as Parquet, Avro, and Thrift
• Experience deploying data notebooks and analytic environments such as Jupyter and Databricks
• Knowledge of the Python data ecosystem, including pandas and NumPy
• Experience building and deploying ML pipelines: training models, feature development, data integrations, regression testing, predictive modeling
• Experience with graph-based and event-based data workflows using Apache Airflow, AWS Lambda, Docker, and/or Kubernetes

Required Education
• Bachelor's degree in Computer Science or a related field
• Master's degree in Computer Science or equivalent preferred

Jobcode: Reference SBJ-r7441p-18-191-189-85-42 in your application.