At Disney Streaming, data is central to measuring all aspects of delivering streaming video. The Data Engineering team within Services & Data Engineering builds the infrastructure and tooling to process extremely large volumes of data using public cloud and open source technologies.
Our team is hiring a Sr Data Engineer to build a secure data platform that powers our streaming products and can scale with the growth of our data and organization. You'll raise the bar by innovating and adopting new patterns and technologies that keep the team at the leading edge of development. You'll come to understand the challenges of delivering a high-quality, flawless media streaming experience across mobile, connected devices, and web, all while using the latest technologies available from AWS and beyond.
At Disney Streaming we feel strongly that teams should own their own processes, decide their own technologies, and design solutions for the long term. If you're interested in working in a highly collaborative team environment like this, please get in touch - we'd love to hear from you!
• Work within a cross-functional team of engineers to build software for a large-scale data processing ecosystem supporting real-time and batch data that underpins all analytic and operational data flows
• Build tools and services to support data discovery, lineage, governance, and privacy compliance across the data platform
• Build and maintain a CI/CD process for infrastructure components to ensure high availability while remaining nimble
• Influence and drive software engineering and architecture best practices and standards within the team and wider organization, along with a culture of quality, innovation, and experimentation
• Work with other teams: evangelize the platform, best practices, and data-driven decisions; identify new use cases and features and drive adoption
• 5+ years of relevant professional experience
• Proficient with Python or Scala (preferably both!)
• Proven track record of engineering big data solutions using technologies like Databricks, EMR, and Spark
• Deep experience deploying and running cloud-based data solutions (AWS, GCP, Azure)
• Experience with data warehouses such as Redshift or Snowflake
• Ability to dive deep into any technical component as well as understand and drive the overall systems architecture
• You care deeply about craftsmanship in your software, and can work backwards from the customer experience
• You have excellent written and verbal communication skills
• Experience building data pipelines using tools such as Kinesis, Kafka, or Flink
• Familiarity with workflow scheduler tools such as Airflow
• Familiarity with metadata management, data lineage, and principles of data governance
• Bachelor's degree in Computer Science or a related field, or equivalent work experience
Jobcode: Reference SBJ-rekkw7-3-235-120-150-42 in your application.