Job Description
Overview and Responsibilities:
Paramount's Data Technology Solutions (DTS) team plays a crucial role in Paramount's global engineering organization. Through our projects we ensure that millions of users worldwide can enjoy Paramount content through web, mobile, and TV applications. We are seeking a motivated and diligent Junior Data Engineer to join our team! This role will focus on designing, implementing, and maintaining scalable data pipelines, ensuring data quality, and enabling robust analytics solutions. The ideal candidate is passionate about working with data, has experience with Python, Airflow, and cloud platforms (AWS, Azure, or GCP), and is eager to learn and grow in a fast-paced environment.
Responsibilities:
• Design, develop, and maintain robust and scalable data pipelines using Python and Apache Airflow
• Implement ETL (Extract, Transform, Load) processes for diverse data sources, ensuring data quality and consistency.
• Collaborate with data scientists, analysts, and other engineers to understand data needs and translate them into efficient, well-documented solutions.
• Monitor and troubleshoot data pipelines, proactively identifying and resolving issues to maintain high availability.
• Contribute to the development and improvement of our data infrastructure, using open-source technologies and cloud-native services. While our current platform is GCP, experience with other cloud providers (AWS, Azure) is highly valued as we strive for platform-agnostic solutions.
• Work with Kubernetes to manage and orchestrate our data pipelines.
• Participate actively in agile development sprints, contributing to sprint planning, daily stand-ups, and retrospectives.
• Write clean, well-documented, and easily testable code, following established coding standards.
• Participate in code reviews, offering constructive feedback and continuously improving our codebase.
• Stay abreast of the latest technologies and trends in data engineering, proactively suggesting improvements to our processes and tools.
Basic Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field
• 3+ years of experience in data engineering or a related field
• Strong programming skills in Python
• Experience with Apache Airflow or a similar workflow management tool
• Experience with Kubernetes for container orchestration
• Experience with at least one major cloud provider (GCP, AWS, or Azure); experience with multiple is a strong plus
• Demonstrated ability to adapt to different cloud environments is crucial
• Understanding of data warehousing concepts and principles.
• Proficiency with SQL and experience with relational and/or NoSQL databases
• Excellent problem-solving and analytical skills
• Strong communication and collaboration skills, essential for working effectively within an agile team
Additional Qualifications:
• Experience with data modeling and schema design
• Experience with big data technologies (e.g., Spark, Hadoop)
• Experience with CI/CD pipelines and Terraform
• Experience with containerization tools (e.g., Docker, Kubernetes)
• Experience using Git
ADDITIONAL INFORMATION
Hiring Salary Range: $80,000.00 - $95,000.00. The hiring salary range for this position applies to New York, California, Colorado, Washington state, and most other geographies. Starting pay for the successful applicant depends on a variety of job-related factors, including but not limited to geographic location, market demands, experience, training, and education. The benefits available for this position include medical, dental, vision, 401(k) plan, life insurance coverage, disability benefits, tuition assistance program, and PTO or, if applicable, as otherwise dictated by the appropriate Collective Bargaining Agreement. This position is bonus eligible.
https://www.paramount.com/careers/benefits
Paramount is an equal opportunity employer (EOE) including disability/vet.
Jobcode: Reference SBJ-gq129z-216-73-216-202-42 in your application.