Job Description
Paramount+, a direct-to-consumer digital subscription video on-demand and live streaming service from Paramount Global, combines live sports, breaking news, and a mountain of entertainment. The premium streaming service features an expansive library of original series, hit shows and popular movies across every genre from world-renowned brands and production studios, including BET, CBS, Comedy Central, MTV, Nickelodeon, Paramount Pictures and the Smithsonian Channel. The service is also the streaming home to unmatched sports programming, including every CBS Sports event, from golf to football to basketball and more, plus exclusive streaming rights for major sports properties, including some of the world's biggest and most popular soccer leagues. Paramount+ also enables subscribers to stream local CBS stations live across the US in addition to the ability to stream Paramount Streaming's other live channels: CBSN for 24/7 news, CBS Sports HQ for sports news and analysis, and ET Live for entertainment coverage.
Overview & Responsibilities
We are a passionate group serving the data needs of all CBS Digital Media properties, including CBS, CBSNews, CBS Sports, and related Sports Properties. Our team is made up of a variety of engineering roles, including Data, Data Product, and Ingestion Engineering. We are responsible for driving the CDM data strategy and best practices around data collection, pipelines, usage, and infrastructure.
The Senior Data Engineer should possess a deep sense of curiosity and a passion for data. They will be expected to use their knowledge of the data to build smart data products that drive the business and provide insights into the CBSi audience. They will have strong communication skills and the ability to convey knowledge of data structures and tools throughout the CBS Digital Media organization. This candidate will be expected to lead a project from inception to completion as well as help mentor members of the team on best practices and approaches around data. They will use their skills in reverse engineering, analytics, and creative experimentation to devise data and BI solutions. This engineer supports the development of data pipelines that incorporate machine learning algorithms and draw on disparate data sources.
Responsibilities
• Designs and develops highly scalable and reliable data engineering pipelines to process large volumes of data across many data sources in the cloud
• Identifies, designs and implements internal process improvements by automating manual processes and optimizing data delivery
• Develops and promotes best practices in data engineering
• Participates in the on-call rotation supporting our SLA
• Explores ways of improving efficiency by staying current on the latest technologies and trends, and introduces them to the team
• Finds answers to business questions through hands-on exploration of data sets using Jupyter, SQL, dashboards, statistical analysis, and data visualizations
• Mentors members of the team and department on best practices and approaches
• Leads initiatives to improve the quality and effectiveness of our data, working with other members of engineering, BI teams, and business units to implement changes
• Meets with members of the Product, Engineering, Marketing, and BI teams to discuss and advise on projects and to implement data-driven solutions that drive the business
• Breaks down highly complex data problems and communicates them as simple, feasible solutions
• Extracts patterns from large datasets and transforms data into an informational advantage
• Partners with the internal product and business intelligence teams to determine the best approach to data ingestion, structure, and storage, then works with the team to ensure these are implemented correctly
Basic Qualifications
• Bachelor's or Master's degree with 4+ years of work experience in the Data Engineering space
• Experience with data management systems, both relational and NoSQL (e.g., MongoDB)
• Proficient in Python
• Strong ETL concepts and experience with Data Modeling
• Ability to write complex SQL to perform common types of analysis and aggregations
• Experience with exploratory data analysis using tools such as IPython/Jupyter notebooks, pandas, and matplotlib (a brief example follows this list)
• Influences and applies data standards, policies, and procedures
• Familiar with Docker containerization
• Strong experience with version control systems (GitHub and Bitbucket)
• Strong problem-solving and creative-thinking skills
• Experience with training and providing guidance to junior developers on the team
• Demonstrated ability to develop ongoing technical solutions while maintaining documentation
• Experience developing solutions to business requirements via hands-on discovery and exploration of data
• Exceptional written and verbal communication skills, including the ability to communicate technical concepts to non-technical audiences, as well as translating business requirements into Data Solutions
• Builds strong commitment within the team to support the appropriate team priorities
• Stays current with new and evolving technologies via formal training and self-directed education
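To illustrate the kind of hands-on exploratory analysis described above, here is a minimal pandas sketch. The file name, column names, and metric are hypothetical placeholders and do not represent actual CDM data sets.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load one day of viewing events (hypothetical file and schema).
    events = pd.read_csv("viewing_events_2024-01-01.csv", parse_dates=["event_time"])

    # Aggregate minutes watched per property to answer a simple business question.
    minutes_by_property = (
        events.groupby("property")["minutes_watched"]
              .sum()
              .sort_values(ascending=False)
    )
    print(minutes_by_property.head(10))

    # Quick bar chart of the top properties.
    minutes_by_property.head(10).plot(kind="bar", title="Minutes watched by property")
    plt.tight_layout()
    plt.show()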
Additional Qualifications
• Experience with GCP or AWS Cloud Platform
• Experience building pipelines using Apache Airflow (see the sketch after this list)
• Experience with services like Google Dataflow, Google BigQuery, Pub/Sub, or similar
• Familiar with Apache Beam
• Familiar with Atlassian products Jira and Confluence
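As a sketch of the Airflow experience mentioned above, a minimal DAG written against the Airflow 2.x APIs might look like the following. The DAG id, task names, and callables are hypothetical and only stand in for a real ingestion pipeline.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        # Placeholder: pull raw events from a source system.
        print("extracting events")


    def load():
        # Placeholder: load transformed events into the warehouse.
        print("loading events")


    with DAG(
        dag_id="example_ingest_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task  # run extract before load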
#LI-FV 32450
#LI-REMOTE
Paramount is an equal opportunity employer (EOE) including disability/vet.