
Full Time Job

Remote Data Scientist

CBS

Fort Lauderdale, FL 09-15-2021
 
  • Paid
  • Full Time
Job Description

Job Position Title: Data Scientist / Machine Learning (ML) Engineer

Department: Data & Insight Group

Location: Fort Lauderdale, FL

About Us:

ViacomCBS Streaming (formerly CBS Interactive) is a division of ViacomCBS that encompasses free, paid, and premium streaming services, including Pluto TV, Paramount+, CBS Sports Digital, and CBS News Digital.

Role Details:

Apply machine learning (e.g., supervised methods such as logistic regression, SVMs, and random forests, as well as deep learning and reinforcement learning) to existing products and services to improve their effectiveness and the overall user experience across our digital properties.
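
For illustration only (not part of the role requirements): a minimal sketch of how such supervised methods might be compared on a hypothetical engagement-prediction task, using scikit-learn on synthetic data. The dataset, features, and metric are placeholders.

# Illustrative sketch: comparing a few supervised methods on synthetic data.
# The "engagement" labels and features are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")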

Leverage datasets such as traffic and video consumption data from Adobe Analytics clickstream, purchase/subscription data, audience data from syndicated audience measurement services (comScore/Nielsen), and ad revenue data from DoubleClick to model, analyze, and predict user behavior.

Utilize various statistical measures (mean, median, variance, etc.), distributions (uniform, normal, binomial, Poisson, etc.), and analysis methods (ANOVA, hypothesis testing, etc.) to build and validate models from observed data.
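
For illustration only: a minimal sketch of the kind of hypothesis testing this involves, using SciPy on synthetic watch-time samples for three hypothetical experiment variants.

# Illustrative sketch: one-way ANOVA plus a pairwise t-test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
variant_a = rng.normal(loc=30.0, scale=8.0, size=500)  # hypothetical watch time (minutes)
variant_b = rng.normal(loc=31.5, scale=8.0, size=500)
variant_c = rng.normal(loc=30.2, scale=8.0, size=500)

f_stat, anova_p = stats.f_oneway(variant_a, variant_b, variant_c)
print(f"ANOVA: F={f_stat:.2f}, p={anova_p:.4f}")

t_stat, t_p = stats.ttest_ind(variant_a, variant_b)
print(f"A vs. B t-test: t={t_stat:.2f}, p={t_p:.4f}")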

Engineering responsibilities include:

Build and implement production machine learning systems

Maintain the health of machine learning systems, including speed, reliability, and performance

Develop internal machine learning frameworks and abstractions to facilitate common tasks such as training/testing, feature creation/storage/reuse, and deployment. These abstractions are used by both machine learning engineers and data scientists/BI analysts.
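
For illustration only: a minimal sketch of the kind of thin training/persistence abstraction such an internal framework might expose; the class and method names here are hypothetical, not an existing internal API.

# Illustrative sketch: a shared wrapper for training, evaluation, and persistence.
from dataclasses import dataclass

import joblib
from sklearn.base import BaseEstimator
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


@dataclass
class ModelRunner:
    estimator: BaseEstimator

    def train_and_evaluate(self, X, y):
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        self.estimator.fit(X_train, y_train)
        return roc_auc_score(y_test, self.estimator.predict_proba(X_test)[:, 1])

    def save(self, path):
        joblib.dump(self.estimator, path)

    @staticmethod
    def load(path):
        return ModelRunner(joblib.load(path))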

Your Day-to-Day:

Design machine learning systems

Leverage impactful and innovative machine learning technology to improve existing products and to create new data products that will drive marketing, subscription, and advertising initiatives.

Use your mathematical, statistical, and programming knowledge to explore, analyze, and model data in order to forecast traffic and consumption across our digital properties and sales of our subscription products.
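
For illustration only: a minimal sketch of a traffic forecast using Holt-Winters exponential smoothing from statsmodels on a synthetic daily-visits series; the series, seasonality, and horizon are hypothetical.

# Illustrative sketch: forecasting daily visits with weekly seasonality.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
days = np.arange(365)
visits = 100_000 + 50 * days + 15_000 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5_000, 365)
series = pd.Series(visits, index=pd.date_range("2021-01-01", periods=365, freq="D"))

model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=7).fit()
forecast = model.forecast(28)  # next four weeks of daily visits
print(forecast.head())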

Read white papers, synthesize information, and apply theoretical ML concepts to real-world business problems across our digital properties.

Document ML concepts and principles, and train and mentor members of the Data and BI teams on how they can be applied to solve day-to-day business needs.

Key Projects:

Work with other members of engineering, the BI teams, and business units on initiatives that improve the quality and effectiveness of our data, and implement the resulting changes.

Build a user-level data management platform that serves as the central data hub, allowing machine learning software components to "plug in" to segment users, predict future purchase behavior, identify users at risk of churning, and personalize their online experiences.
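
For illustration only: a minimal sketch of what a "plug-in" scoring contract for such a hub could look like; the protocol, component names, and heuristics are hypothetical placeholders standing in for trained models.

# Illustrative sketch: ML components plug into the hub via a shared scoring interface.
from typing import Dict, Protocol

import pandas as pd


class UserScorer(Protocol):
    def score(self, users: pd.DataFrame) -> pd.Series:
        """Return one score per user row (e.g., churn risk or purchase propensity)."""
        ...


class ChurnRiskScorer:
    def score(self, users: pd.DataFrame) -> pd.Series:
        # Placeholder heuristic standing in for a trained model.
        return (users["days_since_last_visit"] / 30).clip(0, 1)


class PurchasePropensityScorer:
    def score(self, users: pd.DataFrame) -> pd.Series:
        return users["sessions_last_30d"] / users["sessions_last_30d"].max()


registry: Dict[str, UserScorer] = {
    "churn_risk": ChurnRiskScorer(),
    "purchase_propensity": PurchasePropensityScorer(),
}

users = pd.DataFrame({"days_since_last_visit": [2, 25, 60], "sessions_last_30d": [14, 3, 0]})
print(pd.DataFrame({name: scorer.score(users) for name, scorer in registry.items()}))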

Use look-alike modeling on Adobe Analytics clickstream behavioral data to build larger audience segments from smaller seed segments, creating targeted "reach" for advertisers.
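
For illustration only: one common look-alike approach trains a classifier to separate a small seed segment from the general audience and keeps the highest-scoring users from the wider pool; the data, model choice, and cutoff below are hypothetical.

# Illustrative sketch: scoring a wider pool against a small seed segment.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
seed_segment = rng.normal(loc=1.0, size=(500, 10))     # behavioral features of the seed users
general_pool = rng.normal(loc=0.0, size=(20_000, 10))  # wider audience

X = np.vstack([seed_segment, general_pool])
y = np.concatenate([np.ones(len(seed_segment)), np.zeros(len(general_pool))])
model = GradientBoostingClassifier().fit(X, y)

# Keep the top 5% most seed-like users from the wider pool as look-alikes.
scores = model.predict_proba(general_pool)[:, 1]
look_alikes = np.where(scores >= np.quantile(scores, 0.95))[0]
print(f"look-alike users: {len(look_alikes)}")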

Work on a device/user graph that accounts for multi-platform access and user behavior across visits.
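
For illustration only: a minimal sketch of cross-device identity resolution, treating device and account IDs as graph nodes and shared observations as edges, then reading connected components as one user; the IDs are made up.

# Illustrative sketch: connected components as cross-platform user identities.
import networkx as nx

# (device_id, account_id) pairs observed across web, mobile, and connected-TV apps.
observations = [
    ("web_cookie_123", "account_42"),
    ("ios_device_777", "account_42"),
    ("roku_device_555", "account_42"),
    ("web_cookie_999", "account_77"),
]

graph = nx.Graph()
graph.add_edges_from(observations)

for identity in nx.connected_components(graph):
    print(sorted(identity))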

Incorporate anomaly detection across KPIs, metrics, and datasets to proactively find data issues, data load problems, and failures with minimal human supervision.
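
For illustration only: a minimal sketch of a rolling z-score check that flags days where a KPI deviates sharply from its recent history (for example, a partial data load); the series, window, and threshold are hypothetical.

# Illustrative sketch: flag KPI values far outside their trailing 14-day baseline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
kpi = pd.Series(
    rng.normal(1_000_000, 40_000, 60),
    index=pd.date_range("2021-08-01", periods=60, freq="D"),
)
kpi.iloc[45] = 150_000  # simulate a partial data load

baseline_mean = kpi.rolling(14).mean().shift(1)  # exclude the current day from its own baseline
baseline_std = kpi.rolling(14).std().shift(1)
z_scores = (kpi - baseline_mean) / baseline_std

print(kpi[z_scores.abs() > 4])  # days flagged as anomalous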

Qualifications:

You Have:

MA/MS in Statistics/Data Science/Computer Science or related disciplines with specialization in data mining or machine learning techniques

Knowledge of both supervised and unsupervised machine learning techniques

Full-stack experience in data collection, aggregation, analysis, visualization, productionization, and monitoring of data science products

Proficiency in big data modeling work: BigQuery, HBase

The ability to write robust code in Python and associated machine learning packages is a must.

Experience using Jupyter Notebooks

Communicate concisely and persuasively with engineers and product managers

Strong detail-orientation with a penchant for data accuracy and good grammar

Familiarity with version control systems (GitHub, Bitbucket)

Must successfully pass a background check

You might also have:

Familiarity with Hadoop pipelines using Spark, Kafka.

Familiarity with graph databases (Neo4j)

Experience using project management tools like those from Atlassian (JIRA, Confluence)

Experience using Google Cloud Platform (BigQuery, ML Engine, and APIs)

Can wrangle data using SQL, Pandas, and MongoDB

Background in NLP or text mining techniques is a plus

Background in deep learning and TensorFlow is a plus

Jobcode: Reference SBJ-rjq161-3-17-154-171-42 in your application.