We are seeking a Lead Data Engineer who will partner with business, analytics, and engineering teams to design, build, and maintain our growing data warehousing and analytical reporting environment. You will build easy-to-use data structures that facilitate reporting and the monitoring of key performance indicators. Collaborating across disciplines, you will identify internal and external data sources to design and implement table structures, data products, ETL strategies, automation frameworks, and scalable data pipelines.
• Partner with technical and non-technical colleagues to understand data and reporting requirements.
• Work with Engineering teams to collect required data from internal and external systems
• Design table structures and ETL strategies to build performant data solutions that are reliable and scalable in a fast-growing data ecosystem
• Develop Data Quality checks for source and target data sets. Develop UAT plans and conduct QA.
• Develop and maintain ETL routines using orchestration tools such as Airflow, Luigi, and Jenkins
• Document and publish metadata and table designs to facilitate data adoption
• Perform ad hoc analysis as necessary
• Perform SQL and ETL tuning as necessary
• Develop and maintain Dashboards/reports using Tableau and Looker
• Coach and mentor team members to improve their designs and ETL processes
• Create and conduct project and architecture design reviews
• Create POCs when necessary to test new approaches
• Design and build modern data management solutions
• Enforce common data design patterns to increase code maintainability
• Conduct peer code reviews and provide constructive feedback
• Partner with team leads to identify, design and implement internal process improvements
• Automate manual processes, optimize data delivery, understand when to re-design architecture for greater scalability
• 5+ years of relevant professional experience
• 4+ years' work experience implementing and reporting on business key performance indicators in data warehousing environments, with a strong understanding of data modeling principles, including dimensional modeling and data normalization
• 3+ years' experience using analytic SQL and working with traditional relational databases and/or distributed systems such as Hadoop/Hive, BigQuery, or Redshift
• 2+ years of experience with programming languages (e.g., Python, R, Bash)
• 2+ years of experience with workflow management tools (Airflow, Oozie, Azkaban, UC4)
• Expert-level understanding of SQL engines and the ability to conduct advanced performance tuning
• Experience with Hadoop (or similar) Ecosystem (MapReduce, Yarn, HDFS, Hive, Spark, Presto, Pig, HBase)
• Familiarity with data exploration and data visualization tools such as Tableau and Chartio
• Ability to think strategically, analyze and interpret market and consumer information
• Strong communication skills, both written and verbal
• Excellent conceptual and analytical reasoning competencies
• Comfortable working in a fast-paced and highly collaborative environment; a great team player who embraces collaboration yet also works well individually while supporting multiple projects in parallel
• A degree in an analytical field such as economics, mathematics, or computer science is desired
Jobcode: Reference SBJ-rz5077-3-215-79-116-42 in your application.