Job Details

Information Technology - Data Sciences & Analytics Engineer (Data Engineering Track)

Job Description

We have multiple junior and senior data engineering positions available. The data engineer is responsible for designing and developing robust, scalable solutions for collecting and analysing large data sets in the cloud and in on-premises data centres. The role also involves creating and maintaining data pipelines, data marts and business intelligence dashboards used across Singapore Airlines.

Responsibilities:

  • Understand business processes, applications and how data is stored and gathered.
  • Develop and manage streaming data pipelines at enterprise scale.
  • Build expertise on the data. Own data quality for various data flows.
  • Design, build and manage data marts to satisfy our growing data needs.
  • Support data marts to provide intuitive analytics for internal customers.
  • Design and build new frameworks and automation tools to enable teams to consume and understand data faster.
  • Use your expert coding skills in languages such as SQL, Python and Java to support data scientists.
  • Interface with internal customers to understand data needs.
  • Collaborate with multiple teams and own the solution end-to-end.
  • Maintain infrastructure for our data pipelines.
  • Any other ad-hoc duties.
  • This is an individual contributor role.

Required Skills

  • BS degree in Computer Science or a related technical field. MS or PhD degree is a plus.
  • More than 2 years of advanced Python or Java development experience is required. Scala or Kotlin experience is a plus.
  • More than 2 years of SQL (such as PostgreSQL, Oracle, AWS Redshift, or Hive) experience is required. NoSQL experience is a plus.
  • More than 2 years working with Linux OS. Knowledge of networks and cybersecurity is a plus.
  • Experience with modern MapReduce/workflow distributed systems, especially Apache Spark. Experience with Apache Kafka is a plus.
  • Experience working with infrastructure-as-code systems like AWS CloudFormation.
  • DevOps experience is a plus.
  • Experience in custom ETL pipeline design, implementation and maintenance.
  • Experience working with visualization tools like Tableau or Apache Superset.
  • Ability to analyse data to identify deliverables, gaps and inconsistencies.
  • Ability to manage and communicate data mart plans to internal customers.