I. Job Responsibilities:
Assembling large, complex sets of data that meet functional and non-functional business requirements
Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition
Identifying, designing, and implementing internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
Working closely with all business units to develop a strategy for long-term data platform architecture
II. Job Requirements:
University degree in Computer Science, Engineering, or a related technical field
2 years of experience with Flink/Spark and Databricks
2 years of experience with Azure (DP-200, DP-201, or DP-203 certification is a plus)
Passionate about analytics and machine learning technologies and their applications, and eager to learn
Good English communication skills
Knowledge of Big Data technologies, such as Spark, Hadoop/MapReduce
Knowledge of Azure services such as Storage Accounts, Azure Databricks, etc.
Good knowledge of SQL and excellent coding skills
Working knowledge of ML/DL frameworks such as Keras, TensorFlow, scikit-learn (Python), and R
Self-development, communication, and problem-solving skills
Open-minded, flexible, a team player, able to multi-task, and interested in learning new things