An exciting opportunity to join the team within a growing ICT Services company with a global portfolio, as a Cloud Data Lead.
Responsibilities/Accountabilities:
Prepare a KMAP of the team and create an upskilling/training plan
Review and update project-specific documentation
Plan and guide the team in managing day-to-day activities
Assist in technology decision making and propose the best solution to the client, drawing on our capabilities/solutions, during deal solution conversations
Contribute to developing platforms that make data across applications/application deployments available for AI/ML-driven feature prototyping, proofs of concept, and general availability
Assist in building data pipelines for analysis, refining, automating, and scaling them as needed for the use case at hand
Work on various aspects of the AI/ML ecosystem: data modelling, data and ML pipelines, data documentation, scaling, deployment, monitoring, maintenance, etc.
Work closely with Data Scientists and MLOps Engineers to design scalable system and model architectures that enable real-time ML/AI services
Develop, program, and maintain applications using the open-source Apache Spark framework
Work with different aspects of the Spark ecosystem, including Spark SQL, DataFrames, Datasets, and streaming
Familiar with big data processing tools and techniques
Create Spark jobs for data transformation and aggregation (see the illustrative sketch after this list)
Produce unit tests for Spark transformations and helper methods
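For illustration only, a minimal PySpark sketch of the kind of transformation, aggregation, and unit-test-style check described above. The table and column names (region, quantity, unit_price) are hypothetical placeholders, not details from this role.

    # Minimal sketch, assuming PySpark is installed; all names below are hypothetical.
    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    def add_order_value(df: DataFrame) -> DataFrame:
        # Transformation helper: derive a per-row value column.
        return df.withColumn("order_value", F.col("quantity") * F.col("unit_price"))

    def total_value_by_region(df: DataFrame) -> DataFrame:
        # Aggregation: total order value per region.
        return df.groupBy("region").agg(F.sum("order_value").alias("total_value"))

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("example-spark-job").getOrCreate()
        # Hypothetical input; a real job would read from Parquet, a table, etc.
        orders = spark.createDataFrame(
            [("EMEA", 2, 10.0), ("EMEA", 1, 5.0), ("APAC", 3, 4.0)],
            ["region", "quantity", "unit_price"],
        )
        result = total_value_by_region(add_order_value(orders))
        result.show()
        # Unit-test-style check on the helpers; in practice this would live in a
        # pytest suite with a shared SparkSession fixture.
        totals = {r["region"]: r["total_value"] for r in result.collect()}
        assert totals == {"EMEA": 25.0, "APAC": 12.0}
        spark.stop()

Keeping transformations as small, pure functions over DataFrames is what makes them straightforward to unit test.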
Required Technical Skills and Experience:
10+ years of experience, which can include IT infrastructure, Cloud, and Data areas
Minimum of 3 years' experience managing a team with a mix of data skills
Strong analytic skills related to working with unstructured datasets
A successful history of manipulating, processing and extracting value from large, disconnected datasets
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores (a streaming sketch follows this list)
Agile/Scrum methodology experience is required
Expertise in designing and developing applications using Big Data and Cloud technologies.
Hands-on experience with Spark and Hadoop ecosystem components
Hands-on experience with at least one cloud platform (AWS/Azure)
Good knowledge of shell scripting and Java/Python
Passionate about exploring new technologies.
Good Communication Skills
5+ years of experience with the architecture of distributed data systems, specifically within one of the following:
o Data Engineering technologies (e.g. Spark, Hadoop, Kafka)
o Data Warehousing (e.g. SQL, OLTP/OLAP/DSS)
o Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO)
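As a rough illustration of the stream-processing side (Kafka plus Spark Structured Streaming), a minimal sketch follows. The broker address, topic name, and checkpoint path are hypothetical placeholders, and the spark-sql-kafka connector package is assumed to be on the classpath.

    # Minimal sketch, assuming PySpark and the spark-sql-kafka connector are available;
    # the broker, topic, and checkpoint path below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-streaming-job").getOrCreate()

    # Read an event stream from a Kafka topic.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                      # hypothetical topic
        .load()
    )

    # Kafka delivers key/value as binary; cast the key and count events per key
    # in 5-minute windows as a simple stream-processing example.
    counts = (
        events.selectExpr("CAST(key AS STRING) AS key", "timestamp")
        .withWatermark("timestamp", "10 minutes")
        .groupBy(F.window("timestamp", "5 minutes"), "key")
        .count()
    )

    # Write incremental results; a real pipeline would target a scalable store
    # (object storage, a warehouse table, etc.) rather than the console.
    query = (
        counts.writeStream.outputMode("update")
        .format("console")
        .option("checkpointLocation", "/tmp/checkpoints/example")  # hypothetical path
        .start()
    )
    query.awaitTermination()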