This role is currently open for a client that delivers business excellence using cloud technologies. Details of the role are below.
· Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures
· Must have strong big-data core knowledge and experience programming with Spark (Python/Scala)
· 5+ years of relevant experience working with real-time data streaming using Kafka
· Experience in solving Streaming
This role is currently open for a client that delivers business excellence using cloud technologies. Details of the role are below.
· Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures
· Must have strong big-data core knowledge and experience programming with Spark (Python/Scala)
· 3+ years of relevant experience working with real-time data streaming using Kafka (see the sketch after this list)
· Experience in solving Streaming
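
For illustration, a minimal sketch of the kind of Spark-plus-Kafka streaming pipeline these postings describe, using Spark Structured Streaming in Python. The broker address, topic name, and console sink are assumptions for demonstration, and the job presumes the spark-sql-kafka connector package is available:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    # Minimal sketch: consume a Kafka topic with Spark Structured Streaming.
    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
              .option("subscribe", "events")                        # assumed topic
              .load())

    # Kafka keys and values arrive as binary; cast to strings before parsing.
    parsed = events.select(col("key").cast("string"),
                           col("value").cast("string"))

    # Console sink for demonstration only; a real job writes to a durable sink.
    query = (parsed.writeStream
             .format("console")
             .outputMode("append")
             .start())
    query.awaitTermination()
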
Responsibilities:
· Write test data scripts based on ETL mapping artifacts (see the reconciliation sketch after this list)
· Execute data scripts and perform detailed analysis of the results
· Create test strategies and test cases for applications that use ETL components
· Perform data mining and detailed data analysis on data warehousing systems
· Execute formal test plans to ensure the delivery of data-related projects
· Provide input to and support big data testing initiatives
· Define and track quality assurance metrics such as defects,
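
As a hedged illustration of the first responsibility above, the sketch below reconciles a row count and a column total between a source and a target table after an ETL load; the table names and the amount column are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import sum as spark_sum

    spark = SparkSession.builder.appName("etl-reconciliation-sketch").getOrCreate()

    # Hypothetical source and target tables on either side of an ETL mapping.
    source = spark.table("staging.orders")
    target = spark.table("warehouse.orders")

    # Row-count reconciliation: a mismatch flags dropped or duplicated rows.
    assert source.count() == target.count(), "row counts diverge"

    # Column-total reconciliation: catches silent truncation or casting defects.
    src_total = source.agg(spark_sum("amount")).first()[0]
    tgt_total = target.agg(spark_sum("amount")).first()[0]
    assert src_total == tgt_total, "amount totals diverge"
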
· 6+ years of technology experience
· Spark Streaming experience is mandatory
· Technology stack: Spark Streaming; Kafka; Spark, Flink; AWS (good to have); Java/Python/Scala; microservices architecture
· Exposure to API management
· Architectural experience with Spark, AWS, and big data platforms (Hadoop: Cloudera, MapR, Hortonworks)
· Strong knowledge of optimizing workloads developed using Spark SQL/DataFrame
· Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python, PySpark
· 8+ years of technology experience
· Spark Streaming experience is mandatory
· Technology stack: Spark Streaming; Kafka; Spark, Flink; AWS (good to have); Java/Python/Scala; microservices architecture
· Exposure to API management
· Architectural experience with Spark, AWS, and big data platforms (Hadoop: Cloudera, MapR, Hortonworks)
· Strong knowledge of optimizing workloads developed using Spark SQL/DataFrame (see the sketch after this list)
· Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python, PySpark
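
To make the Spark SQL/DataFrame optimization requirement concrete, here is a small sketch of one common tuning technique: broadcasting the small side of a join so Spark avoids shuffling the large table. The parquet paths and join key are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("df-optimization-sketch").getOrCreate()

    # Hypothetical inputs: a large fact table and a small dimension table.
    facts = spark.read.parquet("/data/facts")
    dims = spark.read.parquet("/data/dims")

    # Broadcasting the small side replaces a shuffle join with a map-side join.
    joined = facts.join(broadcast(dims), "dim_id")

    # explain() prints the physical plan; a BroadcastHashJoin confirms the hint took effect.
    joined.explain()
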
12+ years of experience in software development, with 5-6 years of experience as an AWS Data Architect. Experience working in large teams and on complex projects. Prior BI, analytics, and ETL experience. Hands-on experience with modern analytics architecture and tools. Data modelling and data mart / data warehouse design.
Key Roles and Responsibilities: Designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark
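
As a sketch of the multi-source ingestion pipelines this posting mentions, the snippet below pulls one JDBC source and one S3 parquet source and lands both, partitioned by date, in a warehouse-style layout. All connection details, paths, and column names are hypothetical, and a JDBC driver for the source database is assumed to be on the classpath:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

    # Hypothetical JDBC source (e.g., an operational PostgreSQL database).
    orders = (spark.read.format("jdbc")
              .option("url", "jdbc:postgresql://db-host:5432/sales")
              .option("dbtable", "public.orders")
              .option("user", "etl_user")
              .option("password", "***")
              .load())

    # Hypothetical file source already landed on S3.
    clicks = spark.read.parquet("s3://example-bucket/raw/clicks/")

    # Partition by date columns (assumed to exist) for downstream data-mart queries.
    orders.write.mode("append").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/")
    clicks.write.mode("append").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/clicks/")
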
Job Description: 6+ years of relevant experience in developing big data processing tasks using PySpark/Glue/ADF/Hadoop and other cloud-native services. Strong knowledge of optimizing workloads developed using Spark SQL/DataFrame. Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python, PySpark. Experience in at least one popular programming language (Python/Scala/Java). Strong analytic skills related to working with structured, semi-structured, and unstructured data.
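
For the PySpark/Glue requirement, a minimal AWS Glue job skeleton in Python; it only runs inside the Glue runtime, and the catalog database, table, and output path are placeholders:

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue boilerplate: resolve job arguments and build contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Hypothetical catalog source, e.g. registered by a Glue crawler.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="events")

    # Convert to a DataFrame for Spark SQL-style transforms, then write out.
    df = dyf.toDF().dropDuplicates()
    df.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")

    job.commit()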