
    9 jobs found for Big data in Maharashtra

      • pune, maharashtra
      • contract
      5+ years of relevant experience with Spark, ETL, and SQL. Any one of AWS/Azure/GCP, preferably AWS. Optional: Spark Streaming.
      Role & responsibilities:
        • Experience in developing and optimizing ETL pipelines and big data pipelines.
        • Must have strong big-data core knowledge and experience in programming using Spark (Python/Scala).
        • Familiarity with DevOps frameworks: Git/Bitbucket, Jenkins, etc.
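      As a purely illustrative sketch of the kind of Spark ETL pipeline this posting describes (the bucket, paths, and column names below are hypothetical, not taken from the ad):

      # Minimal PySpark ETL sketch: extract raw CSV, transform, load as Parquet.
      # All paths and column names are hypothetical placeholders.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

      # Extract: read raw events (an explicit schema is preferable in production).
      raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/events.csv")

      # Transform: drop rows missing key fields and aggregate spend per user.
      cleaned = (raw.dropna(subset=["user_id", "amount"])
                    .withColumn("amount", F.col("amount").cast("double")))
      totals = cleaned.groupBy("user_id").agg(F.sum("amount").alias("total_amount"))

      # Load: write the curated result as Parquet.
      totals.write.mode("overwrite").parquet("s3a://example-bucket/curated/user_totals")

      spark.stop()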
      • pune, maharashtra
      • contract
       This role is currently open for a client that delivers business excellence using cloud technologies. Details of the role are given below.
        • Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
        • Must have strong big-data core knowledge and experience in programming using Spark (Python/Scala).
        • 3+ years of strong, relevant experience working with real-time data streaming using Kafka.
        • Experience in solving Streaming
      • pune, maharashtra
      • contract
       This role is currently open for a client that delivers business excellence using cloud technologies. Details of the role are given below.
        • Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
        • Must have strong big-data core knowledge and experience in programming using Spark (Python/Scala).
        • 5+ years of strong, relevant experience working with real-time data streaming using Kafka.
        • Experience in solving Streaming
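      For illustration only, a minimal PySpark Structured Streaming sketch of the Kafka-based real-time processing the two roles above call for. The broker address, topic name, and payload schema are assumptions, and the job needs the spark-sql-kafka connector package on its classpath:

      # Hypothetical sketch: consume JSON events from Kafka with Structured Streaming.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F
      from pyspark.sql.types import StructType, StructField, StringType, DoubleType

      spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

      # Assumed payload schema, invented for this example.
      schema = StructType([
          StructField("event_id", StringType()),
          StructField("amount", DoubleType()),
      ])

      # Source: subscribe to a Kafka topic (broker and topic are placeholders).
      stream = (spark.readStream
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "events")
                .load())

      # Kafka delivers the payload as bytes; decode and parse the JSON value.
      events = (stream
                .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
                .select("e.*"))

      # Sink: print micro-batches to the console; a real job would use a durable sink.
      query = (events.writeStream
               .outputMode("append")
               .format("console")
               .option("checkpointLocation", "/tmp/checkpoints/events")
               .start())
      query.awaitTermination()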
      • pune, maharashtra
      • contract
      Responsibilities:
        • Write test data scripts based on ETL mapping artifacts
        • Execute data scripts and detailed analysis on the scripts
        • Create strategies and test cases for applications that use ETL components
        • Data mining and detailed data analysis on data warehousing systems
        • Execute formal test plans to ensure the delivery of data-related projects
        • Provide input and support for the big data testing initiative
        • Define and track quality assurance metrics such as defects,
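      As a hypothetical illustration of the kind of ETL test script this role describes (the table paths and the order_id column are invented for the example, not from the posting):

      # Sketch of an ETL reconciliation check: compare source and target row
      # counts, and assert the business key is never null after the load.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("etl-test-sketch").getOrCreate()

      source = spark.read.parquet("/data/staging/orders")    # hypothetical path
      target = spark.read.parquet("/data/warehouse/orders")  # hypothetical path

      # Reconciliation: the load should preserve the row count.
      assert source.count() == target.count(), "row count mismatch after load"

      # Completeness: the business key must not be null in the target.
      null_keys = target.filter(F.col("order_id").isNull()).count()
      assert null_keys == 0, f"{null_keys} rows with null order_id"

      print("ETL reconciliation checks passed")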
      • pune, maharashtra
      • contract
        • 6+ years of technology experience
        • Spark Streaming experience is mandatory
        • Technology stack: Spark Streaming, Kafka, Spark, Flink, AWS (good to have), Java/Python/Scala, microservices architecture, exposure to API management
        • Architectural experience with Spark, AWS, and big data (Hadoop: Cloudera, MapR, Hortonworks)
        • Strong knowledge of optimizing workloads developed using Spark SQL/DataFrame
        • Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python,
      • pune, maharashtra
      • contract
        • 8+ years of technology experience
        • Spark Streaming experience is mandatory
        • Technology stack: Spark Streaming, Kafka, Spark, Flink, AWS (good to have), Java/Python/Scala, microservices architecture, exposure to API management
        • Architectural experience with Spark, AWS, and big data (Hadoop: Cloudera, MapR, Hortonworks)
        • Strong knowledge of optimizing workloads developed using Spark SQL/DataFrame
        • Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python, PyS
      • pune, maharashtra
      • contract
      12+ years of experience in software development, with 5-6 years as an AWS data architect. Has worked in large teams and on complex projects, with prior BI, analytics, and ETL experience. Hands-on experience with modern analytics architecture and tools; data modelling and data mart / data warehouse design. Key roles and responsibilities: designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark
      • pune, maharashtra
      • contract
      Job description: 6+ years of relevant experience in developing big data processing tasks using PySpark/Glue/ADF/Hadoop and other cloud-native services. Strong knowledge of optimizing workloads developed using Spark SQL/DataFrame. Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python, PySpark. Experience in at least one popular programming language (Python/Scala/Java). Strong analytic skills related to working with structured, semi structured a
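      As one small, hypothetical example of the Spark SQL/DataFrame workload optimization several of these postings mention (paths and column names are placeholders): broadcasting a small dimension table avoids the shuffle a sort-merge join would otherwise require.

      # Sketch: replace a shuffle-heavy join with a broadcast hash join.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

      facts = spark.read.parquet("/data/facts/transactions")  # large fact table
      dims = spark.read.parquet("/data/dims/merchants")       # small lookup table

      # F.broadcast hints Spark to ship the small table to every executor,
      # so the join runs as a broadcast hash join without shuffling the facts.
      joined = facts.join(F.broadcast(dims), on="merchant_id", how="left")

      # The physical plan should now show BroadcastHashJoin.
      joined.explain()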
      • mumbai, maharashtra
      • permanent
      Job purpose: Lead and assist cross-functional and cross-regional teams to provide data analysis, monitoring, and forecasting; create the logic for and implement strategies; provide requirements to data scientists and technology teams on attribute, model, and platform requirements; and communicate with global stakeholders to ensure we deliver the best possible customer experience.
      Accountabilities:
        • Work cross-functionally with teams to analyze usage a
