This role is currently open for a client that delivers business excellence using cloud technologies. Details of the role are below.
·Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
·Must have strong core big-data knowledge and programming experience with Spark (Python/Scala)
·3+ years of strong, relevant experience working with real-time data streaming using Kafka
·Experience in solving streaming use cases using Spark, Spark SQL, KSQL, Kafka, and NiFi
·Experience with AWS is a plus
·Build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
·Knowledge of Databricks Cloud is a plus.
·Proficiency with data processing: Hadoop, Hive, Spark, Scala, Python, PySpark.
·Strong analytical skills related to working with structured, semi-structured, and unstructured datasets.
·Good knowledge of any RDBMS/NoSQL database with strong SQL writing skills
·Familiarity with DevOps frameworks
·Independent thinker, willing to engage, challenge, and learn new technologies.
·Understanding of the benefits of data warehousing, data architecture, data quality processes, data warehouse design and implementation, table structures, fact and dimension tables, logical and physical database design, data modelling, reporting process metadata, and ETL processes.