Job details
Experience with designing and building metadata-driven data pipelines using PySpark, Databricks, or other ETL frameworks.
Ability to design and implement data services using configurations and metadata.
Working experience with Gradle, Git, GitHub, GitLab, etc. for continuous integration and continuous delivery infrastructure.
Experience:
12+ years of software engineering/development experience using Python and PySpark.
10+ years of large-scale enterprise software development.
4+ years of experience with Docker or cloud-based applications.
4+ years with data processing frameworks, with a focus on metadata-driven services development using PySpark.
3+ years with Agile methodologies (Scrum, Lean, SAFe, etc.).
8+ years of experience integrating with backend services such as JMS, J2C, ORM frameworks (Hibernate, JPA, JDO, etc.), and JDBC.
8+ years leading development teams or mentoring junior developers.
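To illustrate what "metadata-driven" means in the pipeline requirement above, here is a minimal, hypothetical sketch in plain Python: pipeline steps are described by configuration rather than hard-coded, and a small runner interprets them. The config schema and step names are invented for illustration; in an actual PySpark job each step would map to a DataFrame transformation (e.g. filter, withColumnRenamed) rather than list operations.

```python
# Hypothetical metadata-driven pipeline sketch (not a real framework).
# Each step is declared as data; the runner dispatches on the "op" field.

PIPELINE_CONFIG = [
    {"op": "filter", "column": "status", "equals": "active"},
    {"op": "rename", "from": "amt", "to": "amount"},
]

def apply_step(rows, step):
    """Apply one configured step to a list of dict records."""
    if step["op"] == "filter":
        # Keep only rows where the configured column matches the value.
        return [r for r in rows if r.get(step["column"]) == step["equals"]]
    if step["op"] == "rename":
        # Rename a key in every record.
        return [
            {(step["to"] if k == step["from"] else k): v for k, v in r.items()}
            for r in rows
        ]
    raise ValueError(f"unknown op: {step['op']}")

def run_pipeline(rows, config):
    """Fold the configured steps over the input records in order."""
    for step in config:
        rows = apply_step(rows, step)
    return rows

records = [
    {"status": "active", "amt": 10},
    {"status": "closed", "amt": 5},
]
result = run_pipeline(records, PIPELINE_CONFIG)
# result == [{"status": "active", "amount": 10}]
```

The point of the pattern: adding or reordering pipeline behavior becomes a configuration change, not a code change, which is what the posting's "data services using configurations and metadata" line refers to.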