
    24 jobs found for Machine Learning

      • mandvi (mumbai)
      • permanent
      Qualifications & Specifications
      • Master's degree in Engineering, Computer Science, Statistics or equivalent
      • 2+ years of experience with machine learning algorithms and libraries (a minimal training sketch follows this listing)
      • Understanding of data structures, data modeling and software architecture
      • Deep knowledge of math, probability, statistics and algorithms
      • Ability to write robust code in Python, Java and R
      • Experience with AWS SageMaker deployment
      • Experience with machine learning platforms such as Microsoft Azure, Google Cloud, IBM Watson and Amazon
      • Proven leadership skills; ability to lead, working collaboratively and creatively with business stakeholders
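A minimal sketch of the kind of Python/scikit-learn workflow this listing describes (training and evaluating a model); the synthetic data, model choice and parameters are illustrative placeholders, not taken from the posting.

```python
# Illustrative only: train and evaluate a classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real feature/label data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```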
      • no data
      • contract
      • 1 year
      Required Skills, Knowledge, Relevant Work Experience
      • 3-5 years of excellent experience in the areas of computer vision & machine learning
      • Deep knowledge of and experience with:
        o Image processing, computer vision, machine learning
        o Stereo camera data processing, 2D and 3D transformations
        o Mono & stereo camera calibration & imaging
        o Object detection, tracking and recognition; machine learning techniques, analysis, and algorithms
      • Proficiency in:
        o Python and MATLAB (C++ would be an added advantage)
        o Image processing and computer vision libraries such as OpenCV (a small OpenCV sketch follows this listing) or the MATLAB image processing and computer vision toolboxes
        o Programming, with strong experience in data structures and algorithms
      • Common data science toolkits
      • Good mathematical background
      • Excellent verbal and written communication and presentation skills
      • Comfortable working with diverse, multi-disciplinary teams across multiple time zones

      Desired Skills, Knowledge & Relevant Work Experience
      • Deploying computer vision and machine learning algorithms on edge computing hardware
      • Embedded software development fundamentals
      • Experience in robotics
      • Experience working with AWS tools

      Education (Specific Branch of Study, Certificate)
      • Master's / Bachelor's in Electronics / Computer Science / Computer Vision or similar
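As a rough illustration of the classical side of the computer vision work above, a minimal OpenCV detection sketch; the input file name is hypothetical, and a production system would more likely use a deep-learning detector.

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("frame.jpg")                  # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # detection runs on grayscale
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in boxes:                     # draw each detection
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```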
      • bengaluru / bangalore
      • permanent
      Required Skill Set
      • Industry experience of 5-8 years solving problems using data science, statistical models, machine learning models and deep learning models
      • Knowledge of coding in Python and R is required
      • Experience developing end-to-end data science pipelines involving problem identification and definition, data collection, annotation, modelling, and monitoring
      • Ability to communicate results to higher management/clients
      • Strong understanding of the applicability and mathematics involved in machine learning / deep learning algorithms
      • Proven ability to work with different datasets from different domains in a client-facing environment

      Desired Skill Set
      • Full-time Bachelor's/Master's degree in Mathematics, Statistics, Data Science, Computer Science or another quantitative field
      • Knowledge of coding in PySpark and PyTorch (a minimal PyTorch sketch follows this listing)
      • Industry experience working with at least one type of unstructured data, such as text, images, PDFs or audio
      • Knowledge of Azure/AWS/GCP
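A minimal PyTorch training loop of the kind the PyTorch requirement implies; the tensors, layer sizes and epoch count are arbitrary placeholders.

```python
import torch
import torch.nn as nn

X = torch.randn(256, 10)          # synthetic features
y = torch.randint(0, 2, (256,))   # synthetic binary labels

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass + loss
    loss.backward()               # backpropagate
    opt.step()                    # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```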
      • chennai, tamil nadu
      • permanent
      • 12 months
      Project X: Data Engineer (Machine Learning)

      Project Requirements:
      • Implement custom ETL/ELT processes in distributed computing environments and large-scale data warehouses (an Airflow-style pipeline sketch follows this listing)
      • Design and implement data pipelines capable of modeling data from many sources and storing it so that our various engineering groups can properly ingest the data
      • Help design, implement and maintain web scrapers that run 24/7 and process millions of domains per month
      • Create machine learning data pipelines and turn AI research into production code that powers our search engine
      • Work on server applications and APIs used by our Data team
      • Handle the challenges that come with managing terabytes of data
      • Help with the automation and monitoring of our systems

      Technology Requirements:
      • 3+ years of experience in data engineering or as a software engineer with a data-centric view
      • 3+ years of experience in Python or a similar language (plus a solid understanding of object-oriented programming)
      • 3+ years of experience with data warehouses, with the ability to debate when the different ones are optimal
      • 2+ years of experience working with machine learning algorithms and turning notebooks into production code
      • Bachelor's degree or higher in a computational field

      Additional Requirements (Nice to Haves):
      • Experience in fintech or financial services
      • Work experience with cloud platforms such as Google Cloud

      Technologies: PyTorch, Keras, TensorFlow, Python
      Number of positions: Two
      Work location and mode of work: Anywhere in India, WFH (remote)
      Work timing (time zone): 12:30 PM - 9:30 PM IST
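A sketch of the Airflow-style ETL pipeline this listing hints at, assuming Airflow 2.x; the DAG id, schedule and task bodies are hypothetical stubs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull raw records from a source system

def transform():
    ...  # clean and reshape the extracted data

def load():
    ...  # write the result to the warehouse

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run extract, then transform, then load
```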
      • bengaluru / bangalore
      • permanent
      Qualifications & Specifications
      • Master's degree in Computer Science, Engineering, Statistics, Economics or equivalent, with 7 years of relevant work experience
      • Proven ability to use modeling, optimization, machine learning or text classification algorithms
      • Background in database design, ETL processes (SSIS) and SQL analysis (SSAS) to assist the Analytics team in formalizing their internal data flows and related reporting processes
      • Data virtualization, analytics, and integration techniques
      • Technology prowess in R/Python/SAS/SQL/R Shiny and other self-service tools (KNIME, Dataiku, etc.)
      • Experience in AI/machine learning, real-time analytics and big data platforms
      • Cloud platforms: AWS, Google or Azure
      • Ability to lead, working collaboratively and creatively with business stakeholders
      • peth (pune)
      • permanent
      Role & Responsibilities
      ● Opportunity to work as a Full-Stack Engineer
      ● Work with modern technologies like Kafka, MongoDB, AWS, Hive, NodeJS, Apache Zookeeper
      ● Opportunity to work on machine learning / artificial intelligence algorithms
      ● Define and gather technical and UX requirements for new features and enhancements
      ● Manage and maintain the product backlog of features, engineering, and quality aspects for engineering teams
      ● Prioritize complex features across multiple areas of expertise, considering quality, resiliency, and delivery timeline trade-offs
      ● Work closely with engineering teams, clarifying requirements and ensuring delivery provides value to customers
      ● Collaborate with UX designers and Product Managers, providing feedback on suggested designs and requirements

      Skill Set:
      ● 2+ years of experience in Core Java and related technologies
      ● Proven experience with JavaScript and JavaScript-based frameworks
      ● Experience with databases (relational and NoSQL, like MongoDB)
      ● Unit testing (JUnit etc.)
      ● Knowledge of AngularJS, VueJS, ReactJS
      ● Knowledge of AWS and machine learning will be an added advantage
      ● Proven decision-making skills supported by data, user understanding, and intellect
      ● Demonstrated ability to solve large, complex problems
      ● Proven ability to handle multiple fast-moving creative projects, and flexibility to work in a dynamic environment
      • no data
      • permanent
      Job Description

      Accountabilities:
      • Work cross-functionally with teams to analyze usage and uncover key, actionable insights about customer behavior; identify opportunities where scientific techniques can be applied to solve business problems
      • Run statistical analyses and create predictive models based on past user data and behavior; build and improve machine learning models used in production and manage existing models
      • Design and measure controlled experiments to determine the potential impact of new approaches; help with various data analysis and modeling projects
      • Understand and report current trends and performance in detail, and identify opportunities
      • Place actionable data points and trends in context for leadership to understand actual performance and uncover opportunities
      • Take ownership of the end-to-end system, from problem statement to solution delivery, and leverage other teams if required

      Qualifications & Specifications
      • Bachelor's degree in Computer Science, Engineering, Statistics, Economics or equivalent; a Master's degree in a relevant specialization is preferred
      • Proven ability to use modeling, optimization, machine learning or text classification algorithms
      • Background in database design, ETL processes (SSIS) and SQL analysis (SSAS) to assist the Analytics team in formalizing their internal data flows and related reporting processes
      • Data virtualization, analytics, and integration techniques
      • Technology prowess in R/Python/SAS/SQL/R Shiny and other self-service tools (KNIME, Dataiku, etc.)
      • Experience in AI/machine learning, real-time analytics and big data platforms
      • Cloud platforms: AWS, Google or Azure
      • noida sector 27
      • permanent
      The candidate will be responsible for implementing and supporting IDCUBE solutions online and onsite, and will gain hands-on experience with multiple technologies such as RFID, IoT, embedded systems, artificial intelligence and machine learning, thereby sharpening his/her skills. The candidate will be responsible for delivering IDCUBE solution trainings to various partners and customers across the globe, thus maintaining good relationships with clients, and will have the opportunity to travel abroad to extend technical support to overseas clients as and when required. The candidate will also support the marketing team with documentation of installation and user manuals, newly found error solutions, and case studies.
      • gurgaon, haryana
      • permanent
      Location: Gurgaon
      Role: Data Scientist

      Job Description:
      • Good exposure to machine learning algorithms (supervised and unsupervised learning)
      • Good programming skills in Python for data cleansing/preprocessing, feature engineering and NLP
      • Data handling

      Additional preference for candidates with exposure to:
      a. Text mining, sentiment analysis, sensor signal handling
      b. End-to-end analysis project delivery, presenting reports to business users
      c. Qlik Sense exposure
      • no data
      • permanent
      JOB DESCRIPTION
      ● B.Tech/B.E. from a premier institute
      ● Experience of 2-3 years
      ● Understanding of machine learning
      ● Strong conceptual understanding of common algorithms
      ● Experience with big data tools/platforms
      ● Python
      ● SQL & NoSQL
      ● ETL tools
      ● Data APIs
      ● Reporting tools/platforms like MS Excel and Data Studio
      ● Experience working with large datasets and distributed computing, e.g. PySpark, is a must-have (a minimal PySpark sketch follows this listing)
      ● Experience with AWS is a must-have
      ● Knowledge of statistical and data mining techniques: regression, random forests, boosting, decision trees, clustering, feature selection, etc.
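A minimal PySpark sketch of the "large datasets and distributed computing" requirement; the file path and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

# Read a (hypothetical) large CSV and compute simple daily aggregates.
df = spark.read.csv("s3://bucket/events.csv", header=True, inferSchema=True)
daily = (
    df.groupBy("event_date")
      .agg(F.count("*").alias("events"),
           F.countDistinct("user_id").alias("users"))
)
daily.show()
```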
      • no data
      • permanent
      1. Candidate should have a Bachelor's or Master's degree in Computer Science, Big Data, Machine Learning, Artificial Intelligence, Neural Networks, Data Science, Computer Vision, Embedded Systems, Electronics, Electronics and Telecommunication, or a relevant field.
      2. Demonstrated experience of 8-12 years in developing and launching consumer technology products; should have led, architected and delivered an end-to-end solution for stakeholders.
      3. Should carry demonstrated experience of 4-5 years in the core software and electronics domain, specifically in the ADAS industry.
      4. Should have experience managing and leading cross-functional teams of at least 20-25 members, and should be able to translate functional and technical requirements into business needs.
      5. Should have a minimum of 3-4 years of hands-on experience with libraries and frameworks like TensorFlow, Keras, PyTorch, OpenCV, ARM Compute Library and OpenCL.
      6. Should have a strong hold on at least two programming languages among modern C++, Python, JavaScript, and Go.
      7. Relatable experience in configuration, integration and debugging of all hardware and software systems and components within the ADAS domain.
      8. Candidates from Tier-1 institutes will be preferred.
      9. Candidates should carry prior product development experience with an early-stage start-up, or production-grade software experience with a Tier-1 organization.
      10. Prior experience developing scalable machine learning pipelines, web and cloud development experience, or experience developing web and mobile applications will be an added advantage.
      11. Candidates with proficiency in Raspberry Pi and ARM CPUs will be preferred.
      • no data
      • permanent
      Company: a Danish MNC in industrial processing
      Location: Chennai
      Experience: 4-10 years

      Purpose and Target
      Execute data analytics tasks related to Recovery and Power SMART solutions (RSP) in the field of project delivery, product development, and life-cycle data analytics support. Find and generate new ideas for SMART product development.

      Organizational Reporting
      Report to the Director, Digitalization, Recovery and Power SMART business. This role also requires active cooperation and influencing in a matrix throughout the delivery project organizations and the Pulp & Paper digitalization team.

      Authorization
      Decision making on specified tasks on delivery and development projects, as per process regulations.

      Responsibilities
      • Delivery and support of data analytics projects
      • Data follow-up with features and functions on SMART products
      • Develop new data-driven features and functions for SMART products
      • Collaborate with engineering and product development teams

      Tasks
      • Identify valuable data sources and automate collection processes
      • Undertake preprocessing of structured and unstructured data
      • Analyze large amounts of information to discover trends and patterns
      • Build predictive models and machine-learning algorithms
      • Combine models and advanced process controls
      • Present information using data visualization techniques
      • Propose solutions and strategies to business challenges

      Professional Experience / Education
      • Master's degree in Data Science, Computer Science, Mathematics, or a relevant field
      • Fluent written and spoken English skills

      Competences
      • Proven experience as a Data Analyst or Data Scientist
      • Experience in data mining and machine learning
      • Knowledge of Python (preferred), R, and SQL
      • Analytical mind and business acumen; problem-solving attitude
      • Excellent communication and presentation skills
      • Knowledge of the pulp & paper industry is an advantage
      • bengaluru / bangalore
      • permanent
      • 12 months
      What you'll do
      • Build a long-term vision of how we can rethink our customer acquisition and engagement strategies, leveraging data in our decision making.
      • Drive exploratory analysis to understand the ecosystem and user behavior; identify new levers to help move metrics, and build models of user behavior for analysis and product enhancements.
      • Shape and influence data/ML models and instrumentation to optimize the product experience and generate insights on new areas of opportunity and new products.
      • Provide product leadership by sharing data-based recommendations to communicate the state of the business, root causes of changes in metrics, and experimentation results, influencing product and business decisions.
      • Implement scalable machine learning algorithms that will be used in production on big data.
      • Embark on exploratory data analysis projects to better understand phenomena and to discover untapped areas of growth and optimization; answer complex analytic questions from big data sets to help Careem shape its products and services in a better way.
      • Help define and track the appropriate key metrics for specific projects; design and run randomized controlled experiments, analyze the resulting data and communicate results with other teams (a simple experiment-analysis sketch follows this listing).
      • Always challenge the status quo, continually investigate new data processing technologies, and seek to ensure that we follow industry best practices.

      What you'll need
      • 7+ years of experience in data mining, predictive modeling, time series analysis, machine learning, big data methodologies, and transformation and cleaning of both structured and unstructured data.
      • Advanced degree in a quantitative discipline such as Physics, Statistics, Mathematics, Engineering or Computer Science.
      • Strong problem-solving and coding skills.
      • Strong experience leading cross-functional teams and projects from a technical and data science perspective.
      • Solid knowledge of and experience with payment channels, pay terms, banking and payments services, and the payments regulatory landscape, preferably in the MENA region.
      • Solid understanding of digital wallet use cases and the potential risks associated with them.
      • Fluency in English, along with excellent oral and written communication skills.
      • Proficiency and demonstrated experience in at least 2 of the following: Python, R, SQL, Spark, Hive.
      • Demonstrated experience with database technologies (e.g. Hadoop, BigQuery, Amazon EMR, Hive, Oracle, SAP, DB2, Teradata, MS SQL Server, MySQL) is a plus.
      • Demonstrated experience with business intelligence and visualization tools (Tableau, MicroStrategy, ChartIO, Qlik), along with geospatial data processing skills, is also a plus.
      • Knowledge of Agile methodologies.

      Where you'll be
      This role is part of a remote, distributed team! This means you can be based in any of the countries where we currently have an engineering site. If you would like to join us in Dubai, Berlin, Poland, Pakistan, Egypt, Lebanon or Jordan, that's fine with us (visa permitting)! Even though we are working remotely, we are strong believers in collaboration and the power of building social connections with our teams. For that reason, our offices are still open and provide plenty of collaboration-friendly spaces at times when teams need them, or if you need a quiet space to work outside of home. You'll be working in the location you're hired from. Due to legal and compensation considerations, you will need to be based out of the country you're hired from as your primary work location.
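For the "randomized controlled experiments" point above, a two-proportion z-test is one standard way to analyze such an experiment; a minimal sketch, with made-up counts.

```python
from math import sqrt

from scipy.stats import norm

conv_a, n_a = 120, 2400   # control: conversions, users
conv_b, n_b = 150, 2400   # treatment: conversions, users

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # std. error
z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))                    # two-sided test
print(f"lift={p_b - p_a:.4f}, z={z:.2f}, p={p_value:.4f}")
```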
      • bengaluru / bangalore
      • contract
      • 1 year
      The Computer Vision (CV) Specialist will work in our AI team within the Digitalization organization. As a CV Specialist, he/she will be responsible for using cutting-edge technology in the computer vision field, for instance deep-learning-based model building, object detection and recognition, and activity recognition, to deliver projects. He/she should have a strong understanding of computer vision algorithms and tools, a good understanding of state-of-the-art technologies, and a willingness to explore technologies that contribute to delivering business value. He/she will use these methods in a broader sense to solve business problems, working with business SMEs/users to assess opportunities and review expected outputs.

      Job Purpose
      Responsible for development and delivery of computer vision innovation solutions to the business in an agile fashion, in order to provide insights and improvements to business processes and solve business problems.

      Accountabilities
      • Accountable for working with the business to prepare requirements and a technical feasibility study for a computer-vision-related project
      • Accountable for delivering computer vision projects
      • Accountable for bringing in innovative technologies

      Special Challenges:
      • Building constructive and trusting relationships with mostly remote business stakeholders
      • Dealing with challenges in data acquisition from other parts of the business

      Skills & Requirements:
      • Master's or PhD degree in Computer Vision, Computer Science or a relevant area
      • Proven practical experience working on industrial computer vision projects
      • Experience in one of the deep learning frameworks such as TensorFlow, Keras, PyTorch, etc., and familiarity with state-of-the-art computer vision development
      • Good knowledge of Python
      • Experience with Azure Machine Learning is a plus
      • Creative problem solver and team player with strong communication skills
      • chennai, tamil nadu
      • permanent
      • 12 months
      Project X: Data Engineer

      Project Requirements:
      • Implement custom ETL/ELT processes in distributed computing environments and large-scale data warehouses
      • Design and implement data pipelines capable of modeling data from many sources and storing it so that our various engineering groups can properly ingest the data
      • Help design, implement and maintain web scrapers that run 24/7 and process millions of domains per month
      • Create machine learning data pipelines and turn AI research into production code that powers our search engine
      • Work on server applications and APIs used by our Data team
      • Handle the challenges that come with managing terabytes of data
      • Help with the automation and monitoring of our systems

      Technology Requirements:
      • 3+ years of experience in data engineering or as a software engineer with a data-centric view
      • 3+ years of experience in Python or a similar language (plus a solid understanding of object-oriented programming)
      • 3+ years of experience with data warehouses, with the ability to debate when the different ones are optimal
      • 2+ years of experience with data pipeline tools such as Airflow or Luigi
      • Bachelor's degree or higher in a computational field

      Additional Requirements (Nice to Haves):
      • Experience in fintech or financial services
      • Work experience with cloud platforms such as Google Cloud

      Technologies: BigQuery, dbt, Airflow, PostgreSQL (a BigQuery query sketch follows this listing)
      Work location and mode of work: Anywhere in India, WFH (remote)
      Work timing (time zone): 12:30 PM - 9:30 PM IST
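A minimal sketch of querying BigQuery from Python, since it heads this listing's technology list; it assumes Google Cloud credentials are already configured, and the table name is hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up default credentials
query = """
    SELECT domain, COUNT(*) AS pages
    FROM `project.dataset.crawled_pages`
    GROUP BY domain
    ORDER BY pages DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.domain, row.pages)
```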
      • bangalore city
      • permanent
      Job Description: Principal Data Scientist (Big Data)

      Your mission, if you choose to accept, will be:
      ● To initiate, develop and optimise POCs to solve healthcare problems
      ● To ensure a constant flow of case studies/publications from acquired health datasets
      ● To extract feedback for product and marketing in order to improve and optimize their functions
      ● To optimise and upgrade existing pipelines
      ● To ensure performance on par with business requirements
      ● To keep the software stack and implementation up to date with the latest advancements in the industry

      Skills & experience needed to qualify:
      ● Strong fundamentals in statistics and machine learning
      ● 4+ years of experience mining, handling and analysing big datasets
      ● 2+ years of experience managing a team of data scientists
      ● Proven ability to solve complex problems (case studies/publications/patents)
      ● 2+ years of experience working with time series data (a minimal forecasting sketch follows this listing)
      ● 3+ years of experience working with TensorFlow/Keras/PyTorch
      ● Experience working with Spark/Hadoop and helping set up pipelines
      ● Excellent communication and sharp analytical abilities, with proven design skills and the ability to think critically about the current system and existing projects
      ● Excellent scripting and programming skills in one of Python, Julia or R
      ● Excellent data visualisation skills
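A minimal Keras sketch of time-series forecasting of the sort the listing mentions; the synthetic signal, window size and layer widths are arbitrary choices.

```python
import numpy as np
import tensorflow as tf

series = np.sin(np.arange(0, 100, 0.1))  # synthetic signal
window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                   # shape: (samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
print("next value:", model.predict(X[-1:], verbose=0)[0, 0])
```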
      • bengaluru / bangalore
      • permanent
      Job Description:

      Key tasks and job requirements (requirement elicitation / solution design / delivery):
      • Provide insights, BI solutions/reports, and recommendations for global marketing programs
      • Deliver reports, dashboards and deep analyses to regional marketing teams, including the US
      • Extract, gather and format the right level of data for marketing programs and projects
      • Integrate and standardize data sets to enable ease of analytics and visualization (both cloud and enterprise)
      • Establish measures of success (metrics & KPIs); KPI synthesis
      • Analyze data to understand the performance and effectiveness of digital programs (across channels like search, digital display, social media, email and onsite)
      • Provide guidance on online tagging for measurement and perform User Acceptance Testing (UAT)
      • Develop statistical models to understand customer journeys and influence marketing strategies
      • Assist with A/B and multivariate content test setup, execution and measurement
      • Liaise with the US Digital Intelligence team for continuous guidance, direction and alignment
      • Interact with internal groups and agency partners that support Marketing

      Skillset Required:
      • 7-10 years of practical experience with Adobe Analytics / Google Analytics: setup, configuration, implementation, processing rules, troubleshooting
      • Data visualization and dashboard/reporting: Tableau / Spotfire / Power BI / QlikView
      • Ability to think creatively to solve real-world business problems
      • Strong analytical and problem-solving abilities
      • Ability to self-motivate, manage concurrent projects and work with remote teams
      • Excellent listening, written and verbal communication skills
      • Research-focused mindset with strong attention to detail, high motivation, good work ethic and maturity
      • Experience in Adobe Audience Manager, advanced statistics, A/B testing, databases, SQL, predictive analytics, SAS, R, Tableau/Spotfire, big data, Google AdWords, personalization, segmentation and machine learning is desirable
      • bengaluru / bangalore
      • permanent
      We are looking for a well-rounded data engineer with solid experience in EMR/EHR data structures, and in data extraction from and push-back to the EMR/EHR, who can drive rapid prototyping and development with product and technical teams in building and scaling new high-value medical data capabilities to enable products and insights.

      Must have:
      • 5+ years of software development and engineering experience with data ingestion/integration into a data warehouse/data lake, with a Bachelor's/Master's degree in Computer Science or equivalent
      • Experience with the HL7 standard and familiarity with FHIR and/or X12
      • Experience implementing APIs
      • Experience with databases and building data pipelines for data acquisition, aggregation, and migration, using ETL/scheduling solutions like Airflow, Informatica, Talend, or similar
      • Experience with data cleansing, transformation, validation, and monitoring
      • Experienced in SQL and object-oriented/object-function scripting languages such as Python
      • Experience with Docker, Kubernetes or similar technology
      • Familiarity with AWS and with Snowflake / Postgres databases
      • Familiarity with architecture and data modelling for data platforms
      • Solid analytical and reasoning skills for design, troubleshooting and root cause analysis
      • Great communication skills with technical and non-technical teams

      Experience with any of the following is a plus but not required:
      • Engineering team leadership
      • MDM, DQM and data quality tools such as Informatica, Talend, or similar
      • Basic familiarity with visualizing data in Tableau, Business Objects, QuickSight, Power BI and similar tools, and with technical user interfaces (for internal tech teams)
      • Familiarity with NoSQL databases such as Cassandra, MongoDB, or similar
      • Familiarity with machine learning and NLP data stability needs
      • Web scraping and CSS/XPath selectors
      • bengaluru / bangalore
      • permanent
      Data Engineer

      Job Description:
      • Highly skilled and proficient in Azure data engineering tech stacks (ADF, Databricks)
      • Should be well experienced in the design and development of big data integration platforms (Kafka, Hadoop)
      • Highly skilled and experienced in building medium to complex data integration pipelines for data at rest and streaming data using Spark (a minimal streaming sketch follows this listing)
      • Strong knowledge of R/Python
      • Advanced proficiency in solution design and implementation with Azure Data Lake and SQL and NoSQL databases
      • Strong in data warehousing concepts
      • Expertise in SQL, SQL tuning, data management (data security), schema design, Python and ETL processes
      • Highly motivated self-starter and quick learner
      • Must have good knowledge of data modelling and an understanding of data analytics
      • Exposure to statistical procedures, experiments and machine learning techniques is an added advantage
      • Experience leading a small team of 6-7 data engineers
      • Excellent written and verbal communication skills
      • Active involvement in building data acquisition, data management, data integration and data security solutions
      • Designs new processes and builds large, complex data sets
      • Builds web prototypes and performs data visualization
      • Exposure to statistical modeling and experiment design
      • Should possess excellent analytical skills and troubleshooting ideas
      • Should be aware of the agile mode of operations and should have been part of scrum teams
      • Should be open to working in a DevOps model, with responsibility for both development and support once the application goes live
      • Should be able to work in shifts (if required)
      • Should be open to working on fast-paced projects with multiple stakeholders
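A minimal Spark Structured Streaming sketch for the streaming-data requirement, using the built-in "rate" test source; a real Databricks pipeline would more likely read from Kafka or Event Hubs.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-example").getOrCreate()

# The "rate" source generates (timestamp, value) rows for testing.
stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Windowed counts with a watermark for late data.
counts = (
    stream.withWatermark("timestamp", "1 minute")
          .groupBy(F.window("timestamp", "30 seconds"))
          .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```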
      • thiruvananthapuram / trivandrum
      • permanent
Required Qualifications:

Expertise in at least one of the following domain areas:
• Big data: managing Hadoop clusters (all included services), troubleshooting cluster operation issues, migrating Hadoop workloads, architecting solutions on Hadoop; experience with NoSQL data stores like Cassandra and HBase; building batch/streaming ETL pipelines with frameworks such as Spark, Spark Streaming, and Apache Beam (a short Beam sketch follows this list); and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ.
• Data warehouse modernization: building complete data warehouse solutions, including technical architectures, star/snowflake schema designs, infrastructure components, ETL/ELT pipelines, and reporting/analytic tools. Must have hands-on experience working with batch or streaming data processing software (such as Beam, Airflow, Hadoop, Spark, Hive).
• Data migration: migrating data stores to reliable and scalable cloud-based stores, including strategies for minimizing downtime. May involve conversion between relational and NoSQL data stores, or vice versa.
• Backup, restore & disaster recovery: building production-grade data backup, restore, and disaster recovery solutions, up to petabytes in scale.

• Experience writing software in one or more languages such as Python, Java, Scala, or Go
• Experience building production-grade data solutions (relational and NoSQL)
• Experience with systems monitoring/alerting, capacity planning, and performance tuning
• Experience in technical consulting or another customer-facing role

Useful Qualifications:
• Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
• Experience developing insights and visualizations in BI tools and related applications, such as Looker
• Experience with IoT architectures and building real-time data streaming pipelines
• Applied experience operationalizing machine learning models on large datasets
• Knowledge and understanding of industry trends and new technologies, and the ability to apply them to architectural needs
• Demonstrated leadership and self-direction: a willingness to teach others and learn new techniques
• Demonstrated skill in selecting the right statistical tools for a given data analysis problem
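To ground the ETL-pipeline point above, here is a minimal illustrative Apache Beam batch pipeline in Python; the input and output paths are placeholders. It reads text lines, drops empty ones, uppercases the rest, and writes the result.

    import apache_beam as beam

    # Minimal batch ETL sketch with Apache Beam: read, transform, write.
    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("input.txt")   # placeholder path
            | "DropEmpty" >> beam.Filter(lambda line: line.strip())
            | "Upper" >> beam.Map(str.upper)
            | "Write" >> beam.io.WriteToText("output")       # placeholder prefix
        )

The same pipeline shape runs unchanged on Dataflow by switching the pipeline runner options, which is one reason Beam is listed alongside the Google Cloud data products.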
      • no data
      • permanent
Required Skills:
• Java, J2EE, Spring, JPA, AngularJS/Knockout.js, jQuery, XML, XSLT, Web Services (SOAP and REST), JMX
• Oracle/DB2/MSSQL (any two)
• WebLogic/WebSphere/Tomcat (any two)
• Good hands-on experience on any of the cloud platforms such as AWS, Azure, GCP, or Oracle Cloud Infrastructure
• Good understanding of the related tools and technologies; hands-on experience with at least one cloud migration is required, and more is desirable
• Strong understanding across cloud components (server, storage, network, data, and applications) to deliver end-to-end cloud solutions
• Expertise in identifying and deploying the best approaches for matching Oracle Middleware, on-premise-to-cloud data conversion, and user interaction technology and products to solve business challenges
• Hands-on experience with Docker/Kubernetes
• Knowledge of and experience in RESTful web services, virtualization, and IaaS & PaaS toolsets
• Research and develop innovative, scalable, and dynamic solutions to business problems
• Good communication skills and the attitude to find ways to solve problems and think through alternative solutions
• Be part of a fast-paced, focused, agile team

Bonus Skills:
• Knowledge of scripting and automation tools including Python, Chef/Puppet/Ansible/Terraform, and Ruby
• Design and implement machine learning, information extraction, and probabilistic matching algorithms and models (a toy matching sketch follows this list)

Minimum Experience:
• 8 to 10 years of experience with the required skill set described above
• 4-year degree in Computer Science or a similarly disciplined field, plus relevant experience

You will analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications. As a member of the software engineering division, you will perform high-level design based on provided external specifications; specify, design, and implement minor changes to existing software architecture; build highly complex enhancements and resolve complex bugs; and build and execute unit tests and unit plans. You will review integration and regression test plans created by QA, and communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products. Duties and tasks are varied and complex, requiring independent judgment; the candidate is fully competent in their own area of expertise and may take a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to the functional area, with 4 years of software engineering or related experience.
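As a toy illustration of the probabilistic-matching bonus skill above, here is a minimal record-matching sketch using Python's standard-library difflib; the records and the similarity threshold are made up for the example.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Return a ratio in [0, 1] of how closely two strings match."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Hypothetical records to deduplicate.
    records = ["Oracle Middleware Ltd", "ORACLE MIDDLEWARE LIMITED", "Acme Corp"]
    THRESHOLD = 0.8  # arbitrary cutoff for this sketch

    for i, a in enumerate(records):
        for b in records[i + 1:]:
            score = similarity(a, b)
            if score >= THRESHOLD:
                print(f"probable match ({score:.2f}): {a!r} ~ {b!r}")

Production matching systems typically combine several field-level scores with learned weights, but the score-and-threshold structure shown here is the core idea.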
      • no data
      • permanent
      • 3 months
Required Skills:
• Java, J2EE, Spring, JPA, AngularJS/Knockout.js, jQuery, XML, XSLT, Web Services (SOAP and REST), JMX
• Oracle/DB2/MSSQL (any two)
• WebLogic/WebSphere/Tomcat (any two)
• Good hands-on experience on any of the cloud platforms such as AWS, Azure, GCP, or Oracle Cloud Infrastructure
• Good understanding of the related tools and technologies; hands-on experience with at least one cloud migration is required, and more is desirable
• Strong understanding across cloud components (server, storage, network, data, and applications) to deliver end-to-end cloud solutions
• Expertise in identifying and deploying the best approaches for matching Oracle Middleware, on-premise-to-cloud data conversion, and user interaction technology and products to solve business challenges
• Hands-on experience with Docker/Kubernetes
• Knowledge of and experience in RESTful web services, virtualization, and IaaS & PaaS toolsets (a minimal REST sketch follows this list)
• Research and develop innovative, scalable, and dynamic solutions to business problems
• Good communication skills and the attitude to find ways to solve problems and think through alternative solutions
• Be part of a fast-paced, focused, agile team

Bonus Skills:
• Knowledge of scripting and automation tools including Python, Chef/Puppet/Ansible/Terraform, and Ruby
• Design and implement machine learning, information extraction, and probabilistic matching algorithms and models

Minimum Experience:
• 5 to 10 years of experience with the required skill set described above
• 4-year degree in Computer Science or a similarly disciplined field, plus relevant experience

You will analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications. As a member of the software engineering division, you will perform high-level design based on provided external specifications; specify, design, and implement minor changes to existing software architecture; build highly complex enhancements and resolve complex bugs; and build and execute unit tests and unit plans. You will review integration and regression test plans created by QA, and communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products. Duties and tasks are varied and complex, requiring independent judgment; the candidate is fully competent in their own area of expertise and may take a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to the functional area, with 4 years of software engineering or related experience.
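To make the RESTful-web-services skill concrete, here is a minimal sketch of a REST endpoint using Flask; the route and payload are invented for illustration and are not from the posting.

    from flask import Flask, jsonify

    # Minimal sketch of a RESTful service with a single health endpoint.
    app = Flask(__name__)

    @app.route("/api/v1/health")
    def health():
        """Simple liveness endpoint a deployment probe could call."""
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run(port=8080)

A container platform such as Kubernetes would typically hit an endpoint like this from its liveness/readiness probes, which is where the Docker/Kubernetes skill above connects.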
      • bengaluru / bangalore
      • permanent
Duties & Responsibilities

1. You will diagnose, develop, and implement recruitment strategies based on business goals and will be held accountable for the end-to-end tech recruitment process at Awign.
2. You will own and manage the full recruitment life cycle, from JDs and sourcing to interviews and negotiations, for all tech roles.
3. You will work with internal and external stakeholders, including senior business leaders, to meet and improve recruitment funnel metrics such as 'offer conversion rate' and 'time to offer'.
4. You will liaise with stakeholders to continuously evaluate the criticality of open positions, identify gaps in the existing recruitment strategies, and develop recruiting best practices.
5. You will use market research, networking, social platforms, advanced search tools, employee and external referrals, and community events to find the right talent for Awign.
6. You will keep all stakeholders informed on recruitment by circulating weekly hiring reports with data points across the hiring funnel, the status of every open position, and feedback on profiles.
7. You will work closely with the HR Head to enhance the candidate experience and NPS.
8. You will be a critical stakeholder in executing and enhancing candidate engagement and onboarding with efficient TAT.
9. You will work closely with the HR Manager to build and sustain a brand known for a great hiring experience and as the best place to work, network, learn, and build one's career.
10. You will have 1-2 people reporting to you on day-to-day operations, but you will be responsible for owning the results.

Required Skills & Experience

1. 6+ years' experience in hiring SDEs or other tech/product roles at a tech-first startup or high-growth company is mandatory.
2. Experience with the full recruiting cycle, with an emphasis on high-quality candidates from Tier-I colleges.
3. Awareness and understanding of the latest talent acquisition trends and growth hacks.
4. Experience in working with/managing hiring vendors is a plus.
5. Strong interpersonal skills and the ability to work across leadership levels.
6. Ability to effectively influence and drive toward results in a fast-paced environment.
7. Deep understanding of various technologies (SaaS, AI, IoT, machine learning, and mobile technologies).
8. Excellent oral and written communication skills in English.
