AWS Data Engineering

Nagpur, Maharashtra, India

Full-time

Posted on: 4 days ago

We are looking for a skilled Software Engineer with strong experience in PySpark, SQL, and AWS cloud services to design, develop, and optimize large-scale data processing pipelines. The ideal candidate will work closely with data scientists, analysts, and business stakeholders to ensure efficient data flow and high-quality data solutions.

Mandatory Technical skills -
  • Strong experience in Python, PySpark, and SQL (see the sketch after this list).
  • Experience with AWS services (S3, CloudWatch, IAM, SNS, Lambda).
  • Experience with Angular for front-end integration.
  • Experience with Git for version control, and with Octopus and Azure DevOps for deployment.
  • Basic knowledge of Scala and HL7 is a plus.
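
For context on what this skill set looks like in practice, here is a minimal sketch of a PySpark-plus-SQL job that reads raw data from S3, aggregates it, and writes curated output back to S3. The bucket names, columns, and filter logic are hypothetical and purely illustrative, not part of any existing pipeline.

```python
from pyspark.sql import SparkSession

def run_daily_orders_job(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

    # Extract: read raw CSV order data from S3.
    orders = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv(input_path)
    )

    # Transform: express the aggregation in SQL.
    orders.createOrReplaceTempView("orders")
    daily_totals = spark.sql("""
        SELECT order_date, customer_id, SUM(amount) AS total_amount
        FROM orders
        WHERE status = 'COMPLETED'
        GROUP BY order_date, customer_id
    """)

    # Load: write the result back to S3 as Parquet, partitioned by date.
    daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(output_path)

    spark.stop()

if __name__ == "__main__":
    # Hypothetical locations; a real job would take these from job parameters.
    run_daily_orders_job(
        "s3://example-raw-bucket/orders/",
        "s3://example-curated-bucket/daily_order_totals/",
    )
```

The same pattern applies whether the job runs on EMR, Glue, or Databricks; the s3:// paths assume the runtime has the S3 connector available.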

Good to have skills -
  • Experience with Git, Octopus, and Azure DevOps for CI/CD.
  • Familiarity with Terraform for infrastructure management.

Responsibilities -
  • Develop and manage ETL processes using Python, PySpark, and SQL.
  • Work with Databricks for data processing.
  • Utilize AWS services like S3, CloudWatch, IAM, SNS, and Lambda.
  • Design and implement ETL pipelines using PySpark for processing large datasets.
  • Develop and optimize SQL queries for data extraction, transformation, and loading.
  • Manage and deploy data solutions on AWS services such as S3, EMR, Glue, Lambda, Redshift.
  • Ensure data quality, integrity, and security across all processes.
  • Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
  • Monitor and troubleshoot data workflows for performance and reliability (see the sketch below).
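
To illustrate the monitoring and alerting side of these responsibilities, the sketch below shows a small Lambda-style handler that checks whether a pipeline wrote its daily output to S3, publishes a CloudWatch metric, and sends an SNS notification when the output is missing. The bucket, prefix, topic ARN, and metric name are assumptions for illustration only.

```python
import datetime
import os

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    # Hypothetical configuration; real values would come from environment variables.
    bucket = os.environ.get("OUTPUT_BUCKET", "example-curated-bucket")
    topic_arn = os.environ.get("ALERT_TOPIC_ARN",
                               "arn:aws:sns:us-east-1:123456789012:data-alerts")
    prefix = "daily_order_totals/order_date=" + datetime.date.today().isoformat()

    # Check whether today's partition exists in the curated bucket.
    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
    succeeded = response.get("KeyCount", 0) > 0

    # Publish a simple success/failure metric for dashboards and alarms.
    cloudwatch.put_metric_data(
        Namespace="DataPipelines",
        MetricData=[{
            "MetricName": "DailyOrdersOutputPresent",
            "Value": 1.0 if succeeded else 0.0,
            "Unit": "Count",
        }],
    )

    # Notify the team if the expected output is missing.
    if not succeeded:
        sns.publish(
            TopicArn=topic_arn,
            Subject="Daily orders pipeline output missing",
            Message=f"No objects found under s3://{bucket}/{prefix}",
        )

    return {"succeeded": succeeded, "checked_prefix": prefix}
```

A schedule (for example, an EventBridge rule) could trigger this check shortly after the pipeline's expected completion time.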