Senior Data Engineer - Snowflake (Founding Member)

India, Telangana, Hyderabad

₹ 15 - 40 Lakh/year

Full-time

Posted on: 4 days ago

Skills

data engineering
data warehousing
python
sql
snowflake
insurtech
startup
complex databases
ELT/ETL
azure
databricks
aws

Position Title : Senior Data Engineer (Founding Member) - Insurtech Startup

Location : Hyderabad (Onsite)
Notice Period : Immediate to 15 days
Experience : 5 to 13 Years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

    • Client onboarding from a data perspective

    • Understanding complex insurance data flows

    • Designing secure, scalable ingestion pipelines

    • Establishing strong data modeling and governance standards

    This role sits at the intersection of technology, data architecture, security, and business onboarding.

Key Responsibilities

    • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints

    • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations

    • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing

    • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds

    • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required

    • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules

    • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements

    • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders

    • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses

    • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform

    • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards
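To give a flavor of the multi-format ingestion responsibilities above, here is a minimal sketch, using only the Python standard library, that normalizes CSV, JSON, and XML payloads into a common list-of-dicts shape. All names here are hypothetical illustrations, not part of the company's actual stack:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET


def normalize_csv(text):
    # Each CSV row becomes a plain dict keyed by the header row.
    return list(csv.DictReader(io.StringIO(text)))


def normalize_json(text):
    # Accept either a single JSON object or a list of objects.
    data = json.loads(text)
    return data if isinstance(data, list) else [data]


def normalize_xml(text):
    # Flatten each child element of the root into a tag -> text dict.
    root = ET.fromstring(text)
    return [{field.tag: field.text for field in record} for record in root]


def ingest(payload, fmt):
    # Dispatch on the declared format; a real pipeline would also
    # validate the result against a canonical schema before loading.
    parsers = {"csv": normalize_csv, "json": normalize_json, "xml": normalize_xml}
    return parsers[fmt](payload)
```

For example, `ingest('claim_id,amount\nC1,100', "csv")` yields the same record shape as `ingest('{"claim_id": "C1", "amount": "100"}', "json")`, which is what lets downstream modeling stay source-agnostic.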

Required Technical Skills

• Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark

• Platforms: Azure, AWS, Databricks, Snowflake

    • ETL / Orchestration: Airflow or similar frameworks

    • Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP

    • Visualization Exposure: Power BI

• Version Control & CI/CD: GitHub, Azure DevOps, or equivalent

    • Integrations: APIs, real-time data streaming, ML model integration exposure
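As an illustration of the window-function SQL called out above, a small self-contained example (hypothetical claims data, not from this posting) using Python's built-in sqlite3, which supports window functions in SQLite 3.25+:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, contract_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("C1", "K1", 100.0), ("C2", "K1", 250.0), ("C3", "K2", 80.0)],
)

# Rank claims by amount within each contract and carry the contract total
# on every row -- the window functions avoid a separate GROUP BY join.
rows = conn.execute(
    """
    SELECT claim_id,
           contract_id,
           amount,
           RANK() OVER (PARTITION BY contract_id ORDER BY amount DESC) AS rnk,
           SUM(amount) OVER (PARTITION BY contract_id) AS contract_total
    FROM claims
    ORDER BY contract_id, rnk
    """
).fetchall()
```

Each result row keeps its per-contract rank and total side by side, the kind of query the "performance tuning" and reporting responsibilities would build on.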

Preferred Qualifications

    • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field

    • 5+ years of experience in data engineering or similar roles

    • Strong ability to align technical solutions with business objectives

    • Excellent communication and stakeholder management skills

What We Offer

    • Direct collaboration with the core US data leadership team

    • High ownership and trust to manage the function end-to-end

    • Exposure to a global environment with advanced tools and best practices