Senior AI Security Engineer (Red & Blue Team)

India, Karnataka, Bengaluru

Full-time

Posted on: 2 days ago

Job Title: Senior AI Security Engineer (Red & Blue Team)

No. of Positions: 1

Locations: Pune/Bengaluru

Position Type: Full-Time

We are looking for a hybrid Security Engineer who refuses to pick a side. You are a Red Teamer who can craft sophisticated jailbreaks and prompt injections, but you are also a Blue Teamer who knows how to architect the guardrails to stop them.

As a Forward Deployed Engineer (FDE), you will not just write reports from a desk. You will embed with our enterprise clients, attacking their live AI agents to find vulnerabilities and then working side-by-side with their engineering teams to implement the fixes.

Key Responsibilities:

The "Red" (Adversarial Simulation)

  • AI Red Teaming: Conduct advanced adversarial testing on Large Language Models (LLMs) and Agentic AI workflows. Execute prompt injections, jailbreaking, model inversion, and data poisoning attacks.

  • Agentic Threat Simulation: Test autonomous agents for "excessive agency" vulnerabilities—manipulating agents into performing unauthorized actions (e.g., executing SQL commands, escalating privileges, or leaking PII).

  • Automated & Manual Testing: Leverage tools like Garak, PyRIT, or TextAttack for automated scanning, while applying manual creativity to find logic flaws in multi-agent orchestration.

  • Chain-of-Thought Exploitation: Analyze and exploit flaws in the reasoning loops of autonomous agents (e.g., LangChain or AutoGen workflows).

  • The "Blue" (Defense & Engineering)

  • Guardrail Engineering: Design and implement input/output filters using tools like NVIDIA NeMo Guardrails, Llama Guard, or Lakera.

  • Identity & Access Control: Architect "Non-Human Identity" policies for AI agents, ensuring they adhere to Least Privilege (e.g., preventing an agent from deleting DB records).

  • Detection Engineering: Build monitoring pipelines to detect real-time attacks (e.g., identifying a "DAN" attack pattern in live chat logs) and automate response triggers.

  • Remediation: Don't just report bugs—fix them. Rewrite system prompts to be robust against social engineering and re-architect RAG pipelines to prevent data leakage.

The FDE (Client Engagement)

  • Embedded Problem Solving: Work on-site with client engineering teams to understand their specific business logic and deploy secure AI architectures.

  • Threat Modeling: Lead workshops to map the "Blast Radius" of a client's AI agents (i.e., if this agent is compromised, what can it destroy?).

Skills and Qualifications:

  • Experience: 5+ years in Cybersecurity, with at least 2 years focused on Application Security, Penetration Testing, or ML Security.

  • AI/ML Depth: Deep understanding of LLM architectures (Transformers, RAG, Fine-tuning). You understand how a model "thinks" and where it hallucinates.

  • Technical Stack:

  ➢ Languages: Proficient in Python (mandatory for building custom attack scripts and harnesses).

    ➢ AI Frameworks: Experience with LangChain, Semantic Kernel, or Bedrock.

    ➢ Security Tools: Burp Suite, OWASP ZAP, plus AI-specific tools (Garak, PyRIT).

  • Offensive Mindset: Proven ability to think like an adversary (e.g., CVEs, Bug Bounties, or CTF wins).

  • Defensive Engineering: Experience implementing WAFs, API Gateways, or IAM policies (OAuth, OIDC).

Nice to Have:

  • Experience with Agentic Identity concepts (SPIFFE/SPIRE, Machine ID).

  • Certifications: OSEP, OSWE, or specific AI Security certifications (e.g., NVIDIA, SANS).

  • Contributions to open-source AI security projects or the OWASP Top 10 for LLMs.

What We Offer:

  • Competitive salary and benefits package.

  • Opportunities for professional growth and advancement.

  • Exposure to cutting-edge technologies and projects.

  • A collaborative and supportive work environment.