QA

India, West Bengal, Kolkata

Full-time

Posted on: a month ago

Exp: 5-8 yrs

Location: Bangalore, Mumbai

Key Responsibilities
  • Functional & Manual Testing (Web Application)
      • Analyze business and technical requirements and translate them into detailed test cases and scenarios
      • Perform functional, regression, integration, and system testing for web applications
      • Participate in early design and requirement reviews to identify risks and gaps
  • Automation Testing
      • Design, develop, and maintain scalable automation frameworks
      • Automate UI, API, and regression test suites
      • Ensure automation is reliable, reusable, and integrated into CI/CD pipelines
      • Extend automation to support AI response validation where applicable
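
As a rough illustration of the "scalable, reusable framework" idea above, a page-object pattern keeps selectors and interactions out of individual tests so every suite reuses them. The page, selectors, and the `FakeDriver` stub below are invented for the sketch; a real framework would inject a Selenium or Playwright driver instead.

```python
# Minimal page-object sketch. FakeDriver stands in for a real browser
# driver (assumption: production code would use selenium.webdriver or
# Playwright here). Selectors and the LoginPage itself are hypothetical.

class FakeDriver:
    """Stub driver so the sketch runs without a browser."""
    def __init__(self):
        self.fields = {}

    def fill(self, selector, value):
        self.fields[selector] = value

    def value_of(self, selector):
        return self.fields.get(selector)


class LoginPage:
    """Page object: selectors and actions live here, not in the tests,
    so UI, regression, and CI suites all reuse the same interactions."""
    USER = "#username"
    PASS = "#password"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USER, user)
        self.driver.fill(self.PASS, password)
        return self


def test_login_fields_populated():
    page = LoginPage(FakeDriver()).login("qa-user", "s3cret")
    assert page.driver.value_of("#username") == "qa-user"
```

Because tests depend on the page object rather than raw selectors, a UI change touches one class instead of every suite, which is what keeps the framework maintainable at scale.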
  • API & Backend Testing
      • Test RESTful APIs using tools like Postman, REST Assured, Karate, or equivalent
      • Validate payloads, error handling, and edge cases
      • Collaborate with backend teams to debug and verify fixes
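
The payload and edge-case checks above can be sketched as a small validator. The field names, error messages, and the inventory-item shape here are assumptions for illustration; a real suite would run equivalent assertions with Postman, REST Assured, or `requests` against the live API.

```python
# Hedged sketch of API payload validation. The "inventory item" schema
# (id, name, quantity) is invented for this example.

def validate_item_payload(payload):
    """Return a list of problems with an inventory-item payload."""
    errors = []
    for field in ("id", "name", "quantity"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    qty = payload.get("quantity")
    if isinstance(qty, int) and qty < 0:
        errors.append("quantity must be non-negative")  # edge case
    return errors


assert validate_item_payload({"id": 1, "name": "bolt", "quantity": 5}) == []
assert "missing field: name" in validate_item_payload({"id": 1, "quantity": 0})
assert "quantity must be non-negative" in validate_item_payload(
    {"id": 2, "name": "nut", "quantity": -3})
```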
  • Database & Data Validation
      • Perform database validation to ensure data consistency across systems
      • Write SQL queries to validate inventory data, transactions, and reports
      • Ensure alignment between UI, API, and database layers
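
A cross-layer check like the one described above typically compares a value the UI or API reports against a SQL aggregate. The table name, columns, and sample data below are assumptions standing in for the real inventory schema (sqlite is used only so the sketch is self-contained).

```python
import sqlite3

# Hypothetical inventory table; in practice this query runs against the
# real database and api_reported_total comes from a live API/UI call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER)")
conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                 [("A1", 10), ("B2", 4), ("C3", 0)])

api_reported_total = 14  # value the API/UI layer claims to show

# SQL aggregate the API total must agree with.
(db_total,) = conn.execute(
    "SELECT COALESCE(SUM(qty), 0) FROM inventory WHERE qty > 0").fetchone()

assert db_total == api_reported_total, (
    f"layer mismatch: DB={db_total} API={api_reported_total}")
```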
  • AI Model & Response Validation
      • Validate AI-generated outputs for accuracy, relevance, and completeness
      • Design test scenarios for non-deterministic outputs (multiple acceptable responses)
      • Identify and report issues such as hallucinations, incorrect recommendations, and inconsistencies
      • Define evaluation metrics (e.g., precision, recall, response quality scoring)
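
One way to make "multiple acceptable responses" testable is to score each answer against required reference content instead of comparing strings exactly. The keyword approach and the 0.8 threshold below are illustrative assumptions, not a prescribed evaluation method.

```python
# Sketch: recall-style scoring for non-deterministic AI answers.
# Keywords and threshold are invented for the example.

def keyword_recall(response, required_keywords):
    """Fraction of required keywords present in the response (case-insensitive)."""
    text = response.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords)


def is_acceptable(response, required_keywords, threshold=0.8):
    return keyword_recall(response, required_keywords) >= threshold


# Two differently worded answers can both pass the same scenario:
refs = ["reorder", "supplier", "lead time"]
assert is_acceptable("Reorder from the supplier; lead time is 3 days.", refs)
assert is_acceptable("Lead time permitting, ask the supplier to reorder.", refs)
assert not is_acceptable("Stock looks fine.", refs)  # likely hallucinated/incomplete
```

The same scoring function doubles as a regression signal: a drop in average recall across a fixed prompt set flags degraded response quality even when no single answer is identical between runs.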
  • Agentic Workflow Testing
      • Test end-to-end AI agent workflows involving planning, decision-making, and execution
      • Validate multi-step processes (e.g., recommendation → action → system update)
      • Ensure correct tool/API usage by agents
      • Test failure handling, fallback logic, and recovery mechanisms
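
Checking "correct tool/API usage" often reduces to asserting that the agent's recorded tool calls appear in the expected order. The trace format and tool names below are assumptions; real traces would come from the agent framework's logs.

```python
# Sketch: ordered-subsequence check over an agent's tool-call trace
# (recommendation → action → system update). Tool names are hypothetical.

def check_tool_sequence(trace, expected):
    """True if the expected tool calls appear in the trace in order
    (extra intermediate calls are allowed)."""
    it = iter(trace)
    return all(step in it for step in expected)


trace = ["fetch_inventory", "recommend_reorder", "place_order", "update_system"]
assert check_tool_sequence(trace, ["recommend_reorder", "place_order", "update_system"])
assert not check_tool_sequence(trace, ["place_order", "recommend_reorder"])  # wrong order
```

Negative cases matter as much here: a trace that skips `update_system` after `place_order` would indicate broken recovery or fallback logic even though every individual call succeeded.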
  • Prompt & Scenario Testing
      • Design and execute prompt-based test cases
      • Evaluate system behavior under varied inputs and edge scenarios
      • Perform exploratory testing for real-world user queries
  • Non-Functional Testing
      • Support performance, scalability, and reliability testing
      • Validate system behavior under load, especially for AI-driven components
      • Ensure stability of AI responses under concurrent usage
  • Bias, Safety & Compliance Testing
      • Identify potential biases or unfair outcomes in AI responses
      • Ensure outputs comply with business rules and safety guidelines
      • Validate guardrails and content filtering mechanisms
  • Collaboration & Process
      • Work closely with Product, Engineering, Data Science, and DevOps teams
      • Participate in sprint planning, stand-ups, and defect triage
      • Track and manage defects using tools like JIRA
      • Contribute to QA strategy, test plans, and best practices
      • Advocate shift-left testing, including AI validation early in the lifecycle

Required Skills & Qualifications

Technical Skills

  • 4–8 years of experience in QA for web-based applications
  • Strong understanding of web architecture (UI, backend, APIs, databases)
  • Hands-on experience with automation tools: Selenium, Playwright, Cypress, REST Assured, Postman, or Karate
  • Proficiency in Java, Python, or JavaScript
  • Strong SQL skills for data validation
  • Experience with CI/CD tools (Jenkins, GitLab CI, Azure DevOps)

AI / Agentic Testing Skills

  • Understanding of AI/ML concepts and LLM-based systems
  • Experience or exposure to testing AI-driven applications or chat-based systems
  • Ability to validate non-deterministic outputs and define acceptance criteria
  • Familiarity with prompt engineering and response evaluation techniques
  • Understanding of agent-based systems and multi-step workflows

Tools & Technologies

  • Test Management & Defect Tracking: JIRA, TestRail, Zephyr
  • Version Control: Git
  • API Testing: Postman, Swagger
  • Automation: Selenium, Cypress, Playwright
  • AI Evaluation (nice to have): prompt testing frameworks, evaluation dashboards
  • Cross-browser testing tools

Preferred / Nice-to-Have Skills

  • Experience with inventory management, ERP, or supply chain systems
  • Knowledge of microservices architecture
  • Exposure to cloud platforms (AWS, Azure, GCP)
  • Experience testing AI/ML or recommendation systems
  • Familiarity with LLM evaluation tools or frameworks
  • Basic knowledge of performance testing tools (JMeter, Gatling)

Soft Skills

  • Strong analytical and problem-solving skills

  • Ability to handle ambiguity in AI-driven systems