Autonomous Driving Vehicle Perception Engineer

Bengaluru, Karnataka, India

Full-time

Posted: 3 days ago

Job Requirements

At Quest Global, it’s not just what we do but how and why we do it that makes us different. With over 25 years as an engineering services provider, we believe in the power of doing things differently to make the impossible possible. Our people are driven by the desire to make the world a better place—to make a positive difference that contributes to a brighter future. We bring together technologies and industries, alongside the contributions of diverse individuals who are empowered by an intentional workplace culture, to solve problems better and faster.

Key Responsibilities
  • Design and implement advanced perception algorithms for autonomous vehicles using LiDAR, cameras, radar, and GNSS.
  • Develop and optimize sensor fusion techniques to combine data from multiple sensors, improving the accuracy and reliability of perception systems.
  • Create algorithms for object detection, tracking, semantic segmentation, and classification from 3D point clouds (LiDAR) and camera data.
  • Work on Simultaneous Localization and Mapping (SLAM) algorithms, including Graph SLAM, LIO-SAM, and visual-inertial SLAM.
  • Develop sensor calibration techniques (intrinsic and extrinsic) and coordinate transformations between sensors.
  • Participate in real-time systems design and optimization to meet the high-performance requirements of autonomous driving.
  • Work with ROS2 for integration and deployment of perception algorithms.
  • Develop, test, and deploy machine learning models for perception tasks such as object detection and segmentation.
  • Collaborate with cross-functional teams, including software engineers, data scientists, and hardware teams, to deliver end-to-end solutions.
  • Stay up-to-date with industry trends and emerging technologies to innovate and improve perception systems.

We are known for our extraordinary people who make the impossible possible every day. Questians are driven by hunger, humility, and aspiration. We believe that our company culture is the key to our ability to make a true difference in every industry we reach. Our teams regularly invest time and dedicated effort in internal culture work, ensuring that all voices are heard.

We wholeheartedly believe in the diversity of thought that comes with fostering a culture rooted in respect, where everyone belongs, is valued, and feels inspired to share their ideas. We know embracing our unique differences makes us better, and that solving the world's hardest engineering problems requires diverse ideas, perspectives, and backgrounds. We shine the brightest when we tap into the many dimensions that thrive across the more than 21,000 difference-makers in our workplace.

Work Experience

  • 3+ years of experience in sensor calibration, multi-sensor fusion, or related domains.
  • Strong foundation in linear algebra, 3D geometry, coordinate frames, quaternions, probability, Bayesian filtering, and data association.
  • Hands-on experience with intrinsic and extrinsic calibration of LiDAR, cameras, and radar, including geometric calibration, coordinate transforms, and sensor synchronization.
  • Proven experience with perception algorithms for autonomous systems, particularly those involving LiDAR, camera, radar, GNSS, or other sensor modalities.
  • Deep understanding of LiDAR technology, point cloud data structures, and processing techniques; experience with PCL or Open3D.
  • Proficiency in sensor fusion for combining data from LiDAR, camera, radar, and GNSS, including handling time synchronization and motion distortion.
  • Solid background in computer vision techniques; experience with OpenCV and object detection models such as YOLO, Faster R-CNN, or SSD.
  • Experience with deep learning frameworks (TensorFlow or PyTorch) for object detection and segmentation tasks.
  • Hands-on experience with multi-object tracking algorithms such as SORT, DeepSORT, Kalman Filters, UKF, IMM, or JPDA.
  • Strong programming skills in C++ and Python; familiarity with geometric optimization libraries.
  • Familiarity with ROS2 for perception-based autonomous systems development.
  • Experience with parallel computing for real-time performance optimization (e.g., CUDA, OpenCL).