Join a forward-thinking engineering team focused on advancing autonomous ground operations through intelligent perception systems. In this role, you'll lead the creation of 3D object detection architectures that fuse data from camera, LiDAR, and radar sensors to enable safe, reliable machine understanding of complex environments.
Key Responsibilities
- Design and implement multi-modal 3D detection frameworks that integrate diverse sensor inputs
- Construct and manage end-to-end data pipelines for collection, labeling, and performance evaluation
- Develop and curate proprietary datasets to support training and benchmarking of detection models
- Train, validate, and deploy 2D and 3D detection models into production environments
- Evaluate and integrate state-of-the-art detection techniques to improve system accuracy and robustness
- Provide technical mentorship to engineers and promote best practices in model development
Qualifications
Candidates should hold a Master’s or PhD in Computer Science, Robotics, or a closely related field, with at least five years of hands-on experience building and deploying object detection systems. You must have full-stack perception experience, from sensor integration and preprocessing through detection and tracking, along with a strong command of Python. Proven ability to lead technical initiatives independently is essential.
A PhD and prior publications in top-tier conferences such as CVPR, NeurIPS, ICRA, or AAAI are considered strong advantages. Experience with real-world deployment of perception systems in dynamic environments is highly valued.
Technology Environment
The role centers on 3D object detection, multi-sensor perception fusion, and scalable data infrastructure. Core tools include Python and frameworks supporting camera, LiDAR, and radar integration. You’ll work extensively with 2D and 3D detection models, data pipeline orchestration, and quantitative evaluation metrics to ensure model performance meets operational demands.
