NVIDIA Research wins CVPR Autonomous Grand Challenge
NVIDIA showcases accelerated computing and generative AI breakthroughs for autonomous vehicle development at the Computer Vision and Pattern Recognition conference.
Making moves to accelerate self-driving car development, NVIDIA was named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, running this week in Seattle.
Building on last year’s win in 3D Occupancy Prediction, NVIDIA Research topped the leaderboard this year in the End-to-End Driving at Scale category with its Hydra-MDP
model, outperforming more than 400 entries worldwide.
This milestone underscores the importance of generative AI in building applications for physical AI deployments such as autonomous vehicle (AV) development. The technology can also be applied to industrial environments, healthcare, robotics, and other areas.
The winning submission also received CVPR’s Innovation Award, recognising NVIDIA’s approach to improving “any end-to-end driving model using learned open-loop proxy metrics.”
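The intuition can be sketched briefly: rather than judging a single predicted path, the model scores a fixed set of candidate trajectories with heads trained to imitate simulator-computed metrics, then picks the best-scoring candidate. The sketch below illustrates that pattern only; the network, vocabulary size, and metric set are assumptions, not the actual Hydra-MDP implementation.

```python
import torch
import torch.nn as nn

NUM_CANDIDATES = 256  # assumed size of a fixed trajectory "vocabulary"
NUM_METRICS = 3       # e.g. collision, drivable-area, and comfort proxies

class ProxyMetricScorer(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # One learned head per proxy metric, each scoring every candidate.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, NUM_CANDIDATES) for _ in range(NUM_METRICS)
        )

    def forward(self, scene_feat: torch.Tensor) -> torch.Tensor:
        # scene_feat: (B, feat_dim) -> per-candidate scores (B, NUM_CANDIDATES)
        per_metric = torch.stack([head(scene_feat) for head in self.heads])
        return per_metric.mean(dim=0)  # equal weighting is an assumption

scorer = ProxyMetricScorer()
scores = scorer(torch.randn(1, 128))
best_candidate = scores.argmax(dim=-1)  # index into the trajectory vocabulary
```

Because the proxy heads are trained on simulator feedback rather than hand-coded, the same scoring idea can in principle be attached to any end-to-end driving model, which is what the award citation highlights.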
Additionally, NVIDIA announced NVIDIA Omniverse Cloud Sensor RTX, which builds on its Autonomous Grand Challenge win. Omniverse Cloud Sensor RTX is a new set of software application programming interfaces (APIs) that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.
How end-to-end driving works
The race to develop self-driving cars isn’t a sprint but more a never-ending triathlon, with three distinct yet crucial parts operating simultaneously: AI training, simulation, and autonomous driving. Each requires its own accelerated computing platform, and together, the full-stack systems purpose-built for these steps form a powerful triad that enables continuous development cycles, always improving in performance and safety.
To accomplish this, a model is first trained on an AI supercomputer such as NVIDIA DGX. It’s then tested and validated in simulation — using the NVIDIA Omniverse platform and
running on an NVIDIA OVX system — before entering the vehicle, where, lastly, the NVIDIA DRIVE AGX platform processes sensor data through the model in real time.
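In outline, the cycle can be sketched as below, with a model deployed only after clearing a simulated validation bar. All function names and the threshold are hypothetical stand-ins; no actual NVIDIA platform APIs are shown.

```python
# Illustrative train -> simulate -> deploy loop (hypothetical stand-ins only).

def train(dataset: list) -> dict:
    # Stand-in for training on an AI supercomputer such as DGX.
    return {"weights": len(dataset)}

def simulate(model: dict, scenarios: list) -> float:
    # Stand-in for validation in simulation (e.g. Omniverse running on OVX);
    # returns the fraction of scenarios passed.
    return 0.95

def deploy(model: dict) -> None:
    # Stand-in for shipping the model to the in-vehicle compute platform.
    print("model deployed")

dataset, scenarios = ["drive_log_1"], ["scenario_1"]
model = train(dataset)
if simulate(model, scenarios) > 0.9:  # gate deployment on simulated validation
    deploy(model)
```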
Building an autonomous system to navigate safely in the complex physical world is extremely challenging. The system needs to perceive and understand its surrounding environment holistically, then make correct, safe decisions in a fraction of a second. This requires human-like situational awareness to handle potentially dangerous or rare scenarios.
AV software development has traditionally been based on a modular approach, with separate components for object detection and tracking, trajectory prediction, and path
planning and control.
End-to-end autonomous driving systems streamline this process using a unified model to take in sensor input and produce vehicle trajectories, helping avoid overcomplicated pipelines and providing a more holistic, data-driven approach to handle real-world scenarios.
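To make the contrast concrete, here is a minimal sketch: the modular stack composes three separately developed stages, while the end-to-end variant hands sensor input to a single learned model. Every function name and placeholder below is hypothetical, for illustration only.

```python
from typing import Callable, Dict, List, Tuple

Trajectory = List[Tuple[float, float]]  # (x, y) waypoints

# Traditional modular pipeline: separate, hand-engineered components.
def detect_and_track(sensors: Dict) -> List[Dict]:
    return [{"position": (10.0, 2.0)}]  # placeholder perception output

def predict_trajectories(objects: List[Dict]) -> List[Trajectory]:
    return [[obj["position"]] for obj in objects]  # placeholder forecasts

def plan_path(objects: List[Dict], futures: List[Trajectory]) -> Trajectory:
    return [(float(i), 0.0) for i in range(10)]  # placeholder planner

def modular_stack(sensors: Dict) -> Trajectory:
    # Errors can compound across the hand-offs between stages.
    objects = detect_and_track(sensors)
    futures = predict_trajectories(objects)
    return plan_path(objects, futures)

# End-to-end approach: one learned model maps sensor input to a trajectory.
def end_to_end(sensors: Dict, model: Callable[[Dict], Trajectory]) -> Trajectory:
    return model(sensors)
```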
Navigating the Grand Challenge
This year’s CVPR challenge asked participants to develop an end-to-end AV model, trained using the nuPlan dataset, to generate a driving trajectory from sensor data.
The models were submitted for testing inside the open-source NAVSIM simulator and were tasked with navigating thousands of scenarios they hadn’t encountered before. Model performance was scored on metrics for safety, passenger comfort, and deviation from the original recorded trajectory.
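As a rough illustration of how such a composite score could be computed, the sketch below gates everything on safety and blends comfort with closeness to the recorded trajectory. Term names, weights, and thresholds are assumptions for illustration, not NAVSIM’s actual formula.

```python
def driving_score(collided: bool, max_jerk: float, deviation_m: float,
                  jerk_limit: float = 2.0, deviation_scale: float = 5.0) -> float:
    """Composite open-loop score in [0, 1]; higher is better."""
    safety = 0.0 if collided else 1.0                          # hard safety gate
    comfort = min(1.0, jerk_limit / max(max_jerk, 1e-6))       # smoother is better
    closeness = max(0.0, 1.0 - deviation_m / deviation_scale)  # vs. recorded path
    return safety * (0.5 * comfort + 0.5 * closeness)

# Example: no collision, moderate jerk, 1 m deviation from the recorded path.
print(driving_score(collided=False, max_jerk=1.0, deviation_m=1.0))  # 0.9
```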
NVIDIA Research’s winning end-to-end model ingests camera and lidar data, as well as the vehicle’s trajectory history, to generate a safe, optimal vehicle path for the five seconds following sensor input.
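A minimal sketch of that input/output contract, assuming pre-extracted camera and lidar features, a short ego-history window, and 10 predicted waypoints spanning the five-second horizon (the 2 Hz waypoint rate and all dimensions are assumptions, not the actual model):

```python
import torch
import torch.nn as nn

class FiveSecondPlanner(nn.Module):
    def __init__(self, cam_dim: int = 256, lidar_dim: int = 256,
                 hist_len: int = 4, steps: int = 10):
        super().__init__()
        self.hist_encoder = nn.Linear(hist_len * 2, 64)  # past (x, y) waypoints
        self.fuse = nn.Sequential(
            nn.Linear(cam_dim + lidar_dim + 64, 256), nn.ReLU(),
            nn.Linear(256, steps * 2),  # 10 waypoints over 5 s (2 Hz assumed)
        )
        self.steps = steps

    def forward(self, cam_feat, lidar_feat, history):
        # cam_feat: (B, cam_dim); lidar_feat: (B, lidar_dim); history: (B, hist_len, 2)
        h = self.hist_encoder(history.flatten(1))
        fused = torch.cat([cam_feat, lidar_feat, h], dim=-1)
        return self.fuse(fused).view(-1, self.steps, 2)  # (B, steps, 2) path

planner = FiveSecondPlanner()
path = planner(torch.randn(1, 256), torch.randn(1, 256), torch.randn(1, 4, 2))
print(path.shape)  # torch.Size([1, 10, 2])
```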
The workflow NVIDIA researchers used to win the competition can be replicated in high-fidelity simulated environments with NVIDIA Omniverse Cloud Sensor RTX APIs. This means AV simulation developers can recreate the workflow in a physically accurate environment before testing their AVs in the real world. NVIDIA Omniverse Cloud Sensor RTX microservices will be available later this year.
In addition, NVIDIA ranked second for its submission to the CVPR Autonomous Grand Challenge for Driving with Language. NVIDIA’s approach connects vision language models
and autonomous driving systems, integrating the power of large language models to help make decisions and achieve generalisable, explainable driving behaviour.
Learn more at CVPR
More than 50 NVIDIA papers were accepted to this year’s CVPR, on topics spanning automotive, healthcare, robotics, and more. Over a dozen papers will cover NVIDIA
automotive-related research, including:
- Hydra-MDP: end-to-end multimodal planning with multi-target hydra-distillation
  - Winner of CVPR’s End-to-End Driving at Scale challenge
  - Read the NVIDIA technical blog
- Producing and leveraging online map uncertainty in trajectory prediction
  - CVPR best paper award finalist
- Driving everywhere with large language model policy adaptation
  - See DRIVE Labs: LLM-based road rules guide simplifies driving
- Is ego status all you need for open-loop end-to-end autonomous driving?
- Improving distant 3D object detection using 2D box supervision
- Dynamic LiDAR resimulation using compositional neural fields
- BEVNeXt: reviving dense BEV frameworks for 3D object detection
- PARA-Drive: parallelised architecture for real-time autonomous driving
Sanja Fidler, vice president of AI research at NVIDIA, will speak on vision language models at the CVPR Workshop on Autonomous Driving.