Make your existing NVIDIA Jetson Orin devices faster with Super Mode
NVIDIA has released an exciting update for its existing Jetson Orin product line called Super Mode, which boosts the internal clocks of the NVIDIA Jetson Orin Nano and Orin NX to further increase AI performance and memory throughput.
Fig 1: Details of Jetson Commercial Modules Performance with Super Mode (Source: NVIDIA)
Users can expect up to a 1.7x improvement in their AI workloads (depending on the AI model and other system conditions) on their existing systems with just a software update. Updating to JetPack 6.1 rev 1 allows users to set the MAXN power mode on Orin Nano and Orin NX devices, as in the sketch below.
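A minimal sketch of switching to MAXN from a script, using the standard nvpmodel tool that ships with JetPack. The mode index used below is an assumption for the Orin Nano dev kit; verify it against /etc/nvpmodel.conf on your device before running.

```python
# Sketch: set the MAXN power mode via nvpmodel (requires sudo).
# ASSUMPTION: the MAXN mode index is 2 on the Orin Nano dev kit;
# confirm the correct index in /etc/nvpmodel.conf for your module.
import subprocess

MAXN_MODE_ID = 2  # assumed index; check /etc/nvpmodel.conf

def current_power_mode() -> str:
    """Return the output of `nvpmodel -q` (current mode name and ID)."""
    return subprocess.run(
        ["sudo", "nvpmodel", "-q"],
        capture_output=True, text=True, check=True,
    ).stdout

def set_maxn() -> None:
    """Switch the device to the MAXN power mode."""
    subprocess.run(["sudo", "nvpmodel", "-m", str(MAXN_MODE_ID)], check=True)

if __name__ == "__main__":
    print("Before:", current_power_mode())
    set_maxn()
    print("After:", current_power_mode())
```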
Fig 2: Performance Improvement of NanoOwl ViT Model with Super Mode On (Source: NVIDIA)
e-con Systems ran a quick benchmark of the possible performance gain, using a small test setup running the YOLOv8n object detection model on an Orin Nano under JetPack 5.1.2 versus JetPack 6.1 with Super Mode enabled. The test results will be updated with more details later.
These are the configuration details of test case one:
- Device: NVIDIA Orin Nano dev kit
- OS: JetPack 5.1.2 (L4T 35.4.1)
- Power mode: 15W
- Demo app: object detection using the YOLOv8n model
- Model input resolution: 384 × 640
- Camera used: See3CAM_CU81 (8MP HDR camera) running at 1280 × 720
This is what was observed:
- Average inference time: 21.3ms
- Minimum inference time: 19.2ms
- Maximum inference time: 23.1ms
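For reference, per-frame timings like these can be collected with a small harness along the lines of the sketch below. It assumes the Ultralytics YOLOv8 Python package and an OpenCV-readable camera at index 0; it is an illustration, not e-con Systems' actual test code.

```python
# Illustrative timing harness (not e-con Systems' actual test code).
# Assumes: pip install ultralytics opencv-python, and the See3CAM_CU81
# enumerating as a standard UVC camera at index 0.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # YOLOv8n weights
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

times_ms = []
for _ in range(200):                       # sample a couple hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    # A 1280x720 frame letterboxed to imgsz=640 yields the 384x640 input
    # used in these tests; results[0].speed reports stage times in ms.
    result = model(frame, imgsz=640, verbose=False)[0]
    times_ms.append(result.speed["inference"])

cap.release()
print(f"avg {sum(times_ms)/len(times_ms):.1f} ms  "
      f"min {min(times_ms):.1f} ms  max {max(times_ms):.1f} ms")
```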
Fig 3: Screenshot Showing JTOP and Camera Preview with YOLO Object Detection Running in Real Time
Next, e-con Systems updated the same system to JetPack 6.1 rev 1 and modified the power mode as follows:
- Device: NVIDIA Orin Nano dev kit
- OS: JetPack 6.1 (with Super Mode on)
- Power mode: MAXN
- Demo app: object detection using YOLOv8n model
- Model input resolution: 384 × 640
- Camera used: See3CAM_CU81 running at 1280 × 720
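One quick way to confirm that the new power mode is actually active is the jetson-stats package, the library behind the jtop tool shown in the screenshots. A short sketch follows; attribute and key names may vary between jetson-stats releases.

```python
# Sketch: confirm the active power mode with jetson-stats
# (pip install jetson-stats), the library behind jtop.
# Key names may vary slightly between jetson-stats releases.
from jtop import jtop

with jtop() as jetson:
    if jetson.ok():
        print("Power mode:", jetson.nvpmodel)        # expect MAXN
        print("GPU load  :", jetson.stats.get("GPU"), "%")
```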
These were the observations with Super Mode enabled, on a more complex scene:
- Average inference time: 18.3ms
- Minimum inference time: 17.2ms
- Maximum inference time: 19.3ms
That is about a 3 ms (roughly 14%) reduction in average inference time, even on a more complex scene. All of these results were obtained without any other optimisations, with only Super Mode turned on. With further optimisation techniques such as TensorRT, even more performance can be extracted from the same system, as sketched below.
Fig 4: Screenshot with JTOP, Inference Time, and Camera Preview with YOLO Running in Real Time with Super Mode On
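As a sketch of that TensorRT path: the Ultralytics package can export YOLOv8n to a TensorRT engine directly. The example below uses the Ultralytics export API; the exact speedup is workload-dependent, and none of the figures above used this path.

```python
# Sketch: export YOLOv8n to a TensorRT engine with the Ultralytics
# export API and run inference from the engine. Speedups depend on the
# model and platform; the results above did not use this optimisation.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="engine", imgsz=640, half=True)  # writes yolov8n.engine (FP16)

trt_model = YOLO("yolov8n.engine")                   # reload the TensorRT engine
result = trt_model("sample.jpg", verbose=False)[0]   # "sample.jpg" is a placeholder
print(result.speed)                                  # per-stage times in milliseconds
```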
With the increased memory bandwidth and AI compute of existing kits, users can process more frames per second from their existing camera setups and systems.
Disclaimer: These are preliminary test results. e-con Systems will update them later with more comprehensive details.