
AI-based object tracking with multiple high-end cameras

10th October 2024
Paige West

N.A.T. is presenting NATvision, its vision platform with integrated artificial intelligence (AI) for object detection and tracking, for the first time at Vision, the world's leading trade fair for machine vision.

The FPGA-based MicroTCA platform, developed to consolidate multi-camera applications into a single system, is built on AMD (formerly Xilinx) Zynq UltraScale+ MPSoC devices and supports AMD's versatile Deep Learning Processor Units (DPUs). The DPU integration enables developers to use convolutional neural network (CNN) architectures such as VGG, ResNet, GoogLeNet, YOLO, SSD, MobileNet and FPN for multi-camera object tracking. Combined with highly efficient parallel pre-processing of the video data in the FPGA fabric, this makes the overall system extremely fast and well suited to real-time, high-resolution applications.

Up to 48 directly connected 10GbE cameras are supported in a 19-inch, 2U form factor with up to 100GbE data throughput, alongside CPU boards for higher-level management and control functions. Typical real-time applications include bulk material sorting, inline inspection and research; the system is also well suited to slow-motion applications in chemistry, materials and process engineering, stress testing and sports analytics.
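
AMD's DPUs are commonly driven through the company's Vitis AI Runtime (VART). Purely as an illustration of how a compiled CNN such as a YOLO variant is dispatched to a DPU, the sketch below uses the standard VART Python flow; the model file name is a placeholder, pre- and post-processing are omitted, and NATvision's own SDK may well expose a different interface.

```python
# Illustrative sketch: running a compiled CNN on an AMD DPU via the
# Vitis AI Runtime (VART). "yolo.xmodel" is a placeholder for a model
# compiled for the target DPU (e.g. B4096).
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("yolo.xmodel")
# Pick the subgraph that was compiled for the DPU
dpu_subgraph = next(
    s for s in graph.get_root_subgraph().toposort_child_subgraph()
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)
runner = vart.Runner.create_runner(dpu_subgraph, "run")

in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# One input batch shaped as the model expects (int8 for quantised DPUs);
# real code would fill this with a pre-processed camera frame
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]
output_data = [np.empty(tuple(out_tensor.dims), dtype=np.int8)]

# Asynchronous dispatch: frames from several cameras can be queued
# while the FPGA fabric pre-processes the next batch
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
```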

With NATvision, developers and system integrators can consolidate multi-camera applications onto a single platform. Compared with conventional PC-based vision platforms, NATvision can integrate far more cameras, and thanks to FPGA-based image pre-processing and AI analysis it is also faster and more energy-efficient than CPU/GPU combinations. In addition, the FPGA logic can be flexibly adapted to individual processing tasks and different transmission protocols as required, so the solution can be developed continuously, making NATvision a secure long-term investment.

Live demos of multi-camera applications

The performance of the new real-time FPGA-based NATvision system will be demonstrated at Vision in two live demos that can be scaled to multi-camera applications. Among other things, the trajectory of a ball will be calculated and traced in real time.
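
N.A.T. has not published the demo's internals; as a minimal illustration of the underlying idea, a ball's trajectory can be recovered from per-frame detections by least-squares fitting. The sketch below assumes a tracker already supplies timestamped centroids (the sample values are invented).

```python
import numpy as np

# Hypothetical tracker output: timestamps (s) and ball centroids (px)
t = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
x = np.array([100., 112., 124., 136., 148.])
y = np.array([300., 288., 280., 276., 276.])

# Horizontal motion ~ linear, vertical motion ~ parabolic (gravity)
x_fit = np.polyfit(t, x, 1)   # x(t) = vx*t + x0
y_fit = np.polyfit(t, y, 2)   # y(t) = a*t^2 + vy*t + y0

# Predict where the ball will be one frame ahead (t = 0.05 s)
t_next = 0.05
x_pred = np.polyval(x_fit, t_next)
y_pred = np.polyval(y_fit, t_next)
print(f"Predicted position at t={t_next}s: ({x_pred:.1f}, {y_pred:.1f}) px")
```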

The technology in detail

The FPGA-based NATvision vision system from N.A.T., being presented for the first time at the Vision trade fair, can be equipped with one to twelve FPGA cards for up to 48 directly connected 10GbE cameras in a 19-inch, 2U form factor, depending on requirements and application. The AMD Zynq UltraScale+ MPSoC devices scale from 103,000 to 1.143 million logic cells, and all AMD Deep Learning Processor Unit (DPU) architectures from B512 to B4096 are supported. Because the FPGA cards are modular, several can be operated in parallel, making the AI performance highly scalable: the B4096 DPU, for example, performs 4,096 operations per clock cycle and, clocked at 300MHz, achieves up to 1.2 TOPS. N.A.T.'s in-house GigE Vision firmware package for the FPGA cards is also modular, so other camera interfaces such as CoaXPress can easily be integrated in place of GigE Vision in the future. The sample applications supplied with NATvision and the GenICam-compatible SDK allow users to commission the system quickly and integrate it into their own software routines.
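
The headline figure follows directly from the DPU's peak rate, as the short calculation below confirms.

```python
# Peak-throughput arithmetic for the B4096 DPU, using the figures above
ops_per_cycle = 4096           # B4096: 4,096 operations per clock cycle
clock_hz = 300e6               # 300MHz DPU clock
tops = ops_per_cycle * clock_hz / 1e12
print(f"Peak: {tops:.2f} TOPS")  # ~1.23, i.e. the quoted 'up to 1.2 TOPS'
```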

Visit N.A.T. from 8th to 10th October at Vision in Hall 10, Stand 10H46.
