AV perception engine launched for reliable autonomous driving
VAYAVISION has announced the release of VAYADrive 2.0, an AV perception software engine that fuses raw sensor data with AI tools to create an accurate 3D environmental model of the area around the self-driving vehicle.
VAYADrive 2.0 breaks new ground in several categories of AV environmental perception - raw data fusion, object detection, classification, SLAM, and movement tracking - providing crucial information about dynamic driving environments, enabling safer, more reliable autonomous driving, and making better use of cost-effective sensor technologies.
“This launch marks the beginning of a new era in autonomous vehicles, bringing to market an AV perception software based on raw data fusion,” said Ronny Cohen, CEO and Co-founder of VAYAVISION. “VAYADrive 2.0 increases the safety and affordability of self-driving vehicles and provides OEMs and T1s with the required level of autonomy for the mass distribution of autonomous vehicles.”
The VAYADrive 2.0 software solution combines AI, analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware. The software is compatible with a wide range of cameras, LiDARs, and radars.
VAYADrive 2.0 addresses a key challenge facing the industry: the detection of 'unexpected' objects. Roads are full of 'unexpected' objects that are absent from training data sets, even sets captured over millions of kilometres of driving. As a result, systems that rely mainly on deep neural networks can fail to detect the 'unexpected'.
No single type of sensor is enough to detect objects: cameras do not perceive depth, and distance sensors such as LiDARs and radars have very low resolution. VAYADrive 2.0 upsamples the sparse samples from distance sensors and assigns distance information to every pixel in the high-resolution camera image.
This gives the autonomous vehicle crucial information about an object's size and shape, lets it separate out even small obstacles, and accurately delineates the shapes of vehicles, people, and other objects on the road.
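The announcement does not detail how this upsampling is performed. As a rough illustration of the general idea only, the sketch below projects sparse LiDAR points into a camera image and interpolates them into a dense per-pixel depth map; the camera intrinsics, the random point cloud, and the nearest-neighbour interpolation are hypothetical placeholders, not VAYAVISION's implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def upsample_depth(lidar_points, K, image_shape):
    """Project sparse 3D points (camera frame) into the image and
    interpolate them into a dense per-pixel depth map."""
    # Keep only points in front of the camera.
    pts = lidar_points[lidar_points[:, 2] > 0]

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    u = K[0, 0] * pts[:, 0] / pts[:, 2] + K[0, 2]
    v = K[1, 1] * pts[:, 1] / pts[:, 2] + K[1, 2]
    depth = pts[:, 2]

    # Discard projections that fall outside the image.
    h, w = image_shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, depth = u[inside], v[inside], depth[inside]

    # Interpolate the sparse samples onto the full pixel grid
    # (nearest-neighbour here purely for illustration; production
    # systems typically use image-guided upsampling).
    grid_v, grid_u = np.mgrid[0:h, 0:w]
    return griddata((v, u), depth, (grid_v, grid_u), method="nearest")

if __name__ == "__main__":
    K = np.array([[700.0, 0.0, 320.0],   # hypothetical intrinsics
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    points = np.random.uniform([-10, -2, 1], [10, 2, 50], size=(2000, 3))
    depth_map = upsample_depth(points, K, image_shape=(480, 640))
    print(depth_map.shape)  # (480, 640): one distance estimate per pixel
```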
“VAYADrive 2.0’s raw data fusion architecture offers automotive players a viable alternative to inadequate ‘object fusion’ models that are common in the market,” said Youval Nehmadi, CTO and Co-founder of VAYAVISION.
“This is critical to increasing detection accuracy and decreasing the high rate of false alarms that prevent self-driving vehicles from reaching the next level of autonomy.”
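For readers unfamiliar with the distinction, the schematic sketch below contrasts object-level fusion, where each sensor's detections are merged only after the fact, with raw data fusion, where detection runs once on a combined per-pixel representation. The types and functions are illustrative assumptions only; the press release does not describe VAYADrive 2.0's internal architecture.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Detection:
    label: str
    box: tuple  # (x, y, w, h) in image coordinates

def detect_objects(rgbd: np.ndarray) -> List[Detection]:
    # Placeholder detector, for illustration only.
    return [Detection("vehicle", (100, 120, 80, 60))]

# "Object fusion": each sensor pipeline detects on its own, and only the
# resulting object lists are merged; evidence discarded by either detector
# cannot be recovered at this stage.
def object_level_fusion(camera_dets: List[Detection],
                        lidar_dets: List[Detection]) -> List[Detection]:
    return camera_dets + lidar_dets

# "Raw data fusion": sensor measurements are combined first (e.g. the
# per-pixel depth map from the previous sketch), and a single detector
# then sees all the evidence at once.
def raw_data_fusion(image: np.ndarray, dense_depth: np.ndarray) -> List[Detection]:
    rgbd = np.dstack([image, dense_depth])  # hypothetical RGB-D representation
    return detect_objects(rgbd)
```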
VAYAVISION will be showing its solution at CES (the Consumer Electronics Show) in Las Vegas from 8th to 11th January 2019, at Booth 301 of the OurCrowd Pavilion, Westgate Paradise Centre.