Will the growth in AI finally get autonomous cars on the road?
Automated driving might be one of the most anticipated developments in the transport industry. From autonomous consumer cars to autonomous taxis, businesses and consumers alike are eager for their rollout.
Yet, for a technology that has been touted and seriously trialled since as early as 2017, when Waymo announced that it had begun testing driverless cars without a safety driver in the driver’s seat, little progress seems to have been made towards a wider rollout.
Aside from regulation, company setbacks and wider economic headwinds, one of the key concerns contributing to the delays is safety. Indeed, protesters prompted by safety concerns have been taking action against Cruise autonomous cars to stop them driving, ahead of a vote on whether the company can charge people for rides in San Francisco.
But could something turn the tide and win not only regulatory but also popular approval for autonomous cars hitting the tarmac?
Yes. The major leap in AI, as evidenced by ChatGPT’s explosion and Nvidia ramping up supply of its AI chips, suggests that it might not be long before autonomous cars get the boost they need to be both fully safe and functional.
Autonomous driving examined
Although the name is pretty self-explanatory, the systems behind it require a little more examination.
Fully autonomous driving refers to a vehicle navigating from point A to point B without any human intervention. AI is therefore already an integral part of the process, as it must make sense of, spot patterns in, and make predictions based on all of the data it receives through the car’s multiple sensors.
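To make that loop concrete, here is a minimal, purely illustrative sketch in Python. The function names, the Detection class and the dummy values are all hypothetical; it mirrors the general sense-perceive-predict-act cycle described above, not any particular manufacturer’s software stack.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str               # e.g. "pedestrian", "vehicle", "cyclist"
    distance_m: float        # estimated distance from the ego vehicle
    closing_speed_ms: float  # metres per second; positive if approaching

def fuse_sensors(camera_frame, lidar_points, radar_tracks) -> List[Detection]:
    """Placeholder perception step: in a real stack, a learned model turns raw
    camera/lidar/radar input into a list of detected objects."""
    return [Detection("pedestrian", 8.0, 1.2)]  # dummy output for illustration

def predict(detections: List[Detection], horizon_s: float = 1.0) -> List[Detection]:
    """Placeholder prediction step: project each object forward in time."""
    return [
        Detection(d.label, d.distance_m - d.closing_speed_ms * horizon_s, d.closing_speed_ms)
        for d in detections
    ]

def plan(predicted: List[Detection]) -> str:
    """Placeholder planning step: turn predictions into a driving decision."""
    if any(d.label == "pedestrian" and d.distance_m < 10 for d in predicted):
        return "brake"
    return "continue"

# The cycle the article describes: sense -> perceive -> predict -> act,
# repeated many times per second.
action = plan(predict(fuse_sensors(camera_frame=None, lidar_points=None, radar_tracks=None)))
print(action)  # prints "brake" for the dummy pedestrian eight metres away
```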
AI’s use in autonomous cars already allows some degree of autonomy; however, it is progressing incrementally through stages. “The current stage L2+/L3 allows automated vehicles to operate in an eyes-on/hands-off mode where the driver is required to intervene in certain situations,” RK Anand, Founder and Chief Product Officer at Recogni, explains to Electronic Specifier. “AI is the most efficient way of perception processing for autonomous mobility.”
But if AI is currently good enough to allow a car to operate with just the driver remaining aware, why can it not operate entirely independently?
AI’s current capabilities in autonomous driving
“As we progress from L2+ to higher levels of autonomy, the need for AI models based on an abundance of data that depict a realistic view of the environment is ever more crucial,” says Anand.
Autonomous vehicles have already shown they can operate effectively in contained areas or circuits, anything from company campuses to a bus route. Indeed, this is where current forays into truly autonomous, eyes-off/hands-off driving could first be rolled out. “Areas that are geo-fenced or have a set path, such as a parking lot to an airport terminal loop, delivery and transportation on college campuses, and long haul delivery, are expected to be the first beneficiaries of autonomous vehicles,” says Anand.
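As a simple illustration of what ‘geo-fenced’ means in practice, the sketch below (Python, with made-up coordinates and a hypothetical operating area) checks whether a vehicle’s position falls inside an approved polygon before allowing eyes-off operation. Real deployments rely on far richer map, sensor and regulatory data.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude)

def inside_geofence(position: Point, fence: List[Point]) -> bool:
    """Ray-casting point-in-polygon test: is the vehicle inside the approved area?"""
    lat, lon = position
    inside = False
    n = len(fence)
    for i in range(n):
        lat1, lon1 = fence[i]
        lat2, lon2 = fence[(i + 1) % n]
        # Toggle whenever a horizontal ray from the position crosses an edge
        if (lon1 > lon) != (lon2 > lon):
            lat_cross = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < lat_cross:
                inside = not inside
    return inside

# Hypothetical operating area (coordinates are made up)
campus_fence = [(33.42, -111.94), (33.42, -111.92), (33.40, -111.92), (33.40, -111.94)]

vehicle_position = (33.41, -111.93)
mode = "eyes-off" if inside_geofence(vehicle_position, campus_fence) else "driver-supervised"
print(mode)  # "eyes-off" inside the fence, "driver-supervised" outside
```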
In Phoenix, Arizona, a Waymo taxi is already picking people up and driving them around certain neighbourhoods. But why don’t these vehicles need a pair of eyes observing, like the Tesla cars running at L2+/L3?
Because things become difficult when autonomous cars leave the circuit they were trained on and try to apply that knowledge in a new setting. “Close to perfect understanding of the environment surrounding a vehicle at all times is a prerequisite for fully automated driving,” explains Anand.
Training a model to navigate safely anywhere, without stewardship, is a large task. “Currently, there are a number of hurdles for AI in autonomous vehicles,” says Anand. “There still exist some issues and limitations in the sensing instruments, the computations, and the algorithms.”
What AI needs to develop to help autonomous driving
“Understanding the environment around the vehicle in real-time is fundamental for realising safe automated driving,” explains Anand. “Safety requires efficient processing of information with AI models that deliver a high degree of accuracy. Such systems operate continuously and logically without fatigue, resulting in fewer accidents and safer driving.”
With the race for AI on, and with developments in algorithm training for better decision-making, in hardware that delivers greater processing power, in edge computing capacity for more rapid and localised decisions, and in models improved for lower compute bandwidth and higher accuracy, the road to error-free self-driving begins to be better paved.
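To give a flavour of one of those levers, the sketch below (Python with NumPy, purely illustrative) shows the kind of post-training weight quantisation used to cut the memory and compute bandwidth a perception model needs on in-vehicle edge hardware. Production toolchains are considerably more sophisticated, and preserving accuracy at lower precision is part of the trade-off described here.

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Map float32 weights onto 8-bit integers plus a single scale factor.
    This cuts memory traffic roughly 4x, which matters on edge hardware."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights for inference."""
    return q.astype(np.float32) * scale

# Illustrative layer weights (random stand-in for a trained perception layer)
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantise_int8(w)
w_approx = dequantise(q, scale)

print(f"Memory: {w.nbytes / 1024:.0f} KiB -> {q.nbytes / 1024:.0f} KiB")
print(f"Mean absolute quantisation error: {np.mean(np.abs(w - w_approx)):.4f}")
```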
As Anand puts it: “As we progress from L2+ to higher levels of autonomy, the need for AI models based on an abundance of data that depict a realistic view of the environment is ever more crucial. These models will facilitate the near-perfect perception that is required for autonomous vehicles to safely navigate on their own.”