5G, autonomous systems and the revenge of real time and determinism!
Guest blog written by Michel Genard, Wind River.
If you look at the technology landscape through the lens of a cloud-first approach, you probably believe that effectively infinite compute, storage and bandwidth are available to you thanks to the cloud. In terms of scalability and elasticity, IT and most of the popular PaaS offerings have largely demonstrated that this is possible - just buy more compute, storage and services from your favourite cloud platform.
In the journey to cloudification, the concepts of latency, determinism and real time sometimes fade away, especially when the attitude is that a job gets done by simply lining up the needed cloud resources. Oversizing the resources is fine, and from a performance point of view, good enough is good enough. Right?
However, now come autonomous systems and 5G. Suddenly latency and determinism are back in the game. Why?
Take the example of an autonomous car - let’s say that you’ll have what amounts to a mini data centre in the trunk (ignore for a moment that you’d have to ruggedise this system to make it impervious to road conditions, heat and so on, in which case it may be best to move some compute to the cloud) and a bunch of highly intelligent Electronic Control Units (ECUs) under the hood. These ECUs need to make decisions and take specific actions such as braking, changing speed or changing gear.
The intelligence would primarily live in the trunk (i.e. the data centre), and this brings about two critical design constraints. First, you need to make sure that the time it takes for the 'brain' to decide on an action and command the ECU is not only short but appropriate for the expected action. Every millisecond counts, especially when you consider that a car driving at 60 miles per hour moves about 1 inch every millisecond. Would you feel safe if an autonomous car were under the control of something equivalent to an IT/data centre Linux-based system with a latency of 100ms (I’m being generous and won’t even count the latency of the sensor, the ECU and other compute time)?
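To make that concrete, here is a quick back-of-the-envelope check - a small Python sketch of my own (the constant and function names are just illustrative) - of how far a car travels while the control loop is still deciding:

```python
# How far does a car travel while the control system is still "thinking"?
MPH_TO_INCHES_PER_MS = 5280 * 12 / (3600 * 1000)  # miles per hour -> inches per millisecond

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Distance travelled, in inches, during a given control latency."""
    return speed_mph * MPH_TO_INCHES_PER_MS * latency_ms

# At 60 mph the car covers roughly 1 inch per millisecond...
print(f"{distance_during_latency(60, 1):.1f} inches in 1 ms")
# ...so a 100 ms decision loop means almost 9 feet of travel before anything happens.
print(f"{distance_during_latency(60, 100) / 12:.1f} feet in 100 ms")
```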
Secondly, you want to make sure that the car behaves consistently, time and time again. What the IT world often refers to as Quality of Service doesn’t emphasise the need for determinism or worst-case execution time; it merely means that the job will be done within a certain budget (which in cloud terms means compute and storage).
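As a purely illustrative sketch (the numbers below are invented for the example, not measurements), two systems can report the same average latency while only one of them has a bounded worst case - and it is the worst case that matters when the required action is 'brake now':

```python
# Illustrative only: two latency traces with similar averages but very
# different worst cases. The sample values are made up for this example.
import statistics

best_effort   = [2, 3, 2, 180, 2, 3, 2, 2]        # ms - occasional huge outlier
deterministic = [24, 25, 24, 26, 25, 24, 25, 25]  # ms - bounded jitter

for name, samples in (("best effort", best_effort), ("deterministic", deterministic)):
    print(f"{name:13s} mean = {statistics.mean(samples):5.1f} ms   worst = {max(samples):3d} ms")
```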
I hope you get my point - dropping an IT/data centre equivalent of a system into a car to take control of the vehicle (and of you) with a 'good-enough' approach is not going to make autonomous transportation successful at large scale. The good news is that a solution exists.
Autonomous vehicles and transportation can leverage proven technologies drawn from avionics equipment for extreme reliability, and from optimised networking for communication between devices, infrastructure and other cars. Each of these industries has solved real-time and reliability challenges. In our portfolio, our edge compute offering, Helix Virtualisation Platform, lets you run a lightweight Linux OS (for the compute and storage) side-by-side with the most widely deployed commercial, safe and secure RTOS, VxWorks (for all the control tasks).
Let’s now look at 5G. 5G is much more than just an update of 4G. It’s more than simply faster, cheaper (maybe!) and better, especially if you consider how an operator like Verizon is capturing the value of 5G. I want to double-click on two aspects in particular: latency and connected devices. 5G has an aggressive latency target of 1ms (aggressive, at least, if you are coming from the IT/data centre world). This is really the maximum you can accept if you want to manage any sort of critical infrastructure in real time - Industrial IoT, autonomous systems and all the other OT use cases.
Here, low latency is going to be critical for deployment at scale in use cases such as vRAN, augmented reality, virtual reality and autonomous cars. That will likely not happen by deploying IT-grade operating environments like OpenStack - which have been fine so far in the core and data centre of the network - otherwise 5G would likely fail from a scale and cost point of view. It is going to require significant enhancement of open source technology.
From a deployment perspective, let’s consider the sheer number of connected devices: 75 billion, more than 100x what we saw with 4G, with an expected lifespan of ten years or more and a required reliability of 99.999%. In practical terms, it means that many devices in the value chain (from the user to the IT/data centre) will not only be Intel-based but will likely be ARM-based as well. They will need to be low cost, low power and tailored for new use cases that demand low interrupt and packet latency.
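To put those figures in perspective, here is a quick back-of-the-envelope calculation (a Python sketch of my own, not from any Wind River material) of how little downtime 99.999% availability allows over a ten-year device lifespan:

```python
# What "five nines" availability permits over a ten-year device lifespan.
AVAILABILITY = 0.99999
MINUTES_PER_YEAR = 365.25 * 24 * 60
LIFESPAN_YEARS = 10

downtime_per_year = (1 - AVAILABILITY) * MINUTES_PER_YEAR
print(f"Allowed downtime: {downtime_per_year:.1f} minutes per year, "
      f"roughly {downtime_per_year * LIFESPAN_YEARS:.0f} minutes over {LIFESPAN_YEARS} years")
```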
Anticipating the needs of 5G, and to support both far-edge and near-edge topologies, Wind River has been investing in Linux-based technology that fully supports containers and OpenStack with enhanced real-time patches, distributed cloud (to scale) and high availability. If you want to know more, I recommend you read about our container technology and resources, our Titanium Cloud offering and our recent efforts with the O-RAN Alliance.
History has demonstrated that trends are cyclical. At the end of the last century the market view was 'latency and determinism are dead!' Nowadays, however, we can say 'long live latency and determinism!'
Courtesy of Wind River.