
How fast will machine learning reach us?

19th July 2021
Alex Lynn

The July 2021 issue of IEEE/CAA Journal of Automatica Sinica features six articles that showcase the potential of machine learning in its various forms. The applications described in the studies range from advanced driver assistance systems and computer vision to image processing and collaborative robotics.

Automation has reshaped both the way we work and the way we tackle problems. Thanks to the progress made in robotics and artificial intelligence (AI) over the last few years, it is now possible to leave many tasks in the hands of machines and algorithms.

To highlight these advances, the IEEE and the Chinese Association of Automation (CAA) joined forces to publish the IEEE/CAA Journal of Automatica Sinica. Ranked by CiteScore among the top seven percent of journals in artificial intelligence, control/systems engineering, and information systems, it carries high-quality papers on all areas of automation science and engineering. The July 2021 issue features six articles covering innovative applications of AI that can make our lives easier.

The first article, authored by researchers from Virginia Tech Mechanical Engineering Department ASIM Lab, USA, delves into an interesting mixture of topics: intelligent cars, machine learning, and electroencephalography (EEG). Self-driving cars have been in the spotlight for a while. So how does EEG fit in this picture?

Sometimes drivers become distracted or fatigued without realising it, increasing the risk of a traffic accident. Fortunately, cars can now be equipped with AI systems that sense and analyse the driver’s EEG signals, constantly monitoring their state and issuing warnings when necessary. The article reviews the latest EEG-based driver state estimation techniques, and the authors also provide detailed tutorials on the most popular EEG decoding methods and neural network models to help researchers become familiar with the field. The authors explained: ‘By implementing these EEG-based methods, drivers’ state can be estimated more accurately, improving road safety’.
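One of the classic hand-crafted features behind such driver-state estimators is EEG band power: drowsiness, for example, is often associated with rising alpha-band (8–12 Hz) activity. The sketch below is purely illustrative, not taken from the article; the sampling rate and the synthetic signal are assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power of an EEG signal in the frequency band [lo, hi] Hz,
    computed from the periodogram of the signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

fs = 256                              # assumed sampling rate (Hz)
t = np.arange(fs) / fs                # one second of samples
eeg = np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz alpha rhythm
alpha = band_power(eeg, fs, 8, 12)    # alpha band power
beta = band_power(eeg, fs, 13, 30)    # beta band power
```

For this synthetic signal the alpha power dominates; a real system would feed such features (or raw signals) into the neural network models the tutorial covers.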

Next, a research team from Sichuan University, China, proposes a new approach to image captioning, a task that remains difficult for computers. Even though computers can now aptly recognise objects in a given image, describing the scene based solely on those objects is tricky. To tackle this, the researchers developed a global attention-based network that accurately estimates the probability of a given region in the image being mentioned in the caption.

This was achieved by analysing the similarities between local visual features and global caption features. Using an attention module, the model can attend more accurately to the most important regions of the image and produce a better caption. Automatic image captioning is a great tool for indexing large image datasets and helping the visually impaired.
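The core of such an attention module can be sketched in a few lines: score each region by its similarity to a global feature, then normalise the scores with a softmax. This is a generic minimal sketch of attention, not the authors' exact network; the feature vectors are toy values.

```python
import numpy as np

def region_attention(local_feats, global_feat):
    """Weight image regions by similarity to a global feature.

    local_feats: (num_regions, dim) array of region features
    global_feat: (dim,) global feature vector
    Returns attention weights that sum to 1.
    """
    scores = local_feats @ global_feat        # dot-product similarity
    scores -= scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights

# Toy example: three regions with 4-dimensional features
local = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0]])
global_f = np.array([1.0, 0.0, 0.0, 0.0])
w = region_attention(local, global_f)
attended = w @ local    # attention-weighted summary of the regions
```

Regions most similar to the global feature receive the largest weights, so the caption generator focuses on the parts of the image most likely to be mentioned.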

In the third article, scientists from Xidian University, China, bring collaborative robotics to the field of top-view surveillance. More specifically, they propose a detailed framework in which deep learning is applied to top-view computer vision, in contrast to most studies, which focus on frontal-view images. The framework uses a smart robot camera with an embedded visual processing unit running deep-learning algorithms for the detection and tracking of multiple objects, essential tasks in applications ranging from crime prevention to crowd and behaviour analysis.

In the fourth article, researchers from Guiling University, China, propose a new approach to producing super-resolution images based on features that a neural network can extract and use. Their method, called the weighted multi-scale residual network, leverages both global and local image features from different scales to reconstruct high-quality images with state-of-the-art performance. The authors say: ‘Current imaging devices certainly cannot provide enough computing resources, and thus, we designed a fast and lightweight architecture to mitigate this problem.’
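The essential idea, weighting features extracted at different scales and adding the input back through a residual connection, can be sketched abstractly. This is a simplified stand-in for the authors' network: the branch functions and fusion weights below are placeholders, not their actual layers.

```python
import numpy as np

def weighted_multiscale_residual(x, branch_small, branch_large, w):
    """Fuse features from two scales with learned weights, then add
    the input back as a residual connection.

    x: (dim,) input feature vector
    branch_small, branch_large: callables standing in for feature
        extractors with small/large receptive fields
    w: (2,) fusion weights
    """
    fused = w[0] * branch_small(x) + w[1] * branch_large(x)
    return x + fused    # residual: the original signal is preserved

# Toy stand-ins for the two convolutional branches
small = lambda v: np.tanh(v)          # "small receptive field" branch
large = lambda v: np.tanh(0.5 * v)    # "large receptive field" branch
x = np.array([0.2, -0.4, 1.0])
y = weighted_multiscale_residual(x, small, large, np.array([0.6, 0.4]))
```

Because the residual path carries the input through unchanged, the branches only need to learn the correction, which helps keep the architecture fast and lightweight.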

The fifth article, by researchers from the University of New South Wales, Australia, covers the complex topic of transparency and trust in human–swarm teaming. According to the authors, explainability, interpretability and predictability are distinct yet overlapping concepts in artificial intelligence that are subordinate to transparency. Drawing on the literature, they propose an architecture to ensure trustworthy collaboration between humans and machine swarms, going beyond the usual master–slave paradigm. The researchers conclude: ‘Human-swarm teams will require increased levels of transparency before we can begin to leverage the opportunity that these systems present.’

Next, scientists from the University of Electronic Science and Technology of China showcase yet another use of deep neural networks in computer vision: video anomaly detection. Existing models for automatically detecting anomalies in video footage try to predict or reconstruct a frame based on previous input and, by calculating the reconstruction error, determine whether anything seems out of place.

The problem with this approach is that abnormal frames are sometimes reconstructed well, leading to false negatives. The scientists tackled this problem by developing a cognitive memory-augmented network that imitates the way in which humans remember normal samples and uses both reconstruction error and calculated novelty scores to detect anomalies in videos. With verified state-of-the-art performance, the network can be readily applied in surveillance tasks, such as accident and public safety monitoring.
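The scoring idea, combining reconstruction error with a novelty measure against remembered normal patterns, can be illustrated in miniature. This is a hedged sketch of the general principle, not the authors' network: the feature vectors, memory contents, and the weighting `alpha` are all assumptions.

```python
import numpy as np

def anomaly_score(frame, predicted, memory, alpha=0.5):
    """Score a frame using BOTH how badly it was reconstructed and
    how far it sits from remembered 'normal' patterns.

    frame, predicted: (dim,) actual and reconstructed frame features
    memory: (n_items, dim) stored prototypes of normal frames
    alpha: weighting between the two cues (assumed value)
    """
    recon_err = np.mean((frame - predicted) ** 2)
    # Novelty: distance to the closest remembered normal pattern
    novelty = np.min(np.linalg.norm(memory - frame, axis=1))
    return alpha * recon_err + (1 - alpha) * novelty

memory = np.array([[0.0, 0.0], [1.0, 1.0]])   # "normal" prototypes
normal = anomaly_score(np.array([0.1, 0.1]), np.array([0.1, 0.1]), memory)
odd = anomaly_score(np.array([5.0, -3.0]), np.array([0.0, 0.0]), memory)
```

The key point is that even a frame the model reconstructs well can still score as anomalous through the novelty term, which is exactly the failure mode of reconstruction-only approaches.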

We are all very likely to witness artificial intelligence becoming pivotal in many real-life applications soon. Keep up with the times by checking out the July 2021 issue of IEEE/CAA Journal of Automatica Sinica.
