The future of artificial intelligence

1st November 2018
Alex Lynn

Reviewing industry analyst and media pundit predictions confirms that many expected 2017 to be a significant year for artificial intelligence (AI). Forbes, Deloitte, Gartner, Accenture, Digital Trends and TechRadar all concurred, listing AI and machine learning among tech trends to watch in 2017. 

By Mark Patrick, Mouser Electronics

Articles proclaiming it as the year of AI ran in journals from Fortune to The Guardian, while US newspaper The New York Times asserted that ‘…machine learning is poised to reinvent computing itself’. However, since AI has been around for more than half a century, and the tradition of getting machines to do the work of humans has been well established since the industrial revolution, the question is: why now?

Despite common perceptions, AI hasn’t become an emergent technology trend overnight. In reality, concurrent advances in theoretical understanding, access to vast amounts of data and computational power, and a focus on solving specific problems have all helped to fuel a resurgence in AI techniques. Indeed, AI has been developing, evolving and embedding itself in our everyday lives since the turn of the century, although a perfect storm of factors has seen things accelerate significantly in recent years.

From strong AI to weak AI

Back in the mid-1950s, pioneering AI researchers were convinced that artificial general intelligence (human-level intelligence and cognition, known as ‘strong’ or ‘full’ AI) was possible and would exist within a few decades. However, it became apparent that researchers had grossly underestimated the difficulty of the task. 

Recent successes have been achieved by addressing particular problems that needed solving. These ‘applied’ or ‘weak’ AI systems, utilising techniques such as neural networks, computer vision and machine learning, have been widely adopted. 

Early AI implementations often focused on enabling computers to compete in strategic games, such as draughts (checkers) or chess - based on the assumption that only a super-intelligent machine could beat a highly proficient human player. Over time, however, as computational power increased and AI techniques developed, AI proved able to master such games. 

In 1994, Chinook won a draughts match against multiple world champion Marion Tinsley. In 2007, the game was fully solved, meaning the best possible result any opponent can achieve against Chinook is a draw. In 1996, IBM’s Deep Blue beat reigning world champion Garry Kasparov in a single chess game under standard time controls, before winning a full match against him in 1997.

Combining neural networks and deep learning - AlphaGo

Following these triumphs, AI researchers targeted Go. Originating in China over 3,000 years ago, Go is a game of profound complexity, with an astonishing 10^170 possible board configurations - that’s more than the number of atoms in the known universe. Its far larger branching factor makes previous ‘brute force’ methods (constructing search trees covering all possible positions) prohibitively difficult to use.
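
To put that difficulty in context, the short Python sketch below estimates the size of a full game tree from rough, commonly quoted figures for branching factor and game length (the numbers are approximations used for illustration, not measurements):

```python
import math

# Rough illustration of why exhaustive 'brute force' search trees break down for Go.
# Branching factors and game lengths below are commonly quoted approximations.

def tree_positions_log10(branching_factor: int, depth: int) -> float:
    """log10 of the approximate number of positions in a full search tree."""
    return depth * math.log10(branching_factor)

print(f"Chess: ~10^{tree_positions_log10(35, 80):.0f} positions")    # ~35 moves per position, ~80 plies
print(f"Go:    ~10^{tree_positions_log10(250, 150):.0f} positions")  # ~250 moves per position, ~150 moves
```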

DeepMind Technologies, acquired by Google in 2014, began a research project to test how well neural networks using deep learning could compete at Go. Researchers exposed AlphaGo to large numbers of games to help it develop an understanding of the game’s nuances. They then had it play against itself thousands of times, incrementally improving by learning from its mistakes through a process known as reinforcement learning.
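
To give a flavour of self-play, the sketch below shows a deliberately tiny tabular agent learning the simple game of Nim purely by playing against itself. It is not AlphaGo’s actual algorithm - which combines deep policy and value networks with Monte Carlo tree search - and all of the parameters are illustrative:

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning on Nim: players alternately take 1-3 stones
# and whoever takes the last stone wins. A single Q-table is shared by both sides.
Q = defaultdict(float)            # Q[(stones_left, action)] -> value for the player to move
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000

def legal_actions(stones):
    return list(range(1, min(3, stones) + 1))

def choose(stones):
    """Epsilon-greedy action selection for the player to move."""
    if random.random() < EPSILON:
        return random.choice(legal_actions(stones))
    return max(legal_actions(stones), key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones = 10
    while stones > 0:
        action = choose(stones)
        remaining = stones - action
        if remaining == 0:
            target = 1.0                      # taking the last stone wins
        else:
            # The opponent moves next, so our value is minus their best value.
            target = -max(Q[(remaining, a)] for a in legal_actions(remaining))
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

# With enough self-play the policy should recover the known optimal strategy:
# take (stones % 4) whenever that is non-zero.
for stones in range(1, 11):
    best = max(legal_actions(stones), key=lambda a: Q[(stones, a)])
    print(f"{stones} stones left -> take {best}")
```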

In 2015, AlphaGo became the first computer program to beat a professional Go player (three-time European champion Fan Hui) without handicaps. In 2016, AlphaGo went on to defeat legendary 18-time world champion Lee Sedol, earning an honorary nine dan professional ranking - the highest possible.

The AI effect

Achievements long considered milestones that could signify the realisation of true AI are, once reached, each downgraded to ‘merely computation’, and therefore not true AI. This phenomenon, written about by Pamela McCorduck (author of several books on the history of AI) and Kevin Kelly (founding executive editor of Wired magazine), is known as ‘the AI effect’ and tends to negatively skew how AI is perceived.

When a computer such as Deep Blue can beat human chess masters but do nothing else, it is hard to reconcile this with our expectation of a super-intelligent AI entity. As with magic tricks, once we understand how they are done, there is a tendency to ignore the skill and work involved and dismiss the achievement.

At the same time, techniques developed and matured in the course of AI research, such as OCR (optical character recognition), NLP (natural language processing) or image recognition, are integrated into everyday applications, without being called AI. As AI researchers note, ‘true AI’ then gets reframed to mean ‘whatever hasn’t been done yet’. 

Another way of thinking about it is that tasks we previously thought would require strong AI turn out to be possible with weak AI. Hence, people fail to realise how AI increasingly permeates our everyday lives, from facial recognition in photo apps and smartphones, through recommendation engines in content delivery networks, such as Netflix, YouTube, iTunes or Amazon, to customer service chatbots and virtual assistants, like Siri and Alexa.

Where next for AI? 

In her 2017 article for the World Economic Forum, technology entrepreneur and investor Sandhya Venkatachalam argued that we are on the cusp of a completely new computing paradigm - one where machines are starting to understand and anticipate what we want to do and, in the future, will do it for us.

AI today doesn’t look much like full AI; it works within narrow, defined use cases. Virtual assistants can understand human language, access and search vast volumes of data, and respond to deliver relevant answers or actions. They can’t, however, clean your house or drive your car. Likewise, self-driving cars can’t learn chess or cook you a meal. These types of AI do one or two things humans already do fairly well - but they save us time and could end up doing those things better than most humans.

Venkatachalam outlines the preconditions that have enabled the acceleration of AI over the past five years. Sensors, processors and connectivity are being added to everything - and as the sources, types and amounts of data grow exponentially, “data is becoming the new oil.” Powered by that data, “machine learning is becoming the new combustion engine” - taking raw, unrefined data, applying algorithms and mathematical models to discover the implicit patterns within it, and then using those patterns to figure out whether new data points fit predicted future outcomes.
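
As a minimal sketch of that refining loop - assuming nothing more than scikit-learn and synthetic data standing in for real sensor feeds - a model is fitted to discover patterns, then asked how well new points fit its predictions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 'Raw, unrefined data': 1,000 samples with 10 sensor-like features (synthetic).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The 'engine': discover the implicit patterns in the training data...
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# ...then use them to judge new data points against predicted outcomes.
print("Held-out accuracy:", model.score(X_test, y_test))
print("Prediction for a new point:", model.predict(X_test[:1]))
```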

ADAS - Pushing towards autonomous vehicles

One key application area where AI techniques are being applied is the advanced driver assistance system (ADAS) technology that will eventually enable autonomous cars to occupy our roads. Many ADAS implementations thus far have been built using classical vision algorithms. 

This works for simple, independent tasks, such as lane detection or collision warning, but as the scope of ADAS functionalities increases, simultaneous detection and interpretation of the environment gets more complex. This is where AI/machine learning’s scalable approach comes into its own. 

Deep learning methods rely on training data, from which visual or behavioural features are learned; crucially, they can generalise much better than classical algorithms, increasing robustness. Deep learning emulates the way human brains learn - recognising patterns and relationships, understanding language and coping with ambiguity.
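
As a toy illustration only - assuming a PyTorch environment, with random tensors standing in for labelled camera frames - the sketch below runs one training step of a small convolutional network of the kind that could learn visual features such as ‘pedestrian’ versus ‘no pedestrian’. Real ADAS perception networks are far larger and trained on vast datasets:

```python
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """Minimal CNN classifier for 3x64x64 camera crops (illustrative only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyPerceptionNet()
criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch standing in for labelled driving data.
images, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
print("training loss on dummy batch:", loss.item())
```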

AI and security

Another field benefiting from AI techniques is security, with companies using machine learning as a force multiplier for resource-challenged teams to better detect security breaches or risks, and respond faster and more effectively. 

In a cyber security context, this can mean scanning network traffic to identify unusual, potentially bad or unauthorised access or behaviour. AI is particularly good at recognising patterns and anomalies within them, making it an excellent tool for detecting threats. 
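
A hedged sketch of that idea, using an Isolation Forest over invented per-flow traffic features (bytes, packets and distinct destination ports are illustrative choices, not a prescribed feature set):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic 'normal' traffic: modest byte counts, packet counts and port fan-out.
normal = np.column_stack([
    rng.normal(500, 100, 1000),    # bytes per flow
    rng.normal(20, 5, 1000),       # packets per flow
    rng.normal(3, 1, 1000),        # distinct destination ports
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious flow: a huge transfer touching many ports (e.g. exfiltration or a scan).
suspicious = np.array([[50_000, 400, 60]])
print(detector.predict(suspicious))   # -1 flags an anomaly, +1 means 'looks normal'
```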

Physical security and surveillance technology is also adopting deep learning-based video analytics. Smart cameras can monitor premises - detecting unusual access attempts or performing facial recognition, matching against both ‘safe’ and ‘watch’ lists. 
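
The matching step itself can be as simple as comparing embedding vectors. The sketch below assumes some face-recognition model (not shown) has already produced 128-dimensional embeddings; the enrolled vectors and the similarity threshold are placeholders:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
safe_list = {"resident_1": rng.normal(size=128)}    # enrolled 'safe' embeddings
watch_list = {"flagged_1": rng.normal(size=128)}    # enrolled 'watch' embeddings

def classify(embedding, threshold=0.6):
    for name, ref in {**safe_list, **watch_list}.items():
        if cosine(embedding, ref) > threshold:
            return ("safe" if name in safe_list else "watch", name)
    return ("unknown", None)

# A new camera capture of resident_1, simulated as a slightly noisy embedding.
print(classify(safe_list["resident_1"] + 0.05 * rng.normal(size=128)))
```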

Smart systems can utilise a range of sensing technologies to detect and intelligently act on data in real-time. For instance, they could detect fires from heat maps and not only identify who is at home, but also flag unusual situations automatically, helping deliver safe and personalised security solutions to homeowners and businesses alike.

Obstacles for AI 

One key challenge facing AI is the vast amount of data required to power deep learning systems. The emergence of big data and IoT technology is helping drive the acquisition of data, with connected sensors and devices everywhere. However, there are fields where data is not so easily available - for instance, healthcare, in which there may be regulatory and ethical barriers to data access.

Another challenge is the processing power needed. As AI usage has increased, specialised hardware has been created or adapted to help deliver the performance required. DSPs, GPUs and FPGAs have all been used to accelerate hardware performance in neural network and deep learning applications, and some companies have developed dedicated AI hardware.
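
As a minimal illustration of offloading inference to whatever accelerator is available - here a GPU via PyTorch, with a stand-in network rather than a trained model:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A stand-in network; any trained model would be moved to the accelerator the same way.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)).to(device).eval()

frame = torch.randn(1, 3, 64, 64, device=device)
with torch.no_grad():
    scores = model(frame)
print(f"inference ran on {device}: {scores.cpu().numpy().round(3)}")
```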

Movidius, acquired by Intel in 2016, designs ultra-low-power processor chips - referred to as vision processing units (VPUs) - that are optimised for deep-learning and machine vision algorithms. Their balance of power efficiency and performance allows device makers to deploy deep neural network and computer vision capabilities on devices such as smartphones, drones, intelligent cameras and wearables.

In addition to its system-on-chip (SoC) VPUs, Movidius offers its neural compute stick, a plug-and-play VPU on a USB drive, helping make AI more accessible, particularly when prototyping or training neural networks. 
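
One common prototyping step before targeting an edge accelerator is exporting a trained network to an interchange format such as ONNX, which vendor toolchains can then convert for the device. The sketch below shows only that generic export step (with a stand-in model), not the Movidius-specific tools themselves:

```python
import torch
import torch.nn as nn

# Stand-in for a trained network (illustrative only).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)).eval()

dummy_input = torch.randn(1, 3, 64, 64)          # example input defining the graph shape
torch.onnx.export(model, dummy_input, "edge_model.onnx",
                  input_names=["frame"], output_names=["scores"])
print("exported edge_model.onnx")
```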

As AI becomes more pervasive, the question of how and where to deploy it - either embedded locally on-device or cloud-based - becomes more pressing. Until recently, much of the heavy lifting has been done in the cloud, where it is easier for tech giants like Apple, Google and Amazon to scale the processing power and network architecture needed. 

But, as AI becomes embedded in so many everyday applications, latency and reliability become critical. If Siri or Alexa drop their connection to the cloud, we can cope with having to wait for that restaurant recommendation or directions. 

Conversely, when it comes to ADAS systems that enable autonomous cars to avoid collisions with pedestrians or other vehicles, operations need to happen deterministically in real-time, with the processing involved being done at the edge.
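
That trade-off can be sketched as a simple fallback pattern: try a (hypothetical) cloud endpoint with a tight timeout, and drop back to an always-available local model when the network cannot answer in time. The URL and local_model() below are illustrative placeholders:

```python
import requests

CLOUD_ENDPOINT = "https://example.com/api/infer"   # hypothetical endpoint

def local_model(frame):
    """Placeholder for an on-device (edge) model - always available, low latency."""
    return {"label": "clear", "source": "edge"}

def classify(frame, timeout_s=0.05):
    try:
        resp = requests.post(CLOUD_ENDPOINT, json={"frame": frame}, timeout=timeout_s)
        resp.raise_for_status()
        return {**resp.json(), "source": "cloud"}
    except requests.RequestException:
        # Safety-critical paths cannot wait: fall back to local, deterministic inference.
        return local_model(frame)

print(classify([0.1, 0.2, 0.3]))
```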
