Artificial Intelligence

Should we fear AI?

6th December 2021
Kiera Sowery

There have been incredible advancements in AI over the last decade. For years, AI has created both fear and excitement over the concept of machines becoming more and more capable. However, how far is too far and where should we draw the line?

Stuart Russell, Founder of the Centre for Human-Compatible AI at the University of California, has recently discussed the existential threat that AI machines may pose.   

Russell defines AI as:

"Machines that perceive and act and hopefully choose actions that will achieve their objectives."

Are we at a point of danger?

AI has the potential to be one of the most fundamentally transformative technologies, but that also gives it transformative power, which could be something to fear. If AI can transform the world, it can do so for better or for worse.

While Russell believes AI has made a lot of progress, though not as much as the field claims, he still believes we are at a point of "extreme danger." He believes we should pay attention for two core reasons:

Firstly, although current algorithms aren't close to having the adaptable intellect of a human, if billions of them are running, they can still have a huge impact on the world.

Secondly, it's plausible that we will have general-purpose AI within our lifetimes, or our children's lifetimes.

Russell believes that the outcome could be detrimental if general-purpose AI is created in the current climate where leaders have the mentality that whoever rules AI will rule the world.

Military use of AI

Russell believes it's urgent to address how the military is already experimenting with AI and robots on the battlefield. It's urgent because the weapons discussed over the last six years are now being manufactured and sold.

This competition for AI dominance can make people uncomfortable about the ways the technology is applied to warfare, surveillance, and law enforcement. However, it can be argued that this is not something we have to fear, mainly because laws and governance are in place to monitor government behaviour.

A main problem is that these weapons can fall into the wrong hands, including those of terrorists, and could therefore be used for major destruction.

Russell believes that some fear is necessary to make us act now before it’s too late.

Concerns of mass unemployment

The fear of AI being a "job killer" is a real concern; however, the reality already exists, with 80% of manufacturing and service-oriented jobs able to be completed by machines.

Countering this argument, many of these systems are not advanced enough to reliably replace human workers. AI systems provide many capabilities, but they cannot operate fully autonomously. Many AI implementations work alongside a human, supporting them in the role rather than replacing them entirely.

Many industries are already being disrupted by technological advancement, much of which has nothing to do with AI. Rather, the disruption is often due to automation and streamlined processes that make work more efficient.

The transition into this new technological world involves the upgrading of industrial equipment. By this logic, AI can be seen as a "job category killer": it replaces sub-divisions of jobs rather than the job itself.

Superintelligence

Another main fear is that AI will eventually reach a point where it no longer cares about the existence of humanity. AI could end up teaching itself to improve and develop its capabilities independently of human command. The fear is that instead of becoming a force that improves humanity, technology will make humanity its servant.

This, however, assumes both that systems will achieve Artificial General Intelligence (AGI) and that, as a society, we are unable to put safeguards in place to ensure computers do not 'take over'. We are also much further away from achieving AGI than many people believe.

The fear

Of course, AI has many benefits and has been vital to technological advancement, particularly over the last decade. Russell's fear is that humanity doesn't understand the potential capabilities of AI or AGI technology. How soon will we reach this point of superintelligence?

© Copyright 2024 Electronic Specifier