Artificial Intelligence

Dr Geoffrey Hinton warns of AI dangers as he quits Google

2nd May 2023
Kiera Sowery

Known as the “godfather of AI”, Dr Geoffrey Hinton has quit his role at Google, warning of the dangers posed by developments in the field. Dr Hinton announced his resignation from the tech giant in a statement to the New York Times.

Dr Hinton warns that some of the dangers of AI chatbots are “quite scary”. He told the BBC: “Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be.”

Having left the company, Dr Hinton is now free to speak openly about the risks of AI.

Dr Hinton also acknowledged that his age played a role in the decision to leave Google, explaining that he is 75 and it is time to retire.

For half a century, Dr Hinton nurtured the technology at the heart of AI chatbots such as ChatGPT; his pioneering research on neural networks and deep learning paved the way. In 2012, Dr Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the AI systems many companies now see as key to their future.

He has now joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative AI.

Dr Hinton’s journey from AI godfather to critic marks a historic moment for the technology industry, perhaps its most significant inflection point in decades. Industry leaders believe the new AI systems could be as important as the introduction of the web browser in the early 1990s, and could lead to breakthroughs in everything from drug research to education.

But eating away at many industry insiders, including Dr Hinton, is the fear they are releasing something dangerous into the world. Generative AI has already proved to be a tool for misinformation. Soon it could be a risk to jobs, and somewhere down the line, a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr Hinton said in his interview with the New York Times.

Mark Rodseth, VP of Technology, EMEA at CI&T, says: “The development of AI is different to other forms of tech because it is a black box and can also develop without human intervention. The combination of not knowing what’s really going on in these models, and their ability to teach themselves, means we can't fully predict how this technology will evolve or the consequences of each evolutionary step.

“That being said, any decisions which have a major environmental, societal, economic and political impact will continue, for the foreseeable future, to be made by humans due to the political systems in place. The greater risk here is who is making those decisions, and what machine learning systems are providing them with data and insights.

“Machine learning models can be prone to misinformation and bias depending on the data they are trained on and how the model is trained. Regulation and transparency are going to be key to ensuring this technology is benign and operated in the right way.”

Risks to society and humanity

After OpenAI released its latest version of ChatGPT in March 2023, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems, arguing that AI technologies pose “profound risks to society and humanity”.

Shortly after, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence released their own letter warning of the risks of AI. The group included Eric Horvitz, Chief Scientific Officer at Microsoft.

Dr Hinton did not sign either of those letters and explained he did not want to publicly criticise Google or other companies until he had resigned from his role.

As companies improve their AI systems, Dr Hinton believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, Dr Hinton said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google’s core business, Google is racing to deploy similar technology. The tech giants are locked in a competition that might be impossible to stop, Dr Hinton said.

Dr Hinton’s concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is worried that future versions of the technology could pose a threat to humanity because they often learn unexpected behaviour from the vast amounts of data they analyse. This becomes an issue, he said, as individuals and companies allow AI systems to generate their own computer code and run it themselves. Dr Hinton fears a day when truly autonomous weapons, so-called killer robots, become reality.

Google’s Chief Scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Alec Boere, Associate Partner for AI and Automation, Europe at Infosys Consulting, believes the answer lies in building AI responsibly around five core pillars of trust.

Boere explains: “To mitigate fears surrounding AI and prevent dangerous outcomes, responsibility should be at the forefront of the enterprise when implementing AI models. To do this, particular focus should be placed on the five core pillars of trust.

“The first pillar is fairness: having an AI model that runs without bias, to treat consumers and employees fairly. The second is protection: to not only safeguard an individual’s personal information but to resist potential cyberattacks and comply with all legal and regulatory requirements. Then comes business accountability and explainability for the decisions the model makes, followed by inclusivity and societal benefit.

“These core focus areas in the delivery of AI-based solutions stem from the human and cultural approach led from within the enterprise. For example, businesses must have diverse teams to avoid transferring human bias into the technical design of AI, as the AI is driven by human input. Businesses should also avoid using outdated data, because these algorithms will only amplify the patterns of the past rather than design new ones for the future. This was highlighted by OpenAI’s DALL·E 2 model which, when asked to paint pictures of startup CEOs, depicted them all as male.

“Whilst OpenAI has opened the ChatGPT door, greater controls need to be put in place, allowing for the management of data sources and more guardrails to ensure trust. To help maintain this trust, every organisation should have policies to ensure it is using AI responsibly, and should work with organisations such as the CBI and techUK to help shape government policy too.”

Dr Hinton believes the race between Google, Microsoft and other tech giants will escalate into a global race that will not stop without some sort of global regulation.
