ChatGPT CEO calls for US lawmakers to regulate AI
In a surprise turn of events, the creator of the widely successful and advanced chatbot ChatGPT has called on US lawmakers to regulate AI.
This came as OpenAI CEO Sam Altman testified before a US Senate committee on Tuesday (16th May) about the possibilities – and pitfalls – of the new technology.
Since its debut in November 2022, OpenAI’s generative chatbot has gone on to dominate much of the business and cultural landscape. TV shows like South Park used it to write an episode; the now-defunct Buzzfeed used it to write its quizzes; universities are having to incorporate it into their curricula or employ methods to detect it in essays; and tech titans like Meta have pivoted their focus from the Metaverse to AI.
This buzz has prompted many companies to try to bring their own chatbots to market, and it is this rush that Altman has voiced concern over.
"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that," Mr Altman said. "We want to work with the government to prevent that from happening."
Altman suggested potential measures could include the forming of a new agency to license companies focusing on AI as he compared the explosion of its everyday use to be as pivotal as "the printing press", all the while stressing its potential dangers.
Alexey Khitrov, Founder and President, ID R&D, agrees with Altman’s assessment, but expressed a more positive outlook: "OpenAI is correct in its assessment that there are risks with every new type of technology. Yet, generative AI is also a force for good. Businesses and the wider society should be reassured that AI is also being leveraged to thwart fraud attempts.”
Giving examples of "a combination of licensing and testing requirements", Altman offered suggestions for how this new US agency could control the industry and be used to regulate the "development and release of AI models above a threshold of capabilities". He even went as far as to say his own company, OpenAI, should be independently audited. Some Senators instead suggested new laws were needed to make it easier for people to sue OpenAI.
His comments come fresh on the heels of Dr Geoffrey Hinton's resignation from Google, in which he too warned about the dangers of developments in the field of AI.
Such dangers range from the existential, like rogue AI wiping out humanity, to the social, like AI taking away people’s jobs. With businesses seeing the potential of this generative AI, and economic headwinds taking a toll on their cash flow, many could consider that the conditions are now in place to downsize staff. BT recently announced moves to cut 55,000 jobs, with aims to replace up to a fifth of them with AI.
"As organisations move forward in 2023, it would be unwise to dismiss AI, as used correctly it will become the most valuable business tool,” said Jeremy Rafuse, VP & Head of Digital Workplace at GoTo. “It is only alongside human expertise that AI and advanced machine learning can run effectively. Human support staff can provide guidance and expertise to AI systems to help them better understand and respond to requests. By training AI systems and incorporating human feedback, AI can improve its accuracy and responsiveness over time.”
It's not all milk and honey for generative AI, however. When Google first launched ‘Bard’, a competitor to ChatGPT, the system answered questions incorrectly and resulted in $100 billion being wiped off the company’s share value.
Yet, as Altman told the committee: "These models are getting better." So, a future in which all that was discussed comes to pass may arrive so soon that some legislators have wondered whether any of the proposed actions – bar an outright, albeit short, ban like Italy's – could even keep up.