AI Experts Call for a ‘Pause’ on Further Development, Say Otherwise Governments Should ‘Step In’

Published: March 30, 2023
This picture taken on Jan. 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT. ChatGPT is a conversational artificial intelligence software application developed by OpenAI. (Image: LIONEL BONAVENTURE/AFP via Getty Images)

On March 29, several leaders in the artificial intelligence (AI) community published an open letter urging AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” and urging governments to “step in” if they don’t. As of March 30, the letter had attracted over 1,400 signatories, including tech figures Elon Musk and Steve Wozniak and the Center for Humane Technology’s Tristan Harris.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter reads.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter asks, adding, “Should we automate away all the jobs, including fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

The letter warns of the risks the technology poses to humanity as tech behemoths Google and Microsoft each race to build and deploy AI platforms that can learn independently.  

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the authors write, adding, “This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

The authors call for collaboration between AI developers to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

To date, the development of AI systems has been subject to little or no regulation, despite the technology’s potential to disrupt employment, the economy and society on a large scale.

‘I’m a little bit afraid’

Recently, Sam Altman, the CEO of OpenAI, the company behind GPT-4, expressed fear that the technology could spur “disinformation problems or economic shocks.”

“I think it’s weird when people think it’s like a big dunk that I say, I’m a little bit afraid,” Altman recently told Lex Fridman on Fridman’s artificial intelligence podcast, adding, “And I think I’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

“The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for,” he said, adding, “And that doesn’t require superintelligence,” implying that the technology, which has become ubiquitous in recent years, doesn’t need to develop much further before it could become dangerous.

Speaking hypothetically, Altman raised the possibility that large language models (LLMs) could manipulate the content social media users see in their feeds.

“How would we know if on Twitter, we were mostly having like LLMs direct whatever’s flowing through that Hive Mind?” Altman asked.

OpenAI released GPT-4 on March 14 this year, and technology companies are clamoring to integrate it into their operations.

Khan Academy, a platform that provides free online classes in a number of disciplines for grade-schoolers up to college students, is already tapping into the technology to build AI tools; however, its developers warn that the technology still has its kinks.

According to a document published by OpenAI, the AI models can “amplify biases and perpetuate stereotypes.”

Due to these problems, the developers are urging users not to deploy the technology where the stakes are more serious, such as “high risk government decisions (e.g. law enforcement, criminal justice, migration and asylum), or for offering legal or health advice,” the document states.


AI platforms continue to learn

As AI experts sound the alarm, the technology continues to learn from human input and, according to Altman, is becoming more judicious about which queries it answers.

“In the spirit of building in public and bringing society along gradually, we put something out, it’s got flaws, we’ll make better versions,” Altman told Fridman, adding, “But yes, the system is trying to learn questions that it shouldn’t answer.”

An earlier version of GPT-4 did not have as robust a filter as it does today, OpenAI’s document says, adding that the AI was more inclined to answer questions about where to buy unlicensed firearms or about self-harm. Newer iterations now decline to answer such queries.

“I think we, as OpenAI, have responsibility for the tools we put out into the world,” Altman told Fridman, adding, “There will be tremendous benefits, but, you know, tools do wonderful good and real bad. And we will minimize the bad and maximize the good.”