Shani Murray, UC Irvine
In November 2022, ChatGPT quietly slipped into global conversations with its human-like responses to text queries. By January 2023, more than 100 million users were interacting with the novel chatbot powered by artificial intelligence (AI). In February, an AI-powered Microsoft chatbot told a tech journalist to leave his wife: “You’re married, but you love me.”
Now, four months after OpenAI first introduced ChatGPT to the public, AI scholars and tech experts have published an open letter calling for a pause in AI experiments: “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” The letter’s signatories, including Elon Musk and Steve Wozniak, called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
“I signed the letter too on the first day,” says Pierre Baldi, distinguished professor of computer science in UC Irvine’s Donald Bren School of Information and Computer Sciences (ICS). “I have some doubts as to whether a real pause can be implemented worldwide, but even if it cannot, the letter is useful for raising awareness about the issues.”
Recent advancements in AI have opened Pandora’s box. As the open letter asks, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
Understanding the social ramifications
“At its core, AI challenges what it means to be human,” says Baldi, who first touched on this topic more than 20 years ago in his book “The Shattered Self.” Researchers use the term artificial general intelligence (AGI) to capture the human ability to behave intelligently across a large number of tasks and modalities. “Current systems like GPT-4 seem to be closing in rapidly on this ability,” explains Baldi. “The current systems, let alone AGI systems, are very powerful. Among other things, they can harness the collective knowledge of humanity in a way that no single human being can do.”
Such collective knowledge can be leveraged in productive ways, but there are also potential — and unforeseen — dangers. “This is the main reason for seeking a moratorium,” stresses Baldi.
Chancellor’s Professor of computer science Padhraic Smyth, who also signed the open letter, agrees.
“My concern is that we are far from a full understanding of the limitations and dangers of these models,” says Smyth. “As AI researchers, we understand how to write down the mathematics and algorithms so that these models can be learned from data, but once they are learned, we don’t have the tools to understand and characterize what they are capable of. Now is the time to take a closer look at the issue of AI safety and put the common good ahead of commercial interests.”
Associate professor of computer science Sameer Singh didn’t sign the letter, but not because he disagrees with his colleagues.
“We’re building something that over time can be dangerous if misused, and we don’t understand enough about these models to understand the various ways in which they can be misused,” he says. “The adoption of these models is growing faster than we can understand and analyze them, and can lead to catastrophic consequences because of how ingrained these models might become. My whole research agenda for the last five or six years has focused on how to interpret and analyze large language models, for this very reason.”
So why didn’t he sign the letter? “Primarily because it doesn’t focus enough on concrete, real dangers and actionable plans,” he explains. “The real danger is that these models are misleadingly intelligent, just enough to get deployed to have an impact at scale and in critical applications, but not actually good enough to be as reliable as they seem — which is different from the dangers the letter focuses on.”
Prioritizing the common good
On March 30, 2023, the Center for AI and Digital Policy (CAIDP) filed a formal complaint with the Federal Trade Commission, urging an investigation of OpenAI and a halt to further commercial deployment of its large language models.
“The recent advances in generative AI, such as large language models for text and diffusion models for images, are extremely impressive from a technical perspective,” says Smyth. “However, from a social perspective, I’m tempted to quote poet W.B. Yeats here: ‘A terrible beauty is born.’”
One problem is that such models can pick up social and cultural biases from their training data, and tools and processes are not yet in place to quantify and correct such biases. Another problem, as Singh noted, is the false sense of reliability. Smyth echoes this concern: “These models also ‘don’t know what they don’t know,’ so they will confidently generate text that sounds very plausible, even when the facts being stated are completely wrong.”
Before releasing more of these powerful AI models for public use, we need a better understanding of their capabilities and limits, and certification and auditing processes should be put in place to safeguard society. “After all, in the physical world, we don’t let auto manufacturers put new models of cars on the road without stringent safety testing,” notes Smyth, “or let drug companies release new drugs without extensive clinical trials.”
As Chancellor’s Professor of informatics Paul Dourish points out, “One of the challenges around AI is that the various parties involved — which must surely include venture capitalists, regulators, and customers as well as technologists — have very different ideas about what [AI] is or might be.” New policies could not only create a shared understanding among the various stakeholders, but also incentivize the common good.
“The open letter speaks of the race for AI. AI research itself is only half of that proposition; the other is the set of incentives and rewards that create a race,” stresses Dourish, who serves as director of UCI’s Steckler Center for Responsible, Ethical, and Accessible Technology (CREATE). “Part of the challenge we need to address is that market competition actively incentivizes exactly the problems that the open letter is pointing out, and we may need to think differently about ownership and regulation in that space.”
Singh adds that we should also focus on decentralization in this race for AI. “Currently, the technology is primarily in the hands of a few companies, and this monopolization is a big concern,” he says. “The power getting centralized in the hands of the few makes it much more dangerous.”
Raising awareness
On April 12, a small delegation from UC Irvine — including Baldi — will travel to San Francisco for a half-day workshop on AI Governance in the World, organized by the EU Commissioner for Innovation. The discussion will center on finding ways to harness the potential of AI while containing its most harmful risks.
Such discussions are also central to a new course Baldi is teaching this quarter at UC Irvine called AI Frontiers. Students will be tasked with imagining as many positive and nefarious uses of current large language models as possible. “Many of the current concerns revolve around issues of bias and misinformation campaigns, threats to education and employment, and cybersecurity threats and the combination of AI with weapon systems,” says Baldi. He acknowledges that these are legitimate concerns while also stressing the infinite number of potential scenarios.
“What happens when instead of having one large language model, we have a population of them with the ability to cooperate or compete?” he asks. “What happens when AGI can make scientific discoveries or invest in the stock market better than any human? What happens when AGI can design political, societal, and economic systems for us? And what about the scenarios that our brains are not yet capable of envisioning?”
Whether we are able to hit pause or not, such conversations are critical to understanding the power of AI and realizing its potential in ways that ultimately benefit global society.