For example, one prevailing counterpoint is that LLMs don’t have a body. Yet the writers remind us of examples like physicist Stephen Hawking, who interacted nearly entirely through text and synthesized speech. His physical limitations did not diminish his intelligence; therefore, motor capabilities should not be a prerequisite for intelligence, the authors suggest.
“This is an emotionally charged topic because it challenges human exceptionalism and our standing as being uniquely intelligent,” says Belkin. “Copernicus displaced humans from the center of the universe, Darwin displaced humans from a privileged place in nature; now we are contending with the prospect that there are more kinds of minds than we had previously entertained.”
Acknowledging that machines are capable of intelligence matching that of humans can be a frightening prospect. Concerns about potential social upheaval are enough for some to fervently deny the possibility, a "heads in the sand" response, as Turing described it in his 1950 paper. Chen, Belkin, Bergen and Danks suggest embracing the emotions that arise with compassionate curiosity, not anxious evasion.
Risks and rewards
There’s no denying that we’re in the midst of an unprecedented technological revolution as artificial intelligence pervades our personal and professional lives. The authors position this period as both “remarkable and concerning,” with plentiful possibility and significant responsibility.
In the essay, the experts describe numerous economic demands placed on LLMs, which they claim can distort assessments of whether artificial general intelligence has arrived. Industry leaders often set standards based on profitability rather than intelligence itself—demanding perfect reliability, instant learning or revolutionary discoveries that exceed what we require of individual humans. Yet the UC San Diego faculty point out that speed, efficiency and profitability are simply potential outputs of general intelligence, not defining qualities.
A distinct objection centers on what critics call the "stochastic parrot" problem—the claim that LLMs merely recombine patterns from their training data without genuine understanding, and therefore must fail on truly new problems. "We have built highly capable systems, but we do not understand why we were successful," says Bergen. "LLMs learned about the world through processes unlike human learning, and we lack a detailed account of how their abilities emerged. This gap in understanding grows more important as the systems grow more capable."
AI systems are also becoming more autonomous. The authors clarify that this does not contribute to their intelligence, but it does make responsible design and shared governance an urgent priority.
“We’re developing AI systems that can dramatically impact the world without being mediated through a human and this raises a host of challenging ethical, societal and psychological questions,” explains Danks. “AI is a future that we are building right now. Ultimately, we’re innovating because we want something better, and the very idea of better should have ethics and safety baked in.”
An unconventional team
The four faculty who assembled to explore artificial general intelligence represent multiple disciplines across UC San Diego, a public research university that prioritizes cross-disciplinary collaboration.
Chen is part of the School of Arts and Humanities, a philosopher of science who explores big questions about the smallest parts of our universe, as well as questions about the nature of the mind and cognition. These studies complement the research of Bergen, a linguist and computer scientist in the School of Social Sciences who is investigating the science of LLMs.
This research intersects with work being done by Belkin, a data scientist focused on the theory and applications of machine learning at the School of Computing, Information and Data Sciences' Halıcıoğlu Data Science Institute, who is also affiliated with the Jacobs School of Engineering's Computer Science and Engineering Department. With a similar focus on data, Danks examines the ethical, psychological and policy issues around AI using methods from machine learning, philosophy and cognitive science.
"I've learned so much from this group," says Chen. "UC San Diego's institutional structure made this collaboration possible—we simply wouldn't have crossed paths elsewhere. It's a powerful example of what cross-disciplinary work can achieve when applied to fundamental questions facing humanity."