The recent AI boom is based on the misconception that 'language ability is intelligence'



Major IT companies are investing heavily in generative AI, and as of this writing we are in the midst of an AI boom. However, Benjamin Riley, founder of Cognitive Resonance, a venture dedicated to deepening our understanding of human cognition and generative AI, argues that the boom is built on a fundamental misunderstanding of language ability and intelligence.

The AI boom is based on a fundamental mistake | The Verge
https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems



Executives at major IT companies have expressed confidence in the creation of artificial general intelligence (AGI), with Meta CEO Mark Zuckerberg saying, 'The development of superintelligence is now imminent,' and OpenAI CEO Sam Altman claiming, 'We are now confident about how to build AGI as traditionally understood.' However, Riley argues that, given the scientific understanding of human intelligence and the AI systems these companies have actually produced, such statements are far from credible.

OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and Meta's AI products are all 'large language models': systems that ingest massive amounts of language data, find statistical correlations between words (tokens), and predict what output should follow a given prompt. In other words, the AI these companies develop is, at its core, a model of language.
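The statistical core described above, counting which tokens tend to follow which and predicting the most likely continuation, can be illustrated with a deliberately tiny sketch. Real large language models use neural networks with billions of parameters rather than raw counts, but the next-token prediction objective is the same in spirit; the corpus and function names here are invented for illustration only.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, if any was observed."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" twice and "mat" follows it once, so "cat" wins.
print(predict_next("the"))
```

The model has no notion of what a cat or a mat is; it only reproduces correlations present in its training data, which is precisely the limitation the article goes on to discuss.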

If language equaled thinking, then AGI would follow in short order: AI developers would only need to gather ever more data about the world and combine it with ever more powerful computing to sharpen the statistical correlations. The big problem is that, according to current neuroscience, human thinking has very little to do with language.

It is true that humans use language to communicate the products of their intelligence, such as inference, abstraction, and generalization, and that they sometimes think in language. But that does not mean that language equals thinking, so no matter how sophisticated language models become, there is no guarantee they will lead to intelligence that surpasses that of humans.

'This theory has serious scientific flaws,' says Riley. 'Large language models are merely tools that mimic the communicative function of language. No matter how many data centers we build, that does not translate into the distinct cognitive processes of thought and reasoning. Understanding this distinction is key to separating scientific fact from the science fiction of AI-obsessed CEOs.'



In 2024, researchers from the Massachusetts Institute of Technology, the University of California, Berkeley, and elsewhere published a paper in the journal Nature titled 'Language is primarily a tool for communication rather than thought.' The paper summarizes decades of scientific research on language and thought, and is said to be useful in clearing up the misunderstandings surrounding the AI boom.

The paper begins by addressing the fallacy that 'language creates the capacity for thought and reasoning.' If language were essential for thinking, then depriving someone of language would deprive them of the ability to think. However, functional magnetic resonance imaging (fMRI) studies of which brain regions activate during thought tasks, such as solving mathematical problems or inferring what others are thinking, show that networks distinct from those involved in language are engaged.

It is also clear that people who lose their language skills due to brain damage or other disorders do not lose their overall ability to think. Even people with severe language disorders can solve math problems, follow non-verbal instructions, understand the motives of others, and perform logical and causal reasoning. In fact, even babies, before they acquire the ability to speak, discover and understand many things and learn about the world in their daily lives. In other words, scientifically speaking, language is only one aspect of human thinking ability, and much of intelligence is related to non-verbal abilities.

The research team's second argument is that language is a tool for people to share ideas with each other—an efficient communication code. Language is easy to generate, understand, and learn, concise and efficient to use, and robust to noise. These characteristics have enabled humans to share knowledge through language and build extraordinary cultures across generations.

But the fact that language enhances human cognition does not mean that language generates or defines all thought: even when deprived of the ability to use language, people can still think, reason, form beliefs, fall in love, and explore the world.



From the above, it is clear that thinking does not equal language in humans. Yet if we remove language from the large language models underlying today's AI, literally nothing remains. While it is certainly possible that AI could reach superintelligence by a completely different route than humans took, Riley points out that there is no clear reason to believe AGI can be achieved through text-based training.

In fact, there is growing recognition in the AI research community that large language models alone are insufficient as models of human intelligence. For example, Yann LeCun, a Turing Award recipient for his AI research, left Meta to found an AI startup focused on building world models: 'systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.' Yoshua Bengio, a fellow Turing Award recipient, and his colleagues have tentatively defined AGI as having the 'cognitive versatility and proficiency of a well-educated adult.'

Bengio and his colleagues break intelligence down into components such as 'Knowledge,' 'Math,' 'Working Memory,' 'Visual,' and 'Auditory.' Riley commends this for breaking away from the traditional large-language-model framework, but argues that it is difficult to regard the sum of these components as AGI.



Riley believes that even if an AI achieved every component of intelligence that Bengio and others enumerate, it would not be the kind of AI that produces groundbreaking scientific discoveries. Even if an AI can mimic human thinking, there is no guarantee it can make the same 'cognitive leaps' that humans do.

A 'paradigm,' a concept from the history and philosophy of science, refers to a landmark achievement that attracts enthusiastic support and supplies a research community with its defining problems. The history of science has seen repeated paradigm shifts, in which a reigning paradigm undergoes a major transformation. These shifts have not come from simply repeating experiments; they have occurred when new questions or ideas arise that do not fit existing scientific descriptions, in other words, when a cognitive leap takes place.

AI systems spanning multiple cognitive domains can certainly carry out instructions and make predictions like highly intelligent humans. But because those predictions are produced by aggregating and modeling existing data, such systems are unlikely to become dissatisfied with the input data itself, and as a result are unlikely to achieve human-like cognitive leaps.

'AI systems may remix and reuse our knowledge in interesting ways, but that is all they can do. They are forever trapped within the vocabulary we have encoded in the data used to train them,' Riley said. 'Actual humans, beings who think, reason, and use language to communicate with one another, will remain at the forefront of transforming our understanding of the world.'

Posted in AI by log1h_ik