News

The AI Arms Race

The path to superintelligence: Is computing power enough, or does it require a stroke of genius?

After our survey illuminated the grim endpoints of an AI race and the utopian promises of tech pioneers, a fundamental technical question arises: How exactly is an intelligence superior to humans supposed to emerge? Our analysis shows that there is no consensus among leading developers on this point. The debate divides the AI world into two camps and revolves around one central question: Is the massive scaling of existing architectures sufficient, or are today's AI models a technological dead end that calls for a completely new breakthrough?

The Scaling Hypothesis: Brute Force Toward AGI?

One camp — including figures like Sam Altman from OpenAI and Demis Hassabis from Google DeepMind — is convinced they already know the path forward. Their hypothesis: the path to AGI lies primarily in the extreme scaling of existing transformer-based AI models. The approach is to close the immense gap to the human brain through sheer, almost unimaginable amounts of compute and data. Estimates from the analyzed papers illustrate the scale: the human brain has roughly 1,000 times more neuron-equivalents and 100 times more synapses than today’s largest models.
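To give a feel for these ratios, here is a minimal back-of-envelope sketch in Python. The brain figures are widely cited neuroscience estimates; the model figures are assumptions chosen only so the ratios land near the roughly 1,000x and 100x factors quoted above, not the specifications of any real system.

```python
# Back-of-envelope comparison of brain scale vs. a large transformer model.
# All numbers are rough, order-of-magnitude assumptions for illustration only.

BRAIN_NEURONS = 86e9     # ~86 billion neurons (common neuroscience estimate)
BRAIN_SYNAPSES = 100e12  # ~100 trillion synapses (common estimate)

MODEL_UNITS = 86e6       # assumed "neuron-equivalents" (hidden units) of a large model
MODEL_PARAMS = 1e12      # assumed parameter count, treated as synapse-equivalents

print(f"Neuron gap:  {BRAIN_NEURONS / MODEL_UNITS:,.0f}x")   # ~1,000x
print(f"Synapse gap: {BRAIN_SYNAPSES / MODEL_PARAMS:,.0f}x")  # ~100x
```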

Even these staggering numbers seem modest in light of a recent, almost mysterious announcement from Sam Altman. In a new blog post, he outlines his vision of “Abundant Intelligence” and describes what he calls “the coolest and most important infrastructure project ever”: a factory capable of producing one gigawatt of new AI infrastructure every week. For comparison: earlier estimates projected a total need of around 70 gigawatts; Altman wants to keep adding capacity at this pace, week after week. He justifies the massive effort with potential benefits: “Maybe AI with 10 gigawatts of compute can figure out how to cure cancer.” Implementation, he admits, will be extremely difficult and require innovation across every layer, from chips to energy to robotics. This vision turns the “brute force” approach from theoretical speculation into a declared goal of industry leaders.
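A quick arithmetic sketch puts the two figures from this paragraph side by side; only the one-gigawatt-per-week target and the roughly 70-gigawatt estimate come from the text, everything else is illustration.

```python
# How quickly would a factory producing 1 GW of new AI infrastructure per week
# cover the earlier ~70 GW total-demand estimate? Simple arithmetic sketch.

GW_PER_WEEK = 1        # Altman's announced production target
TOTAL_DEMAND_GW = 70   # earlier total-demand estimate cited in the text

weeks = TOTAL_DEMAND_GW / GW_PER_WEEK
print(f"{weeks:.0f} weeks (~{weeks / 52:.1f} years) to build the entire "
      f"previously estimated capacity at the planned pace")
```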

The Call for New Foundations: Why Today’s AI Doesn’t “Understand” the World

On the other side of the debate are influential critics like Yann LeCun of Meta, who vehemently disagree. His verdict, cited in a recent survey, is blunt: “LLMs suck.” He believes today’s large language models are fundamentally limited. Their problem is that they only operate in the discrete, narrow space of language. They’re trained to predict the next word in a sentence based on probability. But true general intelligence, says LeCun, must grasp the infinitely complex, dynamic, and often unpredictable physical world.
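To make “predict the next word based on probability” concrete, here is a minimal, self-contained Python sketch. It uses a toy bigram count table instead of a neural network, and the tiny corpus is invented purely for illustration; a real LLM learns these probabilities with a transformer over vastly more text, but the objective of predicting the next token from context is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny corpus,
# then sample the next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    counts = bigrams[prev]              # e.g. after "the": cat x2, mat x1, fish x1
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat" is twice as likely as "mat" or "fish"
```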

LeCun supports his critique with a powerful comparison: within its first years of life, a four-year-old child has already processed more visual data about how the world works than all the internet text used to train LLMs. This fundamental gap between digital knowledge and lived experience is why today's AIs fail at tasks that are second nature to living beings. A prime example is autonomous driving: despite the massive amounts of data Tesla vehicles collect every day, new, unforeseen traffic scenarios keep emerging that push the systems to their limits.
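The sketch below puts rough, clearly assumed numbers behind that comparison. Every figure in it (waking hours, optic-nerve bandwidth, training tokens, bytes per token) is an order-of-magnitude assumption chosen for illustration, not a number taken from the article.

```python
# Back-of-envelope: visual input of a four-year-old vs. LLM training text.
# All figures are rough order-of-magnitude assumptions for illustration only.

SECONDS_AWAKE = 16_000 * 3600   # assume ~16,000 waking hours by age four
OPTIC_BYTES_PER_S = 2e6         # assume a few megabytes per second via the optic nerve

TRAINING_TOKENS = 1e13          # assume ~10 trillion training tokens
BYTES_PER_TOKEN = 2             # assume ~2 bytes of text per token

visual_bytes = SECONDS_AWAKE * OPTIC_BYTES_PER_S
text_bytes = TRAINING_TOKENS * BYTES_PER_TOKEN

print(f"visual: ~{visual_bytes:.0e} bytes, text: ~{text_bytes:.0e} bytes, "
      f"ratio ~{visual_bytes / text_bytes:.0f}x")
```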

To close this gap, LeCun and other critics argue, we need entirely new capabilities. These include autonomous, goal-directed behavior, often referred to as “agentic AI”: an AI must be able to take initiative, plan strategies, and adapt to its environment. As intelligent as ChatGPT may seem, it has never independently posed a new scientific question or invented anything. LeCun provocatively argues that even cats outperform the best AI models, since they can plan complex actions in space to reach hidden food. This lack of physical competence is also captured by the “Wozniak Test”: a robot only passes when it can walk into any home and make a cup of coffee without special programming. We are far from that, as a 2025 robot race in Beijing mentioned in the survey showed: the machines fumbled awkwardly while the human runner finished three times faster.

The Puzzle of Thinking: AlphaGo’s Unpredictable Leap

Beyond simply experiencing the world, an AGI must also develop a capacity for logical reasoning that goes beyond mere pattern recognition. Former OpenAI chief scientist Ilya Sutskever points out that it is precisely this “reasoning” ability that makes AI models so unpredictable. A legendary example is “Move 37” in the Go match between Google's AlphaGo and world champion Lee Sedol.

In the middle of the game, the AI made a move no human expert had anticipated. All commentators initially dismissed it as a rookie mistake. But hours later, it became clear that this seemingly odd move was the brilliant turning point that led to AlphaGo’s victory. The system had discovered a strategy that lay outside human understanding. This event shows that AI is already capable of creative, superhuman problem-solving — but also how alien and opaque its thinking can be to us.

Conclusion: A Race Without a Compass

The technological foundation for AGI is far from settled — it’s the subject of deep scientific controversy. While one side pursues aggressive scaling of current systems, as Sam Altman’s plans clearly illustrate, the other warns that these very systems may be leading us into a dead end. This fundamental disagreement among the architects of the future may be the greatest risk of all. It shows that we’re speeding toward a goal whose mechanics and trajectory remain poorly understood.

What happens when such an intelligence, built on contested principles, begins to develop its own goals? In the coming weeks, we’ll explore the concrete warnings from the “Godfathers of AI” and examine the real threat of AI misalignment.

[Download Paper as PDF]

At SemanticEdge, we are closely monitoring these developments. As a pioneer of conversational AI, we understand both the potential and the risks of advanced AI systems. SemanticEdge stands for secure and transparent conversational AI through the interplay of generative AI with a second, expressive, rule-based intelligence that minimizes the risk of hallucinations and alignment faking. Subscribe to our newsletter for further analyses from our research paper.