It was supposed to be a congenial dinner in San Francisco—until one simple question cast a chill over the room: Do you think today’s AI could someday achieve human-like intelligence (AGI) or surpass it? To some top tech executives, the answer is an obvious “yes,” and perhaps soon. To others, it’s far less certain. That divide underscores a growing rift within AI leadership about the reality of near-term superintelligence—and what it might take to get there safely.
Rising Optimism
From the outside, it can seem like most of Silicon Valley is confident that AGI is just around the corner. Several high-profile CEOs have declared that large language models (LLMs)—the core technology behind ChatGPT, Gemini, and more—are on track to match or exceed human intelligence in a matter of years.

- Dario Amodei (Anthropic) has suggested AI could be “smarter than a Nobel Prize winner across most relevant fields” as soon as 2026, ushering in a new epoch of sweeping benefits for society.
- Sam Altman (OpenAI) has spoken in similarly bold terms, claiming his company knows how to build “superintelligent” AI and that such breakthroughs could “massively accelerate scientific discovery.”
An Emerging Skepticism
But as the dinner conversation in San Francisco revealed, not all experts are convinced. A group of AI leaders—sometimes less vocal but increasingly willing to speak out—harbor real doubts about how soon, or even whether, today’s approaches can reach AGI. Their stance is not anti-technology but something more akin to “informed optimism,” grounded in a sober look at what current LLMs can and cannot do.

- Thomas Wolf (Hugging Face) recently critiqued the idea that near-future AI will spontaneously generate the kind of entirely new questions and breakthroughs that have historically defined Nobel Prize-level work. While LLMs excel at answering known questions, genuine scientific revolutions typically start by asking unprecedented questions—a far trickier feat to replicate in a model driven largely by patterns in existing data.
- Demis Hassabis (Google DeepMind) has indicated AGI might still be a decade away, based on the many tasks today’s models simply can’t handle.
- Yann LeCun (Meta) echoed this at Nvidia’s GTC, calling the idea that LLMs alone could achieve AGI “nonsense.” In his view, entirely new architectures will be essential if we want to move beyond glorified text completion.
Creativity as a Missing Piece
This skepticism doesn’t mean these researchers think AI is doomed to stagnate. Rather, they argue that key elements—like genuine creativity—are largely absent from current approaches. Without that creative spark, a model might never generate the left-field questions or insights that characterize big scientific leaps.

- Kenneth Stanley (Lila Sciences) is among those tackling this challenge head-on, investigating a field known as open-endedness. His startup aims to design AI capable of automating the full scientific process, from forming hypotheses to conducting experiments—a far cry from merely parsing datasets or generating code snippets on request.
- Stanley points out that building a “reasoning” engine and a “creative” engine are not the same task. AI reasoning methods, which systematically home in on the correct answer, can be “antithetical” to creativity, where venturing outside established goals often yields surprising innovations. In short, an AGI will likely need both structured reasoning and the capacity for imaginative thinking.
Why It Matters for AI Safety
For those concerned about AI safety, the notion that LLMs may not smoothly morph into AGI might come as a relief—at least initially. But it also raises new questions:

- If today’s models are missing key cognitive ingredients, do we risk pushing them too far, too fast?
- Could focusing on scale alone lead to systems that appear intelligent without robust safeguards or genuine creativity?
- What happens when new architectures that enable creativity finally do emerge—how do we ensure safety from the start?
A Call for Grounded Innovation
The push and pull between AI optimists and realists reflects a necessary tension in a rapidly evolving field. On one hand, big visions galvanize investment, talent, and rapid progress. On the other, skepticism grounds those ambitions in practical realities—ensuring that safety, creativity, and genuine breakthroughs remain at the center of AI research rather than dismissed in the rush for hype.

For those watching from the sidelines or actively working in AI safety, this conversation is crucial. If we aim to steer AI toward societal benefit rather than unchecked risk, it’s vital to balance ambition with clear-eyed assessments of what our technology can truly achieve. After all, progress in AI shouldn’t be about winning a hype race. It should be about building a future in which advanced AI—whether it matches human intelligence or not—serves humanity’s broader interests, safely and ethically.
Conclusion

In the global race to develop ever more powerful AI, the “AI realists” remind us that hype alone doesn’t solve fundamental engineering challenges—especially those involving creativity and subjective judgment. Their message resonates strongly within AI safety circles: as we push toward transformative AI, let’s do so with a firm grounding in the technical details, mindful that the path to true AGI is neither guaranteed nor likely to be straightforward. By acknowledging the gaps in today’s models and methodically charting a path forward, we stand a better chance of creating AI systems that are not only more capable, but also fundamentally safer and more beneficial for all.