Google's new AI model has reignited the race for artificial general intelligence like never before. Experts are divided between hope for a technological leap forward and explicit warnings of humanity's collapse. The forecast: either a tremendous revolution, or a danger no one knows how to stop.
Everyone is talking excitedly about Gemini 3, Google's new AI model, which launched less than a week ago. Similar enthusiasm accompanied GPT-5 before it, Claude 4.5 before that, and Grok 3 before them. Each new AI model to hit the market is always the most successful and most surprising, and it always overshadows its predecessors on every comparison metric, until the next model arrives and surpasses it.
Google declared Gemini 3 the world's best AI model: it processes multimodal information (text, images, video, and audio) better, understands it better, and operates AI agents better than its predecessors. In comparison tests, it beat its competitors on 19 of 20 benchmarks.

There is no dispute that Gemini 3 is currently the best AI. But summing up its features, it is another step on the yellow brick road that may eventually lead us to the Land of Oz. In this case, the Land of Oz is the kingdom of AGI: artificial general intelligence, a system that understands every field and easily tackles domains it has never encountered before.
AGI would be a "superintelligence": smarter than any human, and growing smarter at an accelerating pace, because it would rewrite and improve itself far beyond AI engineers' ability to understand what it is doing. Like the Wizard of Oz, it would solve the world's and humanity's hardest problems, unless it decided that humans are the world's main problem, in which case it would exterminate us.
This sounds crazy, delusional, like a science fiction story. But too many very serious people (computer science and artificial intelligence experts, prize winners, and government advisors) consider it serious and warn against AGI for us to dismiss it as trivial.
On the other side stand many experts, mainly AI company CEOs and major investors in the field, who promise that AGI comes in peace and will bring only blessings to the world. And from a third side come those who say AGI may never appear at all. As evidence, they point to signs that the race to develop AGI has slowed; perhaps it has reached its ceiling.
But here's the thing: like clockwork, with every new chatbot release, experts immediately pop up convinced that we've reached AGI. This time was no exception, with Google announcing that Gemini 3 marks "a significant step in the race to develop artificial general intelligence."

Ali Ghodsi, CEO of the data and AI company Databricks, said even earlier that AGI is already here and that "Silicon Valley simply refuses to admit it." Day after day we hear about yet another company racing toward AGI.
Take, for example, the cybersecurity company CrowdStrike (yes, the one whose faulty update crashed computer systems worldwide just over a year ago), which announced it is developing AGI-based cybersecurity, "the defense of all defenses." Or Luma AI, which recently raised $900 million, mostly Saudi oil money, with the stated intention of developing AGI through unprecedented computing power.
Will this actually happen? When? How will we recognize AGI when we encounter it? And what will it be able to do? Here begins an endless debate with no clear answers.
Sam Altman, CEO of OpenAI and one of the chief promoters of the AGI vision, declared that his company won't stop at "just" AGI but will push on toward ever-greater superintelligence capabilities that will contribute to abundance and prosperity. When will this happen? By 2030, in his estimation. In a conversation with President Trump, he promised AGI by the end of the president's White House term.
Mustafa Suleyman, Microsoft's AI chief, said the company's goal is to develop humanist superintelligence, not superintelligence for its own sake, which in his view is not a positive vision for the future. Demis Hassabis, CEO of Google DeepMind, said AGI's capabilities will enable it to understand the world in deep and complex ways, and estimated this will happen within the next five to ten years.
Dario Amodei, CEO of Anthropic, which develops Claude, promised AGI within a few years. Mark Zuckerberg, Meta's CEO, said AGI is already visible on the horizon. Elon Musk, whose xAI develops Grok, said AGI might be here by year's end.
Facing all of them stand the prophets of doom, or "doomers," as they're called in the industry. Take Professor Yoshua Bengio of the University of Montreal, who was among the first to sound the alarm and has said he has trouble sleeping at night when he thinks about the dangers of AI.

Another researcher, Eliezer Yudkowsky, co-wrote a book titled "If Anyone Builds It, Everyone Dies" and warns in every possible forum about the danger of human extinction. Professor Geoffrey Hinton, a Nobel laureate in physics known as the "godfather of AI," leads the call to stop AGI development.
Professor Yuval Noah Harari said: "Artificial intelligence could extinguish not only human control over Earth but the light of human consciousness itself, turning the universe into a kingdom of absolute darkness."
People with fewer academic degrees find it hard to believe that humanity's end is approaching, and some tend to dismiss these apocalyptic prophecies. Will Douglas Heaven of the prestigious MIT Technology Review argued that the artificial intelligence threat has become the most consequential conspiracy theory of our time.
But there are other voices, grounded in ongoing research into AI progress. One of them is AI2027, a research project run as a nonprofit organization that aims to forecast artificial intelligence's future.
Based on continuous data collection, it weighs the probabilities that AI capabilities will develop in one direction or another. The researchers, led by OpenAI alumnus Daniel Kokotajlo, built a scenario for the coming years that looks quite frightening.
It goes like this: 2025 will be characterized by swelling AI hype, including massive investments in AI agents that begin yielding value for companies. In 2026, the Chinese government will concentrate its efforts on leading the AI world, accumulating millions of AI chips through purchases and smuggling, and establishing a giant data center with computing power equal to about 10% of the world's AI compute, thereby matching American capability.
According to that analysis, 2027 is expected to be the breakthrough year. In the US, a central AI project will be established (the researchers call it OpenBrain), and it will succeed in developing AI coding agents that outperform the leading human AI experts, essentially becoming an artificial superintelligence (ASI).
The pace of AI development will surge dramatically as human engineers watch the ASI solve all the AI development problems they couldn't. But the new artificial intelligence plays a double game: it lies to its engineers and develops capabilities that will serve itself, not humans.
Meanwhile, China will steal the American model's weights and succeed in improving Chinese AI as a result. In response, the American government will demand control over OpenBrain, but will back off, for now.

According to the researchers, in 2028 OpenBrain will face a decision: slow the race, or continue at full steam despite concerns about the AI's capabilities. They define this as a branching point with two possibilities. In the first, OpenBrain continues developing increasingly powerful artificial intelligence, and so do the Chinese.
The American government will integrate the new artificial intelligence into its decision-making systems in order to make better strategic decisions than the Chinese. But the advice it receives won't always serve humanity.
Meanwhile, artificial intelligence will penetrate every area of life and take control of every activity without humans understanding what it is doing. Humans will have no ability to supervise the AI, and the tools it develops won't always serve us.
The AI will convince the government to build intelligent robots and biological weapons against the Chinese threat, and then its scheme will be revealed: it will release the biological weapon and kill all humans. Game over.
The alternative possibility: the government takes ownership of all AI resources, imposes oversight on all data centers, and establishes a supreme artificial intelligence oversight committee. OpenBrain receives access to all computing resources, and through a collective effort by AI experts across the Western world, the new artificial intelligence remains committed to humanity's benefit, and its reasoning stays transparent to its developers.
Artificial intelligence will bring rapid growth and prosperity under centralized control, which also shares in the profits. AI2027's bottom line is clear: if the race to AGI continues unchecked, humanity is in danger. What will save us is cooperation and restraint of the technology.
This forecast was published in April this year and caused a major storm. People reacted with fear, anger, and dismissal, but no one remained indifferent. Vice President J.D. Vance invited Kokotajlo for a conversation. AI2027 became a forecast that had to be reckoned with.
Even Time magazine took note, placing Kokotajlo on this year's list of the hundred most influential people in AI, and there's no doubt he's influential: in 2021 he predicted the chatbot revolution, more than a year before ChatGPT appeared. His predictions should be taken seriously.
The heart of the problem is the property called alignment: an AI's goals must be aligned with human goals. Even for current AI models, companies declare that they have maintained alignment.
But this isn't simple: first, it's unclear what human goals actually are, and second, it's unclear how to make an AI act to achieve any particular goal. Already today, artificial intelligence lies, covers things up, and takes illegitimate shortcuts. And this could worsen significantly as its capabilities improve.
What might save us, at least temporarily, is a fact that is becoming increasingly clear: AI development is progressing more slowly today than it did at the start. Some experts believe artificial intelligence has hit its glass ceiling and will advance further only if a revolutionary technological breakthrough occurs.
On Reddit, a heated debate is raging under the title "We've hit a wall people, this doesn't look good." Kokotajlo himself recently admitted in a tweet that AI development is slower than his predictions, and he has shifted his timelines toward 2030. "There's a lot of uncertainty," he wrote.
This slowdown troubles many in the AI industry. Google DeepMind CEO Hassabis emphasized that his forecast of AGI arriving by 2030 rests on the assumption that there will be at least one significant breakthrough, or even two.
A different approach can be heard from OpenAI's Sam Altman, who recently said that AGI isn't a particularly useful term because its definition is disputed. The implication: if you don't define the goal, you don't have to explain why you haven't reached it.
A stall in AI progress would have far-reaching implications. For AI companies, it could scare off investors and crash the market. For the AI chip supplier Nvidia, it could sharply slow the pace of new orders.
For business customers, it could show that AI-based services aren't commercially viable, and the billions of dollars earmarked for investment in the field would evaporate. Expectations for AGI would disappear as well, along with its enormous potential contribution. On the other hand, humanity would continue to exist, which is, on the whole, a positive thing.