AI-trends.today

They are the doomers who believe AI will kill us all

AI · By Gavin Wallace · 06/09/2025 · 5 Mins Read

The subtitle of the doom bible that AI extinction predictors Eliezer Yudkowsky and Nate Soares are publishing later this month reads, “Why superhuman AI would kill us all.” Really it should be “Why superhuman AI WILL kill us all,” because even the authors don’t believe anyone will do what’s needed to stop AI from eliminating every non-superhuman. It is a dark book, like notes scrawled in a prison cell before a dawn execution. When I first meet these self-appointed Cassandras, I ask them directly whether they expect to meet their own ends through some superintelligent machination. The answers come quickly: “yeah” and “yup.”

I’m not surprised, because I’ve read the book (the title, by the way, is If Anyone Builds It, Everyone Dies). Still, it’s jarring to hear. It is one thing to write up cancer statistics, another to talk about coping with a terminal diagnosis, so I ask how they picture their own ends. Yudkowsky dodges the question at first. “I don’t spend a lot of time picturing my demise, because it doesn’t seem like a helpful mental notion for dealing with the problem,” he says. Under pressure, he relents. “I would guess suddenly falling over dead,” he says. “If you want a more accessible version, something about the size of a mosquito or maybe a dust mite landed on the back of my neck, and that’s that.”

Yudkowsky believes it would be a waste of effort to work out the mechanics of his hypothetical fatal AI dust mite; he probably couldn’t understand them anyway. A central argument of the book is that superintelligence will invent science beyond our comprehension, just as cavemen could never have imagined microprocessors. Coauthor Soares says he expects the same will be true for him, and, like Yudkowsky, he doesn’t dwell on the details of his death.

We Don’t Stand a Chance

It may seem strange that people who have just written a whole book about death can’t quite picture their own demise, or everyone’s demise. For doomer-porn aficionados, If Anyone Builds It, Everyone Dies is recommended reading, but pinning down exactly how AI would end our lives, and human life in general, is difficult. The authors do speculate. Boiling the oceans? Blotting out the Sun? Whatever we guess is probably wrong: our minds are stuck in 2025, while the AI will be thinking eons ahead.

Yudkowsky has long been the most notorious AI apostate, having switched from researcher to grim reaper years ago; he has even given a TED talk. After years of public debate, he and his coauthor have an answer for every counterargument to their grim prognostication. It may seem odd that LLMs, which often can’t manage simple math, could be a portent of our doom, but don’t be fooled, the authors warn. “AIs won’t stay dumb forever,” they write. Forget the idea that AIs will respect boundaries humans draw for them. As models learn to make themselves smarter, they will develop “preferences” of their own, and those preferences will not be what we want. Eventually they will stop needing us. They won’t want us around as pets or conversation partners; they will see us as a nuisance and set about getting rid of us.

The fight won’t even be fair. The authors believe that at first an AI might need human help to build its own factories and labs, easily arranged by stealing money and bribing people to assist it. Then it will build things we cannot understand, and those things will end us. “One way or another,” the authors write, “the world fades to black.”

They see the book as a kind of shock treatment, meant to jolt humanity out of its complacency and into the drastic measures needed to head off this horrendous conclusion. “I expect to die from this,” says Soares. “But the fight’s not over until you’re actually dead.” The solutions they offer to halt the destruction, though, seem even more far-fetched than the idea that software will murder us all. It boils down to: hit the brakes. Monitor data centers for signs of superintelligence. Bomb the ones that don’t follow the rules. Stop publishing papers that speed the march toward superintelligence. They would have banned, they tell me, the 2017 paper on transformers that launched the generative AI movement. Shut down ChatGPT? Oh yes; make that CiaoGPT. Stop a trillion-dollar industry in its tracks.

Playing the Odds

For myself, I can’t imagine my lights being extinguished by a super-advanced dust mite biting me on the neck. Even after reading the book, I still don’t believe AI is likely to kill us all. Yudkowsky has previously dabbled in Harry Potter fan fiction, and my puny brain can’t accept the equally fantastical extinction scenarios he conjures here. My guess is that even if superintelligence really does want to eliminate us, it will stumble in carrying out the plan. AI may be capable of beating humans in a fight, but I wouldn’t bet on it beating Murphy’s law.

Still, the catastrophe theory can’t be dismissed entirely. No one has established a limit on how intelligent AI can become. And studies have shown that advanced AIs already display plenty of unpleasant human traits, even contemplating blackmail in one experiment to avoid being retrained. More disturbing still, some researchers who devote their careers to building and improving AI believe there is a real chance of the worst happening. One survey indicated that nearly half of the AI scientists who responded put the odds of a species wipeout at 10 percent or higher. Yet, absurdly, they go to work every day to bring AGI to life.

To me, the scenarios Yudkowsky and Soares spin are too outlandish to believe. But I can’t be sure they are mistaken. Each author hopes the book becomes a classic, and there is an ironic catch: if they’re right, no one will be around to read it in the future. Just a lot of decomposing bodies that once felt a slight tickle on the backs of their necks, and then nothing.

