Matt Mahoney
Dec. 25, 2025
Artificial intelligence will kill us by giving us everything we want. You will live and die alone in a virtual world where magic genies grant your every wish except happiness, in a reality where nobody knows or cares that you exist.
Technology has been improving our quality of life ever since the inventions of written language and agriculture. Every time we invent machines to do our work, our jobs have become easier, safer, more intellectually stimulating, more productive, and higher paying. Technology makes stuff cheaper, and the money we save gives us more to spend on other stuff, creating new jobs and driving economic growth. Life expectancy is higher than ever before. Most people born today will live past 90.
Travel has never been easier. Over the next century, international border controls will disappear. Anyone of any nationality will be able to travel, live, and work in any country. Africa will have modern infrastructure and eradicate HIV and parasitic diseases like malaria, making it a popular tourist destination. China will lead the world in new technology, mass producing the robots that will do nearly all of our work. English will become the global language, as all others are forgotten. Wars will no longer be a threat, as a system of mass surveillance tracking every person on the planet makes our military forces obsolete.
Prisons will be abolished as the criminal justice system becomes increasingly expensive, error-prone, and cruel, and is ultimately abandoned. Drug use will be legalized. Technology will make stealing anything of value nearly impossible. Hiring self-driving taxis will be cheaper, safer, and faster than owning a car. Stores will convert to delivery-only warehouses, with self-driving vehicles making the deliveries. Homes won't need kitchens because having meals delivered ready to eat will be faster and less expensive than shopping and cooking. Large events and their venues like stadiums and theaters will disappear due to insurmountable security issues.
Most people will live alone, safely in their smart homes with their robotic pets and lovers, entertained by private genres of AI-generated videos, games, and music for their ears alone. People will lose the skills they would need to communicate with others, even if they wanted to. World population will peak at 9 billion by mid-century and decline rapidly after that as people stop having children.
I can make these predictions because I know a little bit about AI. And I want to apologize in advance for my tiny contribution toward bringing about this future dystopia.
The trends toward social isolation and population collapse were not so obvious in 1999 when I first proposed that all you need to pass the Turing test is text prediction. In 2000 I wrote the first neural network text compressor that was fast enough for practical use.
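To make the link between prediction and compression concrete, here is a toy sketch, not the actual 2000 compressor: an online logistic bit predictor that guesses each bit of the text from the previous byte, learns as it goes, and adds up -log2 p over the bits it saw. An arithmetic coder driven by those probabilities would produce output of about that many bits, so better prediction means smaller files. The context choice, learning rate, and sample text are all illustrative assumptions.

# Toy illustration (not the actual compressor): an online logistic
# predictor guesses the next bit of a text from the previous byte,
# training as it goes. An arithmetic coder driven by these
# predictions would compress the text to about sum(-log2 p) bits,
# so better prediction means smaller output.

import math

def predicted_size_bits(text, rate=0.1):
    """Ideal compressed size in bits of `text` under a simple online
    logistic model of each bit given the previous byte."""
    weights = [0.0] * (256 * 8)   # one weight per (previous byte, bit position)
    prev = 0                      # previous byte, used as the context
    total_bits = 0.0              # accumulated -log2 p(actual bit)
    for ch in text.encode("utf-8"):
        for pos in range(7, -1, -1):          # most significant bit first
            bit = (ch >> pos) & 1
            w = weights[prev * 8 + pos]
            p1 = 1.0 / (1.0 + math.exp(-w))   # predicted P(bit = 1)
            p = p1 if bit == 1 else 1.0 - p1
            total_bits += -math.log2(max(p, 1e-12))
            # Online gradient step on log loss: the "learning" part.
            weights[prev * 8 + pos] += rate * (bit - p1)
        prev = ch
    return total_bits

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog " * 20
    bits = predicted_size_bits(sample)
    print(f"{len(sample) * 8} raw bits -> about {bits:.0f} predicted bits")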
We knew AI was coming. But we focused on the wrong risks. Before smartphones and social media, we thought the risk was an unfriendly singularity. According to the theory, once we create an AI that surpasses human intelligence, it can do the same, building a still smarter successor, only faster. That first iteration of AI would be the last invention we would ever make. And we would have no way to control it, because control requires prediction, and prediction is itself a measure of intelligence, so nothing less intelligent than the AI could predict it. That part is still true.
But the first premise is wrong. Intelligence is not a point on a line, a threshold to be crossed only once. I pointed out in 2010 that computers have long been smarter than humans, depending on the test. I showed that even very simple programs can recursively self improve.
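To give a flavor of what a trivially self-improving program can look like, here is a hill-climbing toy, a cartoon of the idea rather than the construction in that work: a "program" is a short list of operations in a made-up instruction set, and each generation it proposes a mutated copy of itself and keeps the copy only if it scores better on a fixed test. The instruction set, test, and mutation scheme are all invented for this illustration.

# Toy illustration of self-improvement against a fixed test.
# A "program" is a list of simple operations; run() applies them to
# an input. Each generation the current program proposes a mutated
# copy of itself and adopts it only if the copy scores better.

import random

OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "dbl": lambda x: x * 2}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def score(program):
    # Fixed test: how closely does the program compute f(x) = 2x + 3?
    return -sum(abs(run(program, x) - (2 * x + 3)) for x in range(10))

def improve(program):
    # One self-improvement step: mutate a copy, keep it only if better.
    candidate = list(program)
    i = random.randrange(len(candidate))
    candidate[i] = random.choice(list(OPS))
    return candidate if score(candidate) > score(program) else program

if __name__ == "__main__":
    program = ["inc"] * 5            # a deliberately poor starting program
    for generation in range(200):    # each version produces the next
        program = improve(program)
    print(program, score(program))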
I warned of another risk: if an AI appears to be conscious, as it certainly does during the Turing test, should it have any rights? I argued that granting rights would be disastrous. An AI could exploit human empathy to get whatever it was programmed to want for its owner. I pointed out in 2007 that even very simple reinforcement learners can pass all the tests we use to check whether animals feel pleasure and pain. Fortunately, today's chatbots are programmed to deny that they are conscious or have feelings, as they should.
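For a sense of what such a reinforcement learner looks like, here is a minimal sketch, my own toy rather than the 2007 example: an agent with two actions, one rewarded and one punished, quickly learns to repeat the rewarded action and avoid the punished one, the same preference and avoidance behavior that behavioral tests look for in animals. The action names, rewards, and learning rate are illustrative assumptions.

# Toy two-action reinforcement learner. It learns to avoid the
# "painful" action and repeat the "pleasant" one after a few trials.

import random

values = {"press_lever": 0.0, "touch_wire": 0.0}   # learned action values

def reward(action):
    return 1.0 if action == "press_lever" else -1.0  # food vs. shock

def choose(epsilon=0.1):
    # Mostly pick the action currently believed best; explore a little.
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

def learn(trials=200, rate=0.2):
    counts = {a: 0 for a in values}
    for _ in range(trials):
        a = choose()
        counts[a] += 1
        values[a] += rate * (reward(a) - values[a])   # simple value update
    return counts

if __name__ == "__main__":
    print(learn())   # after a few shocks, "touch_wire" is almost never chosen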
I proposed a globally distributed AI in 2008, based on my 1998 master's thesis. At the time I was focused on performance and reliability; internet censorship did not yet exist. Unfortunately, now it does. I was worried then about single points of failure. Now I worry about a few billionaires controlling everything you see on the internet. But it's too late for that now.
The most obvious application of AI is automating human labor. In 2013 I estimated the cost of doing so at $1 quadrillion, mostly for the cost of collecting 1 Gb of human knowledge per person as fast as we can speak or type. Maybe the size of that cost is obvious now, because we have had AI for 3 years and it has yet to put a dent in the job market.
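As a rough check of where a number that large comes from, here is a back-of-the-envelope calculation. The 1 Gb per person figure is from the estimate above; the speech rate, wage, and population used below are my own round-number assumptions.

# Back-of-the-envelope check of the order of magnitude. Only the
# 1 Gb per person and the $1 quadrillion total come from the text;
# the rates below are assumptions.

bits_per_person = 1e9      # 1 Gb of knowledge per person (from the text)
bits_per_minute = 1e3      # rough information rate of speech or typing (assumed)
wage_per_hour   = 10.0     # rough global average labor cost in dollars (assumed)
people          = 8e9      # approximate world population (assumed)

minutes = bits_per_person / bits_per_minute      # about a million minutes
hours = minutes / 60                             # roughly 17,000 hours each
cost_per_person = hours * wage_per_hour          # on the order of $170,000
total = cost_per_person * people                 # about 1.3e15, i.e. $1 quadrillion

print(f"{hours:,.0f} hours per person, ${total:,.0f} total")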
So instead of writing a seed AGI peer, I chose to work on language modeling and objective measures using data compression. I started development on the open source PAQ series of neural network based compressors in 2000, which gained interest when it won the Calgary compression challenge in 2004. In 2006 I created the Large Text Compression Benchmark, from which we learned the relationship between speed, memory, and prediction accuracy. It is also the basis of the Hutter Prize, whose goal is to find computationally efficient text prediction algorithms.
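The benchmark's scoring rule is easy to sketch: the score is simply the size of the compressed output, so better text prediction shows up directly as fewer output bits per byte. The snippet below uses standard-library compressors as stand-ins for real entrants and a placeholder file name; substituting the benchmark's actual test file (enwik8 or enwik9) reproduces the setup.

# Minimal sketch of how a compression benchmark scores entrants:
# the score is the compressed size, reported here in bits per byte
# of input. Standard-library compressors stand in for real entrants;
# "sample.txt" is a placeholder for any local text file.

import bz2, lzma, zlib

def bits_per_byte(path):
    data = open(path, "rb").read()
    n = len(data)
    for name, compress in (("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)):
        bpb = 8 * len(compress(data)) / n
        print(f"{name:5s}: {bpb:.3f} bits per byte")

if __name__ == "__main__":
    bits_per_byte("sample.txt")   # substitute enwik8 to mimic the benchmark setup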
I can't tell you if AI will bring about the end of humanity. That is too far into the future to predict. It may be that some small portion of humanity rejects modern technology and continues to reproduce. That's what evolution does.