The Dead Internet Theory is an idea that has gained a lot of traction recently. I have to admit, the first time it was explained to me, I felt an eerie sense of recognition, like I had already been experiencing it but hadn't paid much attention to it. In that first moment, I felt scared for the future and nostalgic for the past. But that's an old man's attitude, to be so resistant to change. After that moment, I started reflecting. How did we get here? And how can we prevent the Internet from continuing down this concerning path?
What Is The "Dead Internet Theory"?
The main premise of the Dead Internet Theory is an outrageous statement that challenges our view of the state of the Internet:
The Internet feels empty and devoid of people.
This theory originated in a 4chan post around 2019, in which an anonymous user described his recent experience of the current state of the Internet. While his tone is pretty outrageous and paranoid (as is expected of a 4chan board), he raises some valid concerns that resonate with many other Internet users. This has led to the hypothesis gaining a lot of traction online, with others sharing their own experiences and thoughts on other Internet boards.
So what do they mean by empty and devoid of people? There are plenty of people on the Internet, right? I believe there are two factors to it.
First, we are increasingly interacting less directly with other humans. Now we are talking to an "audience". But really we are talking to the recommendation algorithm, our god, so it graces our posts with engagement. When we post on social networks, we don't expect our friends and relatives to see and interact with it. We expect strangers we don't know or care about to like it. This impersonal relationship is making the Internet less social and more of a hustle.
Second, an increasing number of Internet users are not people. They are bots. Fake profiles that algorithmically play the viral-content game to gain influence on social platforms. Advertising and scam e-mails, messages, and even calls, scaled out massively thanks to bots. Generative AI answering questions on behalf of actual people on PhysicsForums and other forums like StackOverflow.
The Internet was supposed to connect people all around the world. And it did that, beautifully. But for the last few years, it's been going in the opposite direction. It's driving us apart, isolating us from one another and keeping us satisfied with an Internet massively filled with content, with no need to interact with other people.
Is It Really That Bad?
To be fair, it's not like nobody uses the Internet to communicate with anyone anymore. Messaging apps are very prevalent and their users are mostly human. We talk with our friends, make plans, catch up with people who are not as present in our lives as they used to be. But this is a private Internet, not an open Internet. A private chat is not meant to be shared, to be discovered by people interested in what you are talking about, or to become a little garden in a corner of the Internet that's just yours.
Also, people are not interacting with the Internet like they used to. I believe the rise of smartphones has contributed to that greatly. Smartphones are heavily optimized for consuming content, and especially for infinite, mindless scrolling. It's much more lucrative for an app to have its users trapped in an infinite scroll, consuming content (and ads) by the ton, than to have them create and share. And to achieve that, it provides (imposes on) you an amazing recommendation algorithm that will play on your psychology to keep you engaged. But if people are only consuming, they are not interacting, discussing, or building. You see a lot of people on your feed, sure, but in such large numbers and in such an impersonal way that it doesn't feel like there's anyone on the other side.
Paradoxically, recommendation algorithms also make us more likely to interact with content that arouses anger in us than with content we agree with, since anger drives more engagement. This amazing video by Kurzgesagt talks about the psychology of social interactions in an open, algorithmic Internet. It comes to the conclusion that small, early-2000s forum-like communities were the closest to the social mechanisms our brains are accustomed to and thrive in.
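To make the incentive concrete, here is a minimal, hypothetical sketch of what an engagement-optimized feed ranker boils down to. The fields, the weights and the predicted_outrage signal are all made up for illustration; real platforms use far more complex, proprietary models:

```python
# A hypothetical sketch of engagement-based feed ranking, not any real
# platform's algorithm. Fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int
    predicted_outrage: float  # 0.0 to 1.0, an assumed "provokes anger" signal

def engagement_score(post: Post) -> float:
    # Comments and shares are weighted higher than likes because they keep
    # users on the platform longer; content that provokes strong reactions
    # (often anger) is boosted further, so it rises to the top of the feed.
    return (post.likes + 3 * post.comments + 5 * post.shares) * (1 + post.predicted_outrage)

def rank_feed(posts: list[Post]) -> list[Post]:
    # No notion of "who you care about", only "what keeps you scrolling".
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in a ranking like this asks whether the post comes from someone you know, or even from a person at all.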
And it has gotten worse recently, thanks to Generative AI and Large Language Models. Bots were already popping up everywhere, but with the power of recent Generative AI models, they have gotten much better at deceiving people. And some content posted by people is not actually thought through and written by those people: they just asked an LLM to write it for them and copy-pasted the output into their post. A lot of news, and articles like this one, are written almost instantly (and without any thought behind them) thanks to Generative AI. Yes, I have also used it to write, but I try to keep it as a proofreading/editing/brainstorming/idea-drafting aid. I don't post anything I haven't thought about deeply or that didn't originate from me.
StackOverflow is a board for asking and answering programming-related questions. Since ChatGPT's release, it has seen a lot of answers generated by LLMs. I would argue that if you post a question there, you would like an answer from a person who has experience with that topic. If you wanted an answer from ChatGPT, you would ask it yourself. And sometimes you might, but ChatGPT generally cannot solve problems it has never seen the way an expert would, yet it will still happily answer. This makes a lot of those answers confidently wrong and devoid of human expertise. This is why StackOverflow has decided to ban all use of Generative AI in content posted to the platform.
The wave of content generated by AI and unapologetically unreviewed by humans has been so strong that a term has been coined for it: AI Slop. Slop is often described as the AI-generated equivalent of spam. I would define it as low-quality content created by a Generative AI model without any human review or even human thought behind it. AI Slop threatens to flood the Internet with useless posts, algorithm-pleasing content, and mass-produced AI-generated pictures and videos at an unprecedented pace, since no human is needed to create them.
As if this wasn't bad enough, Meta is now experimenting with not only allowing AI bots on its platforms, but deploying such bots itself to drive up engagement. In an interview with the Financial Times, Connor Hayes said:
They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform ... that’s where we see all of this going.
[...] make our apps more entertaining and engaging.
Basically, if you post something on Instagram and nobody likes it or comments on it, that might dissuade you from interacting with the app. But if your post suddenly got a lot of likes, comments and messages, even if they came from AI profiles, you would spend more time on the app, never actually interacting with anyone.
If we continue down this path, soon 99% of all content on the Internet will be AI generated, and the Internet will have turned from a place to share and communicate with other people into a place to consume endless slop. It will be lonelier than ever: an Internet without people.
My Final Thoughts + Possible Solutions
In my opinion, the epicenter of the problem is not AI as a whole but recommendation algorithms and infinite scrolling. The problem is a business model built on the need to drive up engagement and keep users on the platform at any cost. And it's a titanic problem to tackle, since the whole modern Internet is built around this concept.
AI-generated content can also be a problem, but I believe it's more of a misuse problem than a fundamental problem with the technology. Generative AI models should be our helpers and assistants, but not take over our personas.
Some people advocate for making AI-generated content easier to spot via watermarking. I don't believe this is the way, not only because it's difficult to do for all the kinds of output LLMs can produce, but also because it's nearly impossible to enforce.
Simon Willison has a great oath on personal AI ethics. While he admits to using AI as a writing assistant, he promises not to post anything that takes longer to read than it took him to write. And I promise to do the same. Because, just like Simon, I think it's rude to publish text that you haven't even read yourself.
As for possible solutions to make the Internet less dead, and taking into account that deleting the Internet and starting from scratch is not possible, I've come up with a few actions that I will be applying:
Build social circles in online spaces that don't feature recommendation algorithms or infinite scrolling and that have a limited number of people. Think Discord/Slack groups or online forums. The lack of recommendation algorithms will let you explore the content and people you are interested in more directly. The lack of infinite scrolling will keep you from endlessly consuming content and encourage you to interact and build more. And the limited number of people will make it easier to build connections with other members.
Also interact beyond that circle to get content and world views from outside it, but be very critical of what you read and see. Keep in mind that it might very well be AI generated. Don't let that take up most of your time.
Favor subscription feeds like RSS over algorithmic recommendation apps and web pages. Subscribe to your favorite blogs (wink), newsletters and podcasts via an RSS reader or similar, and explore the personal pages of other people. Of all the content you consume, this is the kind you are most likely to enjoy, and it won't keep you stuck infinitely scrolling: when you are done, you are done (see the small sketch below).
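To make the contrast with infinite scrolling concrete, here is a minimal sketch of a finite feed reader. It assumes the third-party feedparser library and placeholder feed URLs, so treat it as an illustration rather than a recommendation of any particular tool:

```python
# A minimal sketch of a "when you are done, you are done" feed reader.
# Assumes the third-party feedparser library (pip install feedparser);
# the feed URLs are placeholders for whatever blogs you subscribe to.
import feedparser

FEEDS = [
    "https://example.com/blog/rss.xml",   # placeholder URL
    "https://example.org/podcast/feed",   # placeholder URL
]

def latest_posts(limit_per_feed: int = 5) -> None:
    for url in FEEDS:
        feed = feedparser.parse(url)
        print(f"\n{feed.feed.get('title', url)}")
        # A finite list: once you have read these entries,
        # there is nothing left to scroll.
        for entry in feed.entries[:limit_per_feed]:
            print(f"  - {entry.get('title', '(untitled)')}: {entry.get('link', '')}")

if __name__ == "__main__":
    latest_posts()
```

The feed ends when the entries end, which is exactly the point.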
In the end, to make your Internet less dead, the goal is to spend less time on the Internet while being better connected to the things that matter to you.
Top comments (8)
Certainly Zuck, with ai users on fb, will never become as lonely as Tom over at Myspace did ;).
omg lol
The internet has always had private personal (email), public personal (USENET, forums), and public nonpersonal (websites, Wikipedia) aspects, and a lot in between. Digital spam dates back to the 1990s. A lot of our current communication still mimics the analog world with letters, billboards, and newspapers, despite occasional claims of the "social" web, Web 2.0, Web3, Web 5, web whatever.
Many web designers and developers still call the initially visible part of a website "above the fold," as if it's a printed newspaper. Paper headlines, TV and billboard ads use deceptive techniques to catch people's attention and appeal to their lower instincts. That's not unique to our modern digital era either.
Now there is AI slop, but generative AI has already cut jobs for junior content creators and copywriters, so there is less human-made slop and more automated slop. The problem remains the same, just at a larger scale.
Definitely nodding along here, thanks for sharing.
This reminds me of Murphy's law: "Anything that can go wrong will go wrong." We can only do our best to make this world (internet) better.
Thanks for sharing, I'm also thinking a lot about why the interactions between me and my friends are different now.
Most of the time, I'm going to social media not to connect with people, but instead to scroll through mindless content.
Social networks: what started as connecting people is now driven by commerce, and connection is no longer at the center of it.
Great read ngl got me thinking a lot