Getting Sloppy
Slop is as slop does.
Oh Hi!
I hope you’re enjoying reading these as much as I am enjoying writing them.
Cheers,
-Growdy
“ChatGPT can make mistakes.”
This message sits at the bottom of the ChatGPT window, a standing reminder to check everything ChatGPT generates and throws at you.
I asked ChatGPT, “What does an image of AI slop look like?” This is the image it gave me.

Yep, That’s Slop Alright
I think it nailed it.
This wasn’t a mistake.
It also wasn’t what I was expecting.
To be fair, I didn’t give much direction.
But also, ew, gross.
AI-generated slop is already a problem. Slop is spamming the internet.
Any idiot can create sloppy, spammy AI content.
And a lot of people can’t tell the difference between human-made content and AI-generated slop.
In other cases, people don’t care.
Some folks kinda like the slop.
A few weeks ago, I wrote about “Metacognitive Laziness” and the risk that AI will change how we think by altering how we use language.
Are large language models susceptible to the same sorta thing?
A new study from the University of Texas at Austin, Texas A&M, and Purdue University suggested that large language models fed a diet of popular, low-quality, sloppy social media content (some human-made, maybe some not) experience a kind of “brain rot” similar to what a person might experience after too much doomscrolling.
These researchers set out to determine whether LLMs are susceptible to brain rot the way we seem to be.
If you’re not entirely familiar with the concept of brain rot, it was voted the Oxford Word of the Year for 2024. Brain rot is defined as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration”.
“If large language models learn from the same internet firehose, the question becomes unavoidable: what happens when we keep feeding models the digital equivalent of junk food? Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

As the models were fed more junk, they exhibited cognitive decline, including reduced reasoning abilities and degraded memory.
The models also became less ethically aligned and more psychopathic (rut roh).
The challenge now is that about half of the new content published online is created by artificial intelligence.

Reproduced from Graphite.io; Chart: Axios Visuals
Although MIT research recently suggested that AI-generated content is improving, much of that content is still likely slop.
And as models consume more of that spammy slop, they are at risk of seriously sucking and possibly even “collapsing”.
“Model collapse” is the term researchers use for what happens to generative AI models when they are trained on data generated by other AIs rather than by humans.
Since we use the open internet as the source of the language we feed large language models, we don’t just put our own cognitive well-being at risk; we also put the machine’s cognitive well-being at risk.
As we flood the internet with slop, the likelihood of model collapse increases.
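To see how that can happen, here’s a deliberately crude sketch in plain Python. It is not how LLM training actually works: a simple Gaussian fit stands in for “training,” and trimming the distribution’s tails stands in for the way generative models favor typical, high-probability outputs. Every name and number here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data with a full, rich spread.
data = rng.normal(0.0, 1.0, size=50_000)

for gen in range(8):
    # "Train": fit a toy model (just a mean and a spread) to the current data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {gen}: spread = {sigma:.3f}")

    # The next generation's training data is this model's own output...
    samples = rng.normal(mu, sigma, size=50_000)

    # ...minus the tails, since generated content clusters around the typical.
    data = samples[np.abs(samples - mu) < 2.0 * sigma]
```

Run it and the spread shrinks every generation, from about 1.0 down to roughly 0.4 by generation seven. The rare, weird stuff at the edges of the original human data quietly disappears, which is roughly what researchers describe as the early stage of model collapse.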
What is the solution?
Some have suggested that the only way to protect AI from model collapse is to limit its understanding of the world to content published before 2023 (before OpenAI opened the proverbial Pandora’s box of ChatGPT).
And to add new material only as it is curated and published by human hands.
But that’s a weird bit of information to digest.
Was the world of 2023 (at least in its written content) the best example of our best selves?
I’m not sure. Signs point to no.
The old adage that you are what you eat appears to hold true for large language models just as it does for people.
Slop is as slop does.
And spam is only delicious when served with pineapple.
“Not all AI content is spam, but I think right now all spam is AI content.”
– Jon Gillham, CEO of Originality.ai