Infinite Synthetic Content
"Authenticity is becoming infinitely reproducible."
To recap some of last year’s Flip The Tortoise posts: generative artificial intelligence tools, and the companies that build them, have dropped us into a world where we no longer know what real looks like or sounds like.
And in many cases, we don’t care that we don’t know.
Maybe we should care a bit more.
Sometimes, when we can tell that something isn’t real, we don’t like it, especially when it comes in a commercial advertising format.
It’s a weird distinction, considering that any image, video, or audio clip we encounter online, whether in a feed or in the news, may now be “generated” or augmented in a way that challenges our construct of knowledge and the very foundation of epistemology.
The challenge to our way of knowing things didn’t happen overnight.
Our basic understanding that a photo represents some version of something that is “true” has deteriorated over time.
Apple, Google, Samsung, and others have equipped us with cameras that augment photos digitally to the point where they aren’t really photos anymore.
They are creations.
Add in a filter and some point-and-click editing, and no digital image (which is nearly every image you see) is technically “real”.
Apple’s Craig Federighi has said that he’s “concerned” about the impact of AI editing on the photos our phones produce.
Samsung’s Patrick Chomet was clear last year that “actually, there is no such thing as a real picture,” following controversies over the Samsung phones’ approach to astrophotography.
Let’s look at a more light-hearted example.
I took a selfie with my dog, Gurdy.

Gurdy The Goldendoodle.
It isn’t a great photo.
So I used the magic eraser on my Google Pixel to remove the protruding ceiling lamp and the vent in the top right corner of the photo.
Presto change-o, image editing magic!

Thanks, Google Pixel Magic Eraser.
That’s better, but it is still kinda boring.
I uploaded the photo to Gemini and asked it to put Gurdy and me on the International Space Station.
That goldendoodle, she’s a real space nut.

Space Dog!
That’s cool, but not very realistic.
Gurdy loves the water, so I thought it might be nice to visit a beach.

Dreaming Of Warmer Weather.
Gemini got the shadows in there on the left, but the lighting on my forehead and the cutout edges of my hoodie don’t make this very convincing.
Then I asked Gemini for a suggestion for where Gurdy and I should go next.
I have no clue why it picked a theatre, and I don’t know why our backs are to the screen, but here we are.

Gurdy Is A Real Film Buff. How Did Gemini Know?
We look much better here, but I thought it might be nice if we got a bit more dressed up.
I asked Gemini to put us in tuxedos.

Formal Doodle.
I think we look smashing.
There’s a bit more “gut” in this tux than I would like (thanks for nothing, Gemini).
Maybe it’s time to fashion a new year’s resolution to play with AI less and get out of the office more.
It took seconds to edit and generate these photos.
Maybe they aren’t especially real-looking, and you’re probably already familiar with how quickly an image can be edited or augmented.
It is nice that Gemini added its little watermark to the images.
However, it was just as easy to upload that image back to my phone and erase the watermark with Google’s own Magic Eraser, which makes you wonder why they bother adding it in the first place.

Google Is As Google Does. Goodbye Watermark.
I probably should have asked it to remove my gut.
The sheer speed and simplicity with which I was able to generate these images is again both the problem and the solution.
Anyone can generate an infinite amount of synthetic content.
Head of Instagram, Adam Mosseri, highlighted the challenge and the opportunity in his year-end post.
He points out that we have a brief moment where flaws in images are a weak, likely short-lived signal of authenticity.
It might be time to dust off the old Polaroid camera.
Once AI tools can simulate those flaws convincingly, and with no arbiter of real versus fake images (no watermark is gonna help), we will likely need to start from a place of questioning everything and work our way back to reliable sources.
Regardless, my dog looks good in a tux. Don’t you agree?
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
– Eliezer Yudkowsky, Machine Intelligence Research Institute