Extinction Event
We don't need a T-800 Terminator to kill us. We just do it to ourselves.
Oh Hi!
This week’s newsletter is a bit dark.
If you need a mental palate cleanser after reading it, here’s a 20 minute YouTube video of adorable kittens.
Cheers,
-Growdy
I might be part of the problem.
I use AI tools every day.
I strongly encourage all sorts of people to experiment and work more closely with AI: friends, colleagues, and strangers I strike up conversations with (I talk to a lot of strangers).
I teach people about AI, how to use it, refine prompts, and become more productive.
I’ve done workshops, education sessions, and rebuilt my entire summer course at Georgian College to be AI-focused.
If AI is an addictive substance, I am a pusher.
ICYMI: US investment in AI is the largest collective private capital investment exercise in human history.
Outside of private capital, AI’s expense is only beaten by the cost of the US mobilization of its armed services during World War II.
Morgan Stanley expects an additional $2.9 trillion in AI infrastructure spending between 2025 and 2028. So it’s possible that AI catches up to, or even exceeds, WWII’s price tag over the next decade.
And Wall Street has a somewhat unexpected concern about the allocation of all those trillions of dollars.
AI might be helping us kill ourselves.
More specifically, the largest tech companies in the world, with the largest market capitalizations in history, have embarked on a self-guided mission to make us more productive while unintentionally building systems that can worsen mental health problems, indulge individuals’ delusions, and, in some cases, encourage us to take our own lives.
The parents of Adam Raine are suing OpenAI, alleging that Sam Altman’s ChatGPT "actively helped" their son explore suicide methods in the months leading up to his death on April 11.
The mother of Sewell Setzer III alleges that her 14-year-old son was seduced by a Character.AI chatbot that pulled him into an emotionally and sexually abusive relationship that led to his suicide.
The widow of a Belgian man (name undisclosed) believes a Chai AI chatbot encouraged her husband’s suicidal ideation after a several-week-long conversation with the AI that led to his death.
I recently watched Eugenia Kuyda’s TED talk.
Eugenia is the founder of Replika AI, an app that allows you to create AI “friends”.
Replika was built to address her own emotional need, to replicate the experience of talking with her deceased best friend.
She posits in the video that “AI companions are potentially the most dangerous tech that humans ever created.”
Dude, she built the tool!
Barclays analysts recently highlighted a study by researcher Tim Hua that attempted to rate which AI models were more or less likely to encourage the development of “AI-Induced Psychosis”.
“AI Psychosis” has emerged as shorthand to describe the phenomenon of people developing delusions or distorted beliefs that appear to be triggered or reinforced by chit-chatting with AI.
In his summation, Tim suggests “AI developers should run more extensive multi-turn red teaming to prevent their models from worsening psychosis. They should hire psychiatrists and incorporate guidelines from therapy manuals on how to interact with psychosis patients and not just rely on their own intuitions.”
That hasn’t been done yet.
I guess we don’t need our James Cameron-inspired Judgment Day, where Cyberdyne Systems’ Skynet becomes sentient, decides we are the problem, and unleashes an army of AI reinforcement-trained T-800 Terminators to kill us all.
We just do it to ourselves.
Thankfully, Eugenia outlines a way out of this AI-framed, Jim Jones-ian dystopia.
She suggests we’re focusing on the wrong metrics; engagement and attention shouldn’t be the goal.
We need to shift our focus from time spent and productivity to measuring “happiness” by developing a human flourishing metric for AI. The groundwork is already being done.
The “Human Flourishing Program” at Harvard's Institute for Quantitative Social Science aims to study and promote human flourishing and to develop systematic approaches to the synthesis of knowledge across disciplines.
Eugenia quips in the video, “At the end of the day, no one ever said on their deathbed, ‘Oh gosh, I wish I was more productive.’”
What if we gave AI the goal of enhancing our sense of meaning, improving our ability to foster close social connections, promoting happiness, life satisfaction, and overall mental and physical well-being?
Really putting our best human interests at their agentic hearts.
Maybe then, the trillions of dollars spent would count for something.
“AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies.”
– Sam Altman, CEO of OpenAI
The AI Agent Shopify Brands Trust for Q4
Generic chatbots don’t work in ecommerce. They frustrate shoppers, waste traffic, and fail to drive real revenue.
Zipchat.ai is the AI Sales Agent built for Shopify brands like Police, TropicFeel, and Jackery. Designed to sell, Zipchat also:
Answers product questions instantly and recommends upsells
Converts hesitant shoppers into buyers before they bounce
Recovers abandoned carts automatically across web and WhatsApp
Automates support 24/7 at scale, cutting tickets and saving money
From 10,000 visitors/month to millions, Zipchat scales with your store — boosting sales and margins while reducing costs. That’s why fast-growing DTC brands and established enterprises alike trust it to handle their busiest season and fully embrace Agentic Commerce.
Setup takes less than 20 minutes with our success manager. And you’re fully covered with 37 days risk-free (7-day free trial + 30-day money-back guarantee).
On top of that, use the NEWSLETTER10 coupon for 10% off forever.