Giving credit where credit is due, I entirely stole this subject from the newly released 2026 AI Index report from Stanford University's Institute for Human-Centred Artificial Intelligence (HAI).
If you have time, you should poke around the report.
If you don’t have time, you should still probably read their summary.
Interns are great. Interns kinda suck.
I’ve been repeating an analogy in classrooms and boardrooms that I think helps describe where we are with AI tools today.
The analogy goes like this: “Using AI tools is like having a genius-level intern who has access to all of the world’s publicly available information but who’s never done any work before.”
As the Stanford HAI team suggests, it’s like having a new staff member who is smart enough to win a mathematical olympiad but who also lacks the understanding of your business context, often gets simple tasks wrong, and who can’t tell time.
If you’ve ever hired a summer intern, you probably know what I’m talking about.
The promise of youthful, less corporate-biased, cheap internship labour is often crushed by the amount of time required to train and instruct your new hire on how work in your office is actually done.
Because of their lack of experience, interns will often fill their time with the tasks they're given (as if they can't tell time), taking noticeably longer to complete things you expected them to wrap up in about half the time.
But also, you didn’t frame the task properly, because that’s work too. It’s probably your fault.
The next time you're asked to bring another intern into your team, you think to yourself, "That didn't work out great last time," so thanks, but no thanks.
AI Can Make Mistakes. We Think That’s Funny Until We Don’t.
Although we have a certain amount of error tolerance for things we might ask AI to do or answer in our personal lives (assuming we notice the errors at all), we have much less tolerance for errors in our work lives.
At work, stuff needs to happen. Work needs to get done.
The internet is now filled with listicles and Instagram feeds of somewhat comical errors and some less comical mistakes AI has made.
And the error-plus-training challenge of AI is causing us to get frustrated with the AI tools we’re given, and is partially responsible for a bit of a backlash against AI at work.
At the same time, people who DO use AI tools at work are finding they have to work longer and harder to get their genius-level interns to produce the outputs they desire.
More From HAI
The Stanford HAI team explained that AI continues to expand its capabilities, learning new things, sucking less at others, and achieving higher scores on specific benchmarks across task types.
But our AI tools' capabilities aren't distributed evenly.
Frontier models now match or exceed human capabilities in super-academic and nerdy areas, such as PhD-level science questions, certain forms of reasoning, and competitive mathematics.
Basically, AI is the mathlete you made fun of in high school.
In other tasks, AI lags behind, including learning from video, generating coherent, realistic video, telling time, managing multi-step planning, conducting financial analysis, and answering certain expert-level academic exams.
All We Need Is A Little Patience
Riffing off the immortal words of one of history’s modern-day bards, Axl Rose, “all we need is a little patience. Ohhh, yeah.”
If we reframe our use of AI tools the same way we might sit down with an intern to provide the context and expectations for their work, we might mitigate our own frustration before it happens.
By giving AI context, feedback, additional information to frame the task, and the appropriate resources it needs to complete the work packages we’re assigning it, just as we would with an intern, we are likely to get better results.
You could also adjust your prompting by asking your AI tool what it needs to be better at the tasks you give it.
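To make that concrete, here's a sketch of the difference between a bare request and a framed one (the company details below are invented for illustration):

Instead of: "Write a summary of our Q3 results."

Try: "You're helping me draft an internal update for a 40-person logistics company. Summarize the attached Q3 results in three plain-language bullet points for non-financial staff, under 100 words. Before you start, ask me about anything you're missing."

The second version does for the AI exactly what you'd do for an intern: it sets the audience, the format, and the scope, and it invites questions before the work begins.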
Try it for a while, and then hit reply to this post and let me know how it went.
“There’s going to be two types of companies in this world: Those who are great at AI, and everybody else that they put out of business.”
– Mark Cuban
Protect online privacy from the very first click
Your digital footprint starts before you can even walk.
In today’s data economy, “free” inboxes from Google and Microsoft, like Gmail and Outlook, are funded by data collection. Emails can be analyzed to personalize ads, train algorithms, and build long-term behavioral profiles to sell to third-party data brokers.
From family updates, school registrations, and medical reports to financial service emails, social media accounts, and job applications, a digital identity can take shape long before someone understands what privacy means.
Privacy shouldn’t begin when you’re old enough to manage your settings. It should be the default from the start.
Proton Mail takes a different approach: no ads, no tracking, no data profiling — just private communication by default. Because the next generation deserves technology that protects them, not profiles them.