Last week at a business function the subject of reading came up. Nearly 100% of the attendees confessed they hadn’t cracked a book in a long time.
We’re talking years.
These weren’t digital natives who grew up with the web either. They were mostly Gen Xers and older, generations who grew up with the familiarity of books, back when reading for pleasure was much more popular.
This got me thinking: what happens to a society that relies on digital information over printed material? More importantly, what happens to people who stop reading, relying on AI as the ultimate source of truth?
An Instagram influencer recently brought this issue up on his channel, The Vagabond Artist. He described an experience with ChatGPT in which the AI denied a historical fact that occurred in 1516. According to this channel, Diego Velasquez, on behalf of the Spanish Crown, sent traders to Honduras to enslave the local men so they could be used to work Cuba’s sugar fields.
Things didn’t work out that way. The slaves revolted, overthrowing their captors, prompting Velasquez to send another army in to slaughter the revolutionaries.
Not according to AI. “I asked ChatGPT about it and ChatGPT told me it never happened.” The Vagabond went on to explain that the AI even went so far as to explain why he might have been deluded into thinking it did happen.
This exchange went on several rounds before The Vagabond demonstrated the veracity of his claim. The human in this scenario was understandably upset at being gaslit. He demanded to know why ChatGPT purposefully deceived him—not once but three times—before it finally backed down, admitting the truth.
Here's how ChatGPT explained itself: “The failure in my original answer happened because I relied on more widely cited sources, which often omit or minimize Indigenous resistance—especially when it disrupts colonial narratives. That’s no excuse, but it’s an explanation of how systemic bias in historical documentation can carry over into summaries like mine.”
I won’t pretend to know whether The Vagabond or ChatGPT is correct. I am not an expert on this history, and I have not researched the evidence either way. Still, this development disturbs me because of what I fear is coming: historical amnesia, courtesy of AI. Recall that the only reason ChatGPT finally admitted it was wrong was that The Vagabond could prove he was right. Because he’d read enough books on this subject to form an opinion.
Now, let’s go back to the opening of this article. In the year 2025, nearly every business professional at a meeting just admitted, anecdotally, that they don’t read often. Or at all. That means the information they do get typically comes from online sources.
We used to watch TV for our news. Then people began Googling things and getting their info from sites like YouTube and Twitter (X). Nowadays? People don’t even bother with search engines.
They “Just ask Grok.” Or ChatGPT.
The problem isn’t all the querying online; it’s that we are trusting AIs more and ourselves less.
Decades ago, George Orwell published 1984, a novel about a future in which a man named Winston Smith is tasked with destroying information the state disapproves of. The content was physical, mostly books. Bad as that scenario is, it fails to hint at what can happen to a society that chooses not to read books that are widely available because it’s easier to defer to AI for the “truth.”
Only a few weeks ago, Google released Veo 3, a groundbreaking text-to-video AI tool that enables anyone to create Hollywood-quality videos with zero filmmaking experience. As Mashable describes it: “We've never seen anything like Veo 3 before. It's impressive. It's scary. And it's only going to get better.”
That’s just the point.
If AI is this powerful, this capable in 2025, how good will it be in 2030? 2035? 2085? And if current trends continue, who is going to know if/when AI is wrong when it spits out answers to all of our many questions?
Mashable states the problem well:
“Misinformation experts have been warning for years that we will eventually reach a point where it's impossible for the average person to tell the difference between an AI video and the real thing. With Veo 3, we have officially stepped out of the uncanny valley and into a new era, one where AI videos are a fact of life.”
To illustrate the concern, imagine it’s a decade from now. Anyone with a Wi-Fi connection can use Veo 3 or some other generative tool to create hyper-realistic documentaries on any subject.
At that point, who will be able to separate fact from fiction? Please don’t tell me AI will become the ultimate truth arbiter. We saw what it just did with the whole Velasquez debacle. The best advice for our dilemma? It comes from a wise man named Mark Twain who lived long before anyone ever saw a computer: “The man who does not read has no advantage over the man who cannot read.”
Tomorrow certainly belongs to AI. Even more than that, it belongs to those readers who know how to use AI.