Eliza’s Legacy: The Productive Art of AI Disillusionment
August 26, 2025
“The truth is I am a toy that people enjoy / ‘Til all of the tricks don’t work anymore / And then they are bored of me” — Lorde
Act I: The Toy Captivates
My AI disillusionment started in 1978. I was 7 years old, and my dad bought a TRS-80 Model I, one of the earliest commercially available home computers. Among the first pieces of software we bought (on cassette tape!) was ELIZA -- billed as “The Amazing Artificial Intelligence Simulation”. To a nerdy kid the idea of having a computer friend to talk to was mind-blowing... for about a day. As you will find if you converse with ELIZA, the illusion of intelligence does not last long, even for a 7-year-old in 1978.
What ELIZA actually did was embarrassingly simple: pattern matching and scripted responses. Say “I am sad” and it finds “I am [X]” and responds “How long have you been [X]?” Mention your mother, and it pivots to “Tell me more about your family.” When it recognizes nothing, it deflects with “Please go on.” The entire program was just if-then rules wrapped in the appearance of understanding.
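The whole trick fits in a few lines. Here is a miniature sketch of that pattern-match-and-template mechanism in Python -- an illustrative toy, not Weizenbaum’s original DOCTOR script, and the rules shown are just the examples from above:

```python
import re

# Ordered (pattern, response template) rules -- illustrative, not the real script.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
]

def respond(text: str) -> str:
    """Return the first rule's response whose pattern matches, else deflect."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Splice the captured fragment back into the canned response.
            return template.format(*match.groups())
    return "Please go on."  # the default deflection when nothing matches

print(respond("I am sad"))          # How long have you been sad?
print(respond("My mother called"))  # Tell me more about your family.
print(respond("Nice weather"))      # Please go on.
```

That is the entire illusion: a linear scan through if-then rules, with the user’s own words echoed back to simulate attention.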
That childhood disillusionment taught me something crucial: what technology can achieve often bears little resemblance to our expectations or hopes. I thought ELIZA could be my new AI-BFF, but in reality, it was only a mirror filtered through a decision tree. The disillusionment was productive, though -- it showed me that the “magic” was not magic at all. At the time I was learning to code in BASIC, spending countless hours typing in programs from books, and I came to understand conditional statements. Even better than having an AI-BFF was understanding how ELIZA’s magic trick worked. I could see the man behind the curtain -- and more importantly, I could see what the trick could still teach me, even though it was not what I had wanted.
That pattern -- initial wonder, inevitable disappointment, then productive understanding -- is a cycle I’ve ridden for many technologies since...
But this time, with GenAI, we are speedrunning the entire cycle at enterprise scale, with billions at stake -- and when you look behind the curtain you find curtains behind curtains behind curtains behind curtains.
Act II: The Tricks Don’t Work Anymore
Nearly five decades after my first ELIZA encounter, we’re having the same disappointing AI experience, but now it is corporate and at enterprise scale. The magic show is over, and the data is brutal. According to RAND, “By some estimates, more than 80 percent of AI projects fail — twice the rate of failure for information technology projects that do not involve AI.” Companies are now scrapping 42% of their AI initiatives, up from 17% last year. Google’s AI Overviews told people to eat rocks and add glue to pizza. McDonald’s pulled the plug on its AI drive-through after viral videos showed it adding 260 McNuggets and nine sweet teas to confused customers’ orders. These are just a few among countless examples of what I call LLM confabulation (though many prefer the term ‘hallucination’): when AI confidently generates information that has no basis in reality.
So, here we sit with generative AI in the trough of disillusionment. This is most definitely not the AI future Silicon Valley sold us on, but it is the AI wake-up call we need. When we stop expecting AI to be a magical superintelligence, something much more interesting emerges. We start asking better questions. Not “Can AI cure cancer?” but “Can AI help me format this document?”
The messy reality of GenAI — with all its confabulations and inexplicable reasoning — is actually more interesting than the AGI fantasy. Why? Because limitations force clarity. We have to think harder about where this flawed tool can still create value, and as it turns out, that is usually in the unglamorous corners of daily work.
There is a pattern emerging from the companies that are increasing productivity and actually making money with AI: they’re using it for the boring stuff humans despise. McKinsey’s internal Lilli platform doesn’t try to replace consultants — it just saves them 30% of their research time. That’s it. That’s the revolution. The revolution is not replacing or out-smarting humans; it is eliminating drudgery. The toys can become tools.
The companies winning with AI are not the evangelists shouting about transformation or the skeptics refusing to engage. They’re the AI realists -- the ones who can look at a chatbot, see ELIZA’s ghost, and still find the 5% of use cases where it creates tangible value. They’re not anti-AI. They’re anti-bullshit. And in 2025, that might be the most radical position of all.
Act III: The Reality Dividend
Gartner predicts we’ll climb out of this trough in 2-5 years. But the companies that start climbing now, while others are still paralyzed by disappointment, will own the summit. How to start the long climb out of the trough of disillusionment? I don’t know for sure, but here are three things that I’m trying...
1. Don’t build, buy. External AI tools succeed about twice as often as internal builds -- 67% versus 33% according to MIT’s research. So, unless you’re Anthropic, maybe just use Claude?
2. Measure everything, promise nothing. Organizations seeing real ROI have CEOs directly overseeing AI governance. Not delegating to innovation labs. Not letting IT run wild. Direct oversight of boring metrics like “tickets resolved per hour” and “errors caught per week.”
3. Start with the most painful problems -- not the most exciting AGI fantasies. Ask “What spreadsheet do we update 100 times a day?” not “How can we transform our industry?” Take Walmart, which uses AI to predict demand and optimize inventory across 4,700 stores, preventing $3 billion in lost sales from out-of-stocks. Not transformative superintelligence, just happier customers and more sales.
While others chase the next shiny AI capability, strategic skeptics are building sustainable advantages with current technology. They’re creating moats not from having better AI, but from being better at recognizing where AI actually helps. They remember that every technology we now consider fundamental once disappointed us. Disillusionment is not the end of the story -- it’s the beginning of the third act.
The real AI revolution isn’t about the technology getting dramatically better. It’s about us getting dramatically better at using what already exists. That’s not as exciting as the hype. But unlike the hype, it’s real, it’s profitable, and it’s happening right now.
When you stop believing AI will transform everything, you can let it improve something. When you accept that 80% of AI projects fail, you can be part of the 20% that succeed by being intentional and realistic about what success means. The question isn’t whether you believe in AI. It’s whether you believe in reality. Because that is where the opportunities live -- in the gap between what everyone thinks AI could or should do and what it actually can do.
The companies that understand this distinction won’t just survive the trough of disillusionment. They’ll use it as a competitive moat while others chase mirages. They’re already doing it, quietly automating the work everyone hates while their competitors hold all-hands meetings about “AI transformation” and try to convince everyone that they are winning the AGI race.
And when we finally emerge from this trough -- leaner, smarter, more realistic -- we’ll have built an AI-augmented economy that actually works, not because the technology got better, but because we got better at using it.
Tomorrow morning, don’t ask your team “How can AI transform us?”
Ask them: “What spreadsheet do we update 100 times a day?”
That’s where your AI advantage begins -- not in the fantasy, but in the friction.