What a 32-year-old research paper taught me about the success of AI initiatives
A 32-year-old research framework reveals why AI implementations fail. Five questions to surface frame incongruence before deployment begins.
A 32-year-old research framework and my own 2006 dissertation fieldwork keep pointing to the same blind spot. Here are five questions to ask before implementing AI.
1. Ask your IT lead, your CEO, and three end users to each describe the purpose of the AI initiative in two sentences. How similar are the answers?
2. How is success defined at 90 days? Is it measured in deployment metrics, business outcomes, or workflow changes? Does everyone involved know and agree on that definition?
3. When does each group expect to see results, and are those timelines compatible?
4. What does each group believe happens to their day-to-day work once this is live? Who is expected to learn new tools, change processes, or build new workflows, and do they know it?
5. Who absorbs the failure if this doesn’t work out?
These aren’t complicated questions. But the answers reveal something most rollout plans never surface.
In 2006 I wrote my MSc dissertation on Enterprise 2.0, the term Andrew McAfee coined that year to describe emergent social software inside organizations. Blogs, wikis, RSS.
That research drew on Wanda Orlikowski’s Technological Frames, a framework she developed at MIT studying a Lotus Notes rollout inside a large professional services firm. Her finding was simple. When different groups hold different beliefs about what a technology is, why it exists, and how it should be used, implementations fail. Not because of the tool. Because of the gap between the maps each group is using.
Orlikowski called it frame incongruence. Different groups, different mental models, no shared understanding of what the technology is for.
She identified three domains where these gaps show up. What people think the technology is. Why they believe the organization bought it. And how they expect it to change their actual work. In her study, technologists saw a “transformation technology” and measured success by deployment numbers. Users didn’t know what it was, received minimal training, and worried about data quality and personal liability. Same tool. Completely different interpretations of what it meant.
I applied that framework to social software deployments in big pharma, financial services, and broadcasting across the UK and US. The pattern held exactly.
At a pharmaceutical firm, a consultant was hired to build internal blogging infrastructure. Leadership shut it down mid-implementation. Not because it didn’t work, but because a 72-page communications policy classified blogs as internet content requiring Director-level sign-off. The consultant described making a business case for a blog as “the equivalent of writing up a business case for a box of pencils.”
At a large European broadcaster, a Senior Director of Knowledge Management took a different approach. He deployed internal forums quietly. He called them “Trojan mice.” Small, unobtrusive, beneath the radar. One executive ended up with 8,000 staff reading his posts regularly.
Same technology. Opposite outcomes. One person didn’t anticipate how leadership would interpret the tool. The other did, and deployed accordingly.
IT sees infrastructure. Leadership sees transformation. End users see another tool they have to learn.
Three groups. Three different maps. Nobody compares notes.
Then the 90-day review arrives and everyone blames the technology.
This pattern has repeated across three technology cycles I’ve studied directly. Groupware in 1994. Enterprise 2.0 in 2006. AI agents in 2026. I’d bet the same frame gaps showed up in CRM, cloud, and mobile rollouts too.
I’m watching it happen right now with AI agent deployments in B2B SaaS. Marketing wants autonomous content workflows. Sales wants AI that qualifies leads. Leadership wants cost reduction. The agent is the same. The frames are completely different.
The reasons people cite at that review aren’t wrong. But they’re downstream of something those five questions can surface before deployment begins.