Carl-Johan Nakamura: Why enterprise AI keeps failing — and what leaders can do about it
Serial entrepreneur and former Chief Data Officer Carl-Johan Nakamura on why enterprise AI keeps failing, the brutal first weeks of a CDO, and why regulation may finally force boards to pay attention
Carl-Johan Nakamura has spent nearly two decades at the intersection of data, AI, and enterprise leadership — serving as CIO, COO, and CDO at organizations including Siemens Healthcare, IBM, and ZEISS. Today he is CEO of AI81 Works, an executive search and consultancy firm helping companies find the right talent to make AI work in practice.
We spoke about what’s really holding enterprise AI back, and what it will take to change it — from boardroom culture to the first three weeks on the job as a CDO.
From data mining to the first unofficial CDO office
Nakamura’s path into data and AI began in 2007, when he was managing software implementations at US hospitals for Siemens Healthcare. “Back then it wasn’t called AI or machine learning — it was called data mining and decision support systems,” he recalls. What struck him early was a lesson that remains central to his thinking today: “if the data foundation isn’t right, nothing built on top of it can be trusted.”
One experience in particular became a turning point. Three different teams within Siemens were measuring the same customer satisfaction KPI against the same dataset — and arriving at three different ratings: red, yellow, and green. “That inconsistency became my entry point into business intelligence and analytics,” he says.
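The failure mode is easy to reproduce. Here is a minimal sketch — with made-up survey data and hypothetical per-team definitions, not Siemens' actual metrics — of how three teams can score the very same dataset red, yellow, and green:

```python
# Hypothetical illustration: three teams rate the SAME customer
# satisfaction data differently because each uses its own KPI
# definition and its own traffic-light thresholds.

scores = [5, 4, 2, 5, 3, 1, 4, 5, 2, 4]  # made-up 1-5 survey responses

def rating(value, green_at, yellow_at):
    """Map a KPI value to a traffic-light rating using one team's thresholds."""
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

# Team A: mean score, green at >= 3.5
team_a = rating(sum(scores) / len(scores), green_at=3.5, yellow_at=2.5)

# Team B: share of "satisfied" responses (4 or 5), green at >= 70%
satisfied = sum(1 for s in scores if s >= 4) / len(scores)
team_b = rating(satisfied, green_at=0.7, yellow_at=0.5)

# Team C: share of "top-box" responses (only 5s), green at >= 50%
top_box = sum(1 for s in scores if s == 5) / len(scores)
team_c = rating(top_box, green_at=0.5, yellow_at=0.35)

print(team_a, team_b, team_c)  # → green yellow red
```

Same data, three defensible methodologies, three contradictory ratings — exactly the inconsistency that pulled Nakamura into business intelligence.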
What followed was unusual for someone inside a large corporation. Nakamura wrote a 45-page business plan — the kind of thing you’d expect from a startup founder — for what he called the “Office of the Chief Data Officer.” He went door to door pitching it to senior executives, and the plan eventually became the first unofficial CDO office at Siemens Health Services.
Why ‘81%’ — and the cost of getting the hire wrong
The name of Nakamura’s company encodes a core conviction: for AI to truly work in any organization, at least 81% of the people need to be engaged, educated, and genuinely convinced of its value. “You need 81% of the people drinking the Kool-Aid,” he says — half-jokingly, but with a serious point. That belief gave the company its name: AI81 Works.
The company operates primarily as an executive search and strategy consultancy firm, helping organizations fill data and AI leadership roles. Nakamura is blunt about why this matters: the field has enormous churn, and most companies underestimate the cost. A data governance leader who joins for two and a half years typically needs six months to onboard and another six to learn the local context. Add the reality that many decide to leave months before they actually exit, and the company may be left with barely 12–15 months of truly productive value. Getting the hire right the first time isn’t optional — it’s existential.
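The arithmetic behind that warning can be made explicit. A back-of-the-envelope version of the tenure math — all figures are from the article except the 3–6 month disengagement window, which is an assumption standing in for "many decide to leave months before they actually exit":

```python
# Rough model of how a 2.5-year data-leadership tenure shrinks to
# 12-15 months of productive value. The disengagement window is an
# assumed range, not a figure from the article.

tenure_months = 30        # a two-and-a-half-year stint
onboarding = 6            # months to onboard
local_context = 6         # months to learn the local business context
disengaged = range(3, 7)  # assumed 3-6 months of checked-out time before exit

productive = [tenure_months - onboarding - local_context - d for d in disengaged]
print(f"{min(productive)}-{max(productive)} months of truly productive value")
# → 12-15 months of truly productive value
```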
The iceberg problem — and the data janitor trap
When asked where the biggest friction points lie, Nakamura returns to what he calls the iceberg analogy. Executive boards across industries are investing heavily in the glamorous, visible tip — the AI models, the chatbots, the dashboards. Meanwhile, the vast data foundation underneath goes ignored.
“The perception building up among executives is that AI can solve pretty much anything.” Technically, there is enormous potential. But applying these tools inside enterprises burdened with decades of process debt and data debt — organizational, not just technical — is a different story. “It’s no wonder that figures as high as 95% get cited for enterprise AI project failure,” he says. “That’s with emphasis on enterprise AI.”
He also describes what he calls the “data janitor trap” — a pattern he has encountered in every CDO role, whether leading a team of 10 or 500. Colleagues would approach him with requests better suited to a help desk: a nightly job that hadn’t run, a dashboard that wouldn’t refresh. “The CDO role gets reduced to being the Chief Dashboard Officer — when it should be about strategic enablement.”
The first three weeks define everything
Nakamura believes the well-known Gartner advice about “the first 100 days” is far too generous. “As a first-time or second-time CDO in a legacy business model and company, your success or failure will be evaluated in the first three weeks,” he says.
He speaks from experience. When he was appointed as the first CDO at ZEISS, his approach was to spend his first weeks listening rather than pitching — meeting the CEO, the CFO who had hired him, and key business-side SVPs, letting them do most of the talking.
The strategy paid off. By the end of week one, he had built relationships with several influential leaders, earning an unexpected invitation to address the executive board in just his second week. Those early moves positioned him as a strategic enabler rather than a data janitor stuck on defense.
The executive AI literacy gap
Nakamura sees a broader transformation underway in what it means to be a business executive. “The modern business executive won’t be successful doing what they used to do,” Nakamura says. “The modern executive needs to dramatically raise their AI literacy.”
Yet almost nobody at the top is doing so. He recently participated in discussions about bringing a top-tier university’s corporate board AI program to Europe, and the pattern was clear: mid-level leaders attend, but at the board level, almost nobody signs up. “They think it doesn’t touch them,” he says.
That mindset is starting to crack — driven primarily by regulation. The EU AI Act, Canada’s AIDA, and the US NIST framework are forcing boards to engage with AI governance. “Europe is taking this more seriously from a responsible and ethical perspective than anyone else,” he says. Just as with GDPR, he hopes — but this time with real teeth.
Two bets for the next 18 months
Looking ahead, Nakamura identifies two factors that will separate leaders from laggards.
The first is responsible AI as a brand. As personal data grows more valuable and data ownership more contested, organizations that embed responsible AI into their corporate DNA — and communicate it externally — will gain both trust and competitive edge.
The second is agentic fluency. As agentic AI matures, winning organizations will move beyond “data literacy” toward something more fundamental: orchestrating human and AI workforces together. “Some firms are already hiring agentic AI systems, giving them employee ID numbers, and onboarding them alongside human employees,” he notes.
But none of this happens by default. In Nakamura’s view, it takes senior leaders who educate themselves first — and then drive the capability through the organization.