Blog

Why your data and AI investments are not paying off

The budget was approved. The tools were deployed. The consultants delivered their roadmaps. Six to twelve months later, the board is asking a question nobody wants to answer: where are the results?

According to McKinsey's State of AI report, only 19% of C-level executives report more than a 5% revenue increase from AI. Only 23% see AI delivering favorable cost changes. The rest are spending money and hoping the next upgrade fixes things. It will not.

Key takeaways

  1. Only 19% of C-level executives report a revenue increase of more than 5% from AI, and only 23% see favorable cost changes (McKinsey).
  2. Most AI underperformance traces back to the data layer, not the AI models themselves.
  3. Fragmented systems, inconsistent definitions, missing governance, and manual handoffs are the four root causes.
  4. Adding more AI tools on top of a broken data foundation accelerates waste, not value.
  5. Fixing the data layer first is the prerequisite for AI investments to pay off.

The spending is real, the returns are not

Enterprise AI spending has grown year over year since 2023. Budgets for data platforms, analytics tools, and AI assistants keep expanding. But the results have not kept pace. In most organizations, the gap between what was promised and what was delivered is growing, not shrinking.

Gartner now places generative AI in the "Trough of Disillusionment." That is a polite way of saying that many organizations invested heavily based on vendor promises and are now confronting reality.

The instinct is to blame the AI. The models are not mature enough, the reasoning is not good enough, the next version will be better. But the pattern repeats regardless of which model you use. The problem is not the intelligence layer. The problem is everything underneath it.

Four reasons the data layer breaks AI investments

When AI investments disappoint, the root cause almost always sits in the data layer. Not in the AI models, not in the team's skills, not in the vendor's product. Here is where it actually breaks down.

1. Data is scattered across too many systems

The average enterprise runs dozens of data-producing systems. CRM, ERP, marketing platforms, finance tools, HR systems, custom applications. Each one stores data in its own format with its own schema. Asking AI to reason across this landscape is like asking someone to write a report using books in five different languages with no translator.

2. Business definitions are not consistent

What is an "active customer"? Ask marketing, sales, and finance and you will get three different answers. These definitions live in people's heads, in scattered spreadsheets, in tribal knowledge that never gets transferred into systems. When AI encounters this ambiguity, it picks an interpretation. Sometimes it picks the right one. Often it does not. And nobody catches the error until the board deck has the wrong number in it.
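The "active customer" ambiguity is easy to reproduce. Here is a minimal sketch in Python; the customer records, field names, and both definitions are invented for illustration, but the pattern is what plays out inside real organizations:

```python
from datetime import date, timedelta

# Hypothetical customer records: (id, last_purchase_date, has_open_contract)
customers = [
    ("c1", date(2024, 11, 20), True),
    ("c2", date(2024, 6, 1), True),
    ("c3", date(2024, 12, 1), False),
    ("c4", date(2023, 1, 15), True),
]

today = date(2024, 12, 31)

def active_marketing(customer):
    # Marketing's definition: purchased within the last 90 days
    return (today - customer[1]) <= timedelta(days=90)

def active_finance(customer):
    # Finance's definition: holds an open contract, regardless of recency
    return customer[2]

marketing_count = sum(active_marketing(c) for c in customers)
finance_count = sum(active_finance(c) for c in customers)
# Same question, same data, two different answers:
# marketing counts 2 active customers, finance counts 3
```

Both definitions are defensible. Neither is wrong. But until one canonical definition lives in the data layer, every report and every AI assistant silently picks one of them.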

3. Governance is missing or manual

Data governance in most organizations means someone manually reviewing access requests and hoping the rules are followed. There is no automated layer enforcing who can access what data, under what conditions, with what audit trail. Without this layer, every new AI tool is a potential compliance risk.

4. Manual handoffs kill speed and accuracy

Business teams need data. IT teams prepare it. This handoff takes days or weeks. By the time the data arrives, the question has changed or the window has closed. AI was supposed to fix this, but if the AI relies on the same manual preparation pipeline, it inherits the same delays. Faster intelligence on top of slow infrastructure just means you wait slightly less while still getting inconsistent answers.

Why adding more AI makes the problem worse

The natural response to disappointing AI results is to invest in better AI. Upgrade the model. Add another tool. Hire a prompt engineer. This feels productive. It is not.

Every new AI tool you deploy connects to the same broken data layer. It encounters the same scattered systems, the same ambiguous definitions, the same governance gaps. It just encounters them in new and creative ways. Research shows LLMs achieve less than 50% accuracy on structured enterprise data queries. Better models nudge that number upward incrementally. They do not solve the underlying problem.

Worse, each new tool multiplies the problem. Now you have three AI assistants giving three different answers to the same question, each confident, each pulling from a different slice of your fragmented data landscape. Finance's AI says revenue grew 12%. Sales says 8%. The CEO's dashboard says 10%. Nobody knows which one is right.

This is not an AI problem. This is an infrastructure problem wearing an AI mask.

What a functioning data layer actually looks like

The organizations that do see returns from their AI investments share a common trait. They fixed the data layer first. Not by buying another tool, but by building (or adopting) an abstraction layer between their raw data and everything that consumes it.

A functioning data layer has specific characteristics. Business definitions are established once and used everywhere, so "active customer" means the same thing regardless of who (or what) is asking. Access controls are enforced automatically, not through manual review. Audit trails track every data request so compliance teams can verify what happened and when.

Most importantly, the layer separates understanding from execution. AI can interpret questions (it is excellent at that). But the actual data retrieval should run through deterministic, tested logic that returns the same answer every time. Same question, same answer. No guessing.
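The separation of understanding from execution can be sketched in a few lines. In this illustrative example (the template names and SQL are invented), an AI model is only ever allowed to choose which vetted template answers a question; the query text itself is fixed, tested logic that never varies:

```python
# Hypothetical registry of vetted, deterministic query templates.
# An LLM may map a free-form question to a template name, but it never
# writes the query itself: execution always runs the tested SQL below.
TEMPLATES = {
    "revenue_by_quarter": "SELECT quarter, SUM(amount) FROM revenue GROUP BY quarter",
    "active_customers": "SELECT COUNT(*) FROM customers WHERE status = 'active'",
}

def execute(template_name, run_sql):
    if template_name not in TEMPLATES:
        raise KeyError(f"No vetted template named {template_name!r}")
    # Same template, same SQL, same answer, every time
    return run_sql(TEMPLATES[template_name])

def run_sql_stub(sql):
    # Stand-in for a real database call; returns the SQL it would run
    return sql

result = execute("active_customers", run_sql_stub)
```

The AI's job ends at choosing `"active_customers"`. If it picks a template that does not exist, the request fails loudly instead of producing a plausible-looking wrong answer.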

When this layer exists, AI works. Not because the models got smarter, but because the foundation is solid. Every AI tool, every dashboard, every integration consumes the same governed data. Consistency replaces chaos.

The question for leadership is not "should we invest more in AI?" It is "have we built the data foundation that makes AI investments pay off?" If the answer is no, more spending on AI will produce the same disappointing results. Fix the layer underneath first. The returns follow. For a closer look at how different approaches to this problem compare, read the comparison of template-based data access and text-to-SQL.