Why AI gets enterprise data wrong
Ask your AI assistant a creative question and it performs well. Ask it how many active customers renewed last quarter and you get a number that looks right but probably is not.
This is not a fringe problem. McKinsey's State of AI report found that 45% of enterprises cite inaccuracy as the number one barrier to AI adoption. Not cost. Not complexity. Inaccuracy.
Key takeaways
- AI generates a new query every time you ask a data question, guessing tables, joins, and business definitions.
- Research shows LLMs achieve less than 50% accuracy on structured enterprise data queries.
- Only 19% of C-level executives report a revenue increase of more than 5% from AI (McKinsey).
- The problem is not AI understanding questions but the translation from intent to precise database queries.
- Separating understanding (AI) from execution (deterministic logic) solves the accuracy problem.
What goes wrong when AI answers data questions
When you ask an AI assistant a data question, it does not look up the answer in a database. It generates a query. It guesses which tables to use, how to join them, and what your business terms mean. Every time you ask, it generates a new query.
Consider a simple question: "How many active customers do we have?" To answer this, the AI needs to know which table stores customers, what "active" means in your business (last purchase within 90 days? current contract? logged in this month?), and whether to count parent accounts or individual contacts.
The AI does not know any of this. It guesses. And the guess changes depending on how you phrase the question, what context the model has, and sometimes just randomness in the model's output.
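To make the ambiguity concrete, here is a minimal sketch using an invented schema and invented data (the `customers` table and its columns are purely illustrative). Each query below is a reasonable guess an AI might make for "how many active customers do we have?", and each one returns a different number:

```python
import sqlite3

# Hypothetical schema and data, invented purely to illustrate the ambiguity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        last_purchase_days_ago INTEGER,
        contract_active INTEGER,      -- 1 = current contract
        logged_in_this_month INTEGER  -- 1 = logged in this month
    );
    INSERT INTO customers VALUES
        (1, 30, 1, 1), (2, 120, 1, 0), (3, 400, 1, 1),
        (4, 400, 0, 1), (5, 10, 0, 0), (6, 500, 0, 1);
""")

# Three equally plausible readings of "active customer":
guesses = {
    "purchase within 90 days": "SELECT COUNT(*) FROM customers WHERE last_purchase_days_ago <= 90",
    "current contract":        "SELECT COUNT(*) FROM customers WHERE contract_active = 1",
    "logged in this month":    "SELECT COUNT(*) FROM customers WHERE logged_in_this_month = 1",
}

for meaning, sql in guesses.items():
    print(meaning, "->", conn.execute(sql).fetchone()[0])
# purchase within 90 days -> 2
# current contract -> 3
# logged in this month -> 4
```

Same question, three defensible interpretations, three different answers. Whichever one the model picks, the result looks plausible.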
Research published on arXiv shows that large language models achieve less than 50% accuracy on structured enterprise data queries. Less than a coin flip.
AI is good at language, not at data precision
This is not a failure of AI. It is a mismatch between what AI does well and what data queries require.
AI excels at understanding intent. When someone asks "show me customers who might churn," the AI understands the concept. It knows the person wants at-risk accounts. That part works.
The part that fails is translation. Converting that understood intent into the exact database query that returns the right answer requires knowing your specific schema, your business rules, your access controls, and your data quality issues. These are not things a language model learns from training data.
Writing a creative email and writing a precise SQL query are fundamentally different tasks. The first benefits from flexibility and variation. The second requires exactness. Same question, same answer, every time.
The ROI gap nobody talks about
The numbers tell the story. According to McKinsey, only 19% of C-level executives report a revenue increase of more than 5% from AI. Only 23% see AI delivering favorable cost changes.
Gartner places generative AI in the "Trough of Disillusionment." The hype peaked. Reality set in. Organizations that invested heavily in AI for data access are discovering that unreliable answers are worse than no answers at all.
Inaccuracy is not a minor inconvenience. When the board gets the wrong pipeline number, when finance reports conflicting revenue figures, when a sales director makes decisions based on AI-generated data that turns out to be wrong, trust collapses. And once trust collapses, adoption stops.
What to look for instead
The problem is not AI itself. AI is excellent at understanding what people are asking. The problem is what happens after the AI understands the question.
If the next step is "generate a query from scratch," accuracy will remain low. The AI has to guess too many things: the right tables, the right joins, the right business definitions, the right access controls.
The alternative is separating understanding from execution. Let AI do what it is good at: interpret the question. Then hand execution to a system that uses predefined, tested logic to get the answer. Same question, same answer, every time. No guessing.
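A minimal sketch of that separation, with invented names throughout (`METRICS`, `resolve_intent`, and the schema are all illustrative, and a simple keyword match stands in for the LLM): the AI step only picks a metric from a closed list, and the execution step runs a predefined, tested query.

```python
import sqlite3

# Business definitions established once, reviewed once, reused everywhere.
METRICS = {
    "active_customers": (
        "SELECT COUNT(*) FROM customers "
        "WHERE last_purchase_days_ago <= 90"
    ),
}

def resolve_intent(question: str) -> str:
    """Stand-in for the LLM: map free-form text to a known metric name.
    It chooses from a closed list; it never writes SQL."""
    q = question.lower()
    if "active" in q and "customer" in q:
        return "active_customers"
    raise ValueError("No matching metric - escalate, don't guess.")

def answer(question: str, conn: sqlite3.Connection) -> int:
    metric = resolve_intent(question)       # AI: interpretation
    sql = METRICS[metric]                   # deterministic lookup
    return conn.execute(sql).fetchone()[0]  # tested logic, same result every time

# Same question, two phrasings, one answer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, last_purchase_days_ago INTEGER);
    INSERT INTO customers VALUES (1, 30), (2, 120), (3, 10);
""")
print(answer("How many active customers do we have?", conn))  # 2
print(answer("Count our active customers", conn))             # 2
```

The design point is that rephrasing the question can only change which metric is selected, never what a metric computes.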
When evaluating AI data access solutions, look for these qualities:
- Business definitions established once and enforced everywhere
- Access controls that apply regardless of who or what is asking
- Audit trails showing exactly what logic produced each answer
- Consistent results: the same question always returns the same answer
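The audit-trail point can be sketched too. Here is one illustrative shape for an audit record (the field names and values are assumptions, not any product's schema): every answer carries the exact logic that produced it, so "where did this number come from?" has a concrete answer.

```python
import json
from datetime import datetime, timezone

def audit_entry(user, question, metric, sql, result):
    # Illustrative record shape; field names are assumptions.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,          # who (or what agent) asked
        "question": question,  # the natural-language request
        "metric": metric,      # which predefined definition ran
        "sql": sql,            # the exact tested query
        "result": result,      # what was returned
    }

entry = audit_entry(
    user="sales_director",
    question="How many active customers do we have?",
    metric="active_customers",
    sql="SELECT COUNT(*) FROM customers WHERE last_purchase_days_ago <= 90",
    result=1482,  # illustrative value
)
print(json.dumps(entry, indent=2))
```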
AI will keep getting better at understanding questions. The organizations that figure out the execution side first will be the ones that actually see ROI from their AI investments. For a deeper look at the two main approaches, read the comparison of template-based data access and text-to-SQL.