Tech Finance Update Q2, 2026

Why you’re not ready for Business Intelligence, Machine Learning, or Agentic AI, and what to do about it

It seems that every boardroom conversation in 2026 eventually arrives at the same destination: artificial intelligence. The promise is autonomous agents making decisions, machine learning optimising operations, and business intelligence dashboards replacing gut instinct, perhaps even parts of the workforce. Software vendors are lining up to sell it, and in this climate it is an easy sell. Boards are voting to fund it. And a disturbing number of organisations are discovering, expensively, that they were never ready for any of it.

The reason is almost always the same: it is neither the technology nor the talent. It is the data. More specifically, the absence of the three foundational disciplines that must exist before any intelligent system can function: clearly defined organisational metrics, understood and reliable data points, and rigorously sanitised data. Get these wrong, and the AI investment becomes the most expensive mistake you have ever signed off. Or written off. Most organisations I meet are nowhere close, and the vendors selling them AI tools have no incentive to tell them otherwise.

Pillar One

Organisational metrics, because you cannot measure what you haven’t defined

The first question when considering a Business Intelligence or AI deployment is deceptively simple: what does success look like for your business, and how do you currently measure it? Everyone talks about KPIs, yet the answers are almost always revealing and rarely reassuring.

Metrics are the language an intelligent system uses to understand your business. If that language is inconsistent, ambiguous, or undefined, the system cannot learn from it. Worse, it will learn the wrong things and deliver confident, wrong answers at automated speed.

Consider a service provider firm tracking “billable hours.” Is time recorded at the point of task completion, or when it is entered into the billing system which, in most firms, can be days or weeks later? Does a six-minute unit capture the full context of a client call, or only the call itself, excluding preparation and follow-up? Are write-offs recorded against the original matter or absorbed silently into partner overhead? Now extend this to a professional services firm managing project profitability: is margin calculated on contracted value, on invoiced value, or on cash received? Are internal resource costs allocated at standard rates or actuals? Does the model account for scope creep that was delivered but never billed? These distinctions are not semantic. A machine learning model trained on ambiguously defined billing or project data will identify patterns that do not exist and systematically miss the margin erosion hiding in plain sight.
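The point can be made in a few lines of arithmetic. Below is a minimal sketch, with entirely hypothetical figures, of how the "same" project's margin shifts depending on which revenue and cost definitions the firm has agreed:

```python
# Hypothetical project figures; all names and numbers are
# illustrative, not drawn from any client engagement.
contracted_value = 100_000
invoiced_value = 88_000    # scope creep delivered but never billed
cash_received = 74_000     # some invoices still outstanding
costs_standard = 60_000    # internal resources at standard rates
costs_actual = 71_500      # the same resources at actual loaded cost

def margin_pct(revenue, cost):
    """Margin as a percentage of the chosen revenue basis."""
    return round(100 * (revenue - cost) / revenue, 1)

# One project, three defensible-sounding answers.
on_contract = margin_pct(contracted_value, costs_standard)  # 40.0
on_invoice = margin_pct(invoiced_value, costs_standard)     # 31.8
on_cash = margin_pct(cash_received, costs_actual)           # 3.4
```

Three defensible definitions, three answers ranging from healthy to marginal. Until the organisation picks one and documents it, any model trained on "margin" is learning from noise.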

The history is educational. In 2012, Target’s now-famous pregnancy prediction model worked precisely because the metric framework was iron-clad. Target’s statisticians had defined what they were measuring down to the behavioural SKU level: the purchase of unscented lotion, vitamin supplements, and cotton pads in a specific sequence. Every data point had a crisp definition and a commercial rationale. 1 Contrast this with the UK’s National Health Service, which spent over £10 billion on its National Programme for IT between 2003 and 2011, a programme ultimately abandoned in whole or in part, largely because different NHS trusts defined core clinical metrics differently. ‘Patient episode,’ ‘discharge date,’ and ‘referral-to-treatment time’ meant different things in different trusts. The system could not reconcile what the organisation had never agreed upon. The algorithm was not the problem. The governance was. 2

Before a single line of AI code is written, management must invest in a metric-definition exercise. This is not a technology project. It is a governance project. It requires cross-functional alignment between operations, finance, sales, and the C-suite. This is where every serious data engagement must begin.

Pillar Two

Data points and data quality

When I inform a client that their data is not ready for AI, the response is usually defensive: ‘We have years of data. We have everything in our ERP.’ Volume is not the issue. Quality and relevance are.

Data quality operates across five dimensions that organisations consistently underestimate: accuracy, completeness, consistency, timeliness, and validity. A dataset can be vast and fail on all five simultaneously. According to Gartner research, the average enterprise loses approximately $12.9 million annually due to poor data quality alone. 3 That figure does not account for the compounding effect when that same poor data is fed into a machine learning model that then acts on it autonomously.
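To make the five dimensions concrete, here is a minimal sketch of scoring a single customer record against four of them; the field names, formats, and thresholds are illustrative assumptions, not a standard:

```python
import re
from datetime import date, timedelta

# A hypothetical master-data record; fields are illustrative.
record = {
    "customer_id": "C-1042",
    "vat_number": "MT12345678",
    "country": "MT",
    "last_updated": date(2026, 1, 3),
}
as_of = date(2026, 4, 1)

checks = {
    # Completeness: mandatory fields present and non-empty.
    "completeness": all(record.get(f) for f in ("customer_id", "vat_number", "country")),
    # Validity: the VAT number matches its expected format.
    "validity": bool(re.fullmatch(r"[A-Z]{2}\d{8}", record["vat_number"])),
    # Consistency: the VAT prefix agrees with the country code.
    "consistency": record["vat_number"].startswith(record["country"]),
    # Timeliness: the record was refreshed within the last year.
    "timeliness": as_of - record["last_updated"] < timedelta(days=365),
}
# Accuracy, the fifth dimension, cannot be computed from the record
# alone: it requires an external source of truth, such as a VAT
# registry lookup, which is precisely why it is so often skipped.
```

Note that the hardest dimension, accuracy, is the one a script cannot settle on its own; it needs reference data and ownership, which is an organisational question, not a technical one.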

The Amazon hiring algorithm case remains one of the most instructive failures of the decade. In 2014, Amazon developed an AI-powered recruitment tool trained on a decade of historical hiring data. By 2018, Amazon had scrapped it entirely. The model had learned from data generated in a male-dominated hiring environment. The data points — CVs, interview outcomes, internal hiring decisions — accurately reflected the biases of the humans who created them. The model penalised CVs containing the word ‘women’s,’ as in ‘women’s chess club.’ The data existed in abundance. Its quality, for the purpose of unbiased decision-making, was fundamentally compromised. 4

More recently, Zillow’s algorithmic home-buying venture ‘Zillow Offers’ collapsed in late 2021, with losses exceeding $880 million and the elimination of 25% of the company’s workforce. Its machine learning models, designed to predict residential property prices, were working with data that was lagging, incorrectly weighted, and insufficiently connected to real-time market signals. Zillow had data. It did not have the right data, correctly understood and properly related. The algorithm pursued systematically wrong valuations with mechanical confidence and institutional capital. 5

Data volume may give organisations false comfort. It is the quality, the lineage, and the interconnection of data points that determines whether an AI system delivers insight or industrialises error.

Before deploying any intelligent system, organisations must conduct a rigorous data point mapping exercise. Which data sources feed which processes? What is the lineage of each data point: where does it originate, who enters it, what validation exists at the point of entry? Are data sources internally consistent, and do they cross-validate against one another? These are not IT questions. They are business architecture questions, and they require business advisors to answer them.
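The output of such a mapping exercise need not be elaborate; what matters is that each question above has a written, agreed answer per data point. A minimal sketch, in which the systems, owners, and checks are illustrative assumptions rather than a reference model:

```python
from dataclasses import dataclass, field

# A structured record documenting one data point's lineage, so that
# origin, ownership, validation, and cross-checks are written down
# rather than held in someone's head.
@dataclass
class DataPointLineage:
    name: str
    origin_system: str
    entered_by: str
    entry_validation: str
    feeds: list[str] = field(default_factory=list)
    cross_checks: list[str] = field(default_factory=list)

# Hypothetical example for a professional services firm.
billable_hours = DataPointLineage(
    name="billable_hours",
    origin_system="time-tracking application",
    entered_by="fee earners",
    entry_validation="matter code mandatory; six-minute increments",
    feeds=["billing", "utilisation dashboard"],
    cross_checks=["work-in-progress ledger in the billing system"],
)

# A data point with no cross-checks is a single point of failure
# and an immediate candidate for a reconciliation control.
unchecked = [] if billable_hours.cross_checks else [billable_hours.name]
```

A register like this, maintained per data point, is what turns 'where does this number come from?' from an investigation into a lookup.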

Pillar Three

Data sanitisation and cleaning the house before you move in

The ‘garbage in, garbage out’ principle is as old as computing itself, and it is, regrettably, just as routinely ignored. Data sanitisation, the systematic process of identifying, correcting, and standardising data before it enters any analytical or AI pipeline, is unglamorous, time-consuming, and non-negotiable.

Sanitisation requires both experience and rigour and involves distinct disciplines. Deduplication removes duplicate records that corrupt aggregations and distort pattern recognition. Standardisation ensures fields are formatted consistently — a postcode field should not simultaneously contain ‘MT-1234,’ ‘MT 1234,’ and ‘mt1234.’ Validation rules ensure entries fall within logical parameters. Outlier detection identifies records that are statistically anomalous enough to skew model training. Referential integrity checks ensure that relationships between data entities are coherent — that every invoice links to a valid customer, every production order to a valid material.
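Several of these disciplines can be illustrated in a few lines. The sketch below runs deduplication, standardisation, validation, and a referential integrity check over a toy invoice list; the fields, formats, and rules are illustrative assumptions, not a prescribed pipeline:

```python
import re

# Toy invoice data exhibiting each defect discussed above:
# an exact duplicate, inconsistent postcode formats, a negative
# amount, and an invoice pointing at a non-existent customer.
invoices = [
    {"invoice_id": "INV-1", "customer_id": "C-10", "postcode": "MT-1234", "amount": 500.0},
    {"invoice_id": "INV-1", "customer_id": "C-10", "postcode": "MT-1234", "amount": 500.0},
    {"invoice_id": "INV-2", "customer_id": "C-11", "postcode": "mt1234", "amount": -20.0},
    {"invoice_id": "INV-3", "customer_id": "C-99", "postcode": "MT 1234", "amount": 750.0},
]
valid_customers = {"C-10", "C-11"}

# Deduplication: drop exact duplicate records.
seen, deduped = set(), []
for inv in invoices:
    key = tuple(sorted(inv.items()))
    if key not in seen:
        seen.add(key)
        deduped.append(inv)

# Standardisation: one canonical postcode format, e.g. 'MT-1234'.
for inv in deduped:
    raw = re.sub(r"[\s-]", "", inv["postcode"]).upper()
    inv["postcode"] = f"{raw[:2]}-{raw[2:]}"

# Validation: amounts must fall within logical parameters.
invalid = [inv for inv in deduped if inv["amount"] <= 0]

# Referential integrity: every invoice links to a valid customer.
orphans = [inv for inv in deduped if inv["customer_id"] not in valid_customers]
```

Four records in, three distinct records out, one failing validation, one orphaned. The logic is trivial; the discipline is in agreeing the rules, applying them at the point of entry, and running them continuously rather than once.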

In 2012, Knight Capital Group executed one of the most catastrophic software failures in financial history. Within 45 minutes of deploying a new trading algorithm into a live market environment, the firm had haemorrhaged $440 million. While the proximate cause was a flawed deployment that activated legacy code, the conditions that enabled it — inconsistent data states between live and legacy systems, unvalidated data inputs crossing system boundaries, years of accumulated technical debt in the data environment — are precisely the conditions that data sanitisation disciplines are designed to prevent. 6 Knight Capital ceased to exist as an independent firm within weeks.

Healthcare provides an equally sobering example at the intersection of data sanitisation and algorithmic consequence. A 2019 study published in ‘Science’ demonstrated that a widely deployed algorithm used to direct patients toward high-risk care management programmes was systematically undertreating Black patients. The cause was a proxy variable — annual healthcare cost — that reflected historical underinvestment in Black communities rather than actual health need. The data had not been cleansed of its structural bias before being used to train the model. Algorithmic discrimination became embedded in clinical decision-making, scaled across millions of patients. 7

Data sanitisation is not a one-time project: it is a mindset and an ongoing operational discipline. It requires data governance frameworks, designated data stewardship roles, agreed-upon data standards, and validation at the point of data entry. Most organisations have none of these in place when they first engage with an AI vendor. They are, in effect, attempting to construct a skyscraper on an unexcavated site.

Technology Is Not Plug-and-Play

Next, next, next, done. Not. 

The technology industry has a vested commercial interest in making AI appear simpler than it is. ‘Connect your data, deploy in days, see results immediately.’ This is a compelling sales narrative. It is also demonstrably false for any organisation of meaningful complexity. It may yet prove the undoing of the AI bubble now inflating stock markets, a bubble that rides on the perception of an easy way out of otherwise tedious work.

Business intelligence platforms, machine learning frameworks, and agentic AI systems are sophisticated amplifiers. They amplify whatever exists in the data environment they are given access to. Feed them structured, well-defined, sanitised data and they perform with genuine power. Feed them the average enterprise data environment, with its years of inconsistent entries, undefined metrics, and unvalidated sources, and they reproduce at scale the same errors and biases that always existed, only faster, with greater reach, and with the dangerous credibility of algorithmic authority.

The 2023 IBM Global AI Adoption Index found that 42% of enterprises cited data complexity and data quality as the primary barrier to AI adoption — ahead of cost, skills shortages, and regulatory concerns combined. 8 Organisations are not failing to benefit from AI because the tools are immature. They are failing because their data foundations are.

The Three Pillars

Where expert advisors add irreplaceable value

This is where experienced management consultants, not software vendors or internal IT teams, become the critical enabler. The three pillars I have described are not, at their core, technology problems. They are organisational, strategic, and cultural problems that happen to have a technical expression.

An external consultant brings three capabilities that internal teams almost universally lack.

First, objectivity: the ability to assess a data environment without the political constraints that prevent internal teams from naming poor practices, challenging legacy decisions, or holding senior stakeholders accountable for the quality of the data their departments generate.

Second, cross-sector pattern recognition: having diagnosed data environments across manufacturing, financial services, healthcare, retail, and professional services, an experienced consultant identifies immediately which problems are cosmetic and which are structural — and which are being actively obscured.

Third, durable frameworks: the ability to design and implement metric governance protocols, data quality standards, and sanitisation disciplines that outlast the engagement and embed themselves in the organisation’s operating model.

The organisations extracting real value from AI in the next five years are not those with the most advanced technology but the ones that did the foundational spadework first.

If you are in doubt, reach out for a Data Readiness Assessment: a structured exercise that maps an organisation’s current metric definitions, data source architecture, data quality profile, and sanitisation maturity against the specific requirements of the AI or BI capability being targeted. We do not recommend technology. We assess readiness for it, and we are consistently direct about what we find.

The verdict is usually the same: our clients are further from AI-readiness than they believe, and closer than they fear if they are willing to do the trenchwork first. The algorithm is not going anywhere. Neither is the competitive advantage it confers on the organisations that deploy it correctly. The question is not whether your business will be affected by this revolution.

The question is whether you will be among those who shaped it, or among those who paid to learn why you were not ready.

ABOUT US

Credence operates at the intersection of strategy, execution, and governance. We work with owners and leadership teams who need clarity under complexity, and support decisions where accountability matters. Our role is not to advise from a distance, but to design, execute, and carry outcomes alongside our clients.

ABOUT THE AUTHOR

Damian Xuereb is a Director at Credence, specialising in technology-enabled business transformation, financial systems, and mergers and acquisitions.

You can get in touch with Damian via email, or through his LinkedIn page.