29 April 2026

Author | Dirk Janse van Rensburg, Head of On Key Software Solutions

A little while ago, I watched an AI tool take my child's simple drawing and "complete" it into a polished, impressively detailed version. It was clever. It was confident. And it was not entirely true to what my child had drawn. It was a simple reminder of one of the real risks of AI in asset management.

That stayed with me because the same temptation is now showing up in asset management.

Many asset teams are asking AI to make sense of incomplete information: missing supplier fields, inconsistent model data, partial maintenance histories, and gaps in execution records. If the picture is incomplete, why not let AI finish the sketch?

A child’s drawing and its AI-generated interpretation, used to illustrate a simple question with serious implications for asset management: when AI completes the picture, who remains responsible for the truth?

There is a meaningful difference between AI helping us interpret what is there and allowing technology to invent what is not.

AI in asset management: interpretation is one thing. Augmentation is another.

That distinction matters. In asset-intensive environments, decisions shape maintenance priorities, capital timing, confidence in compliance, risk exposure, and how leaders assess operational health. If AI starts “completing” missing truth, and we accept that version because it looks plausible, we are no longer working from a better dataset. We may be working from a more persuasive fiction.

Used well, AI can be enormously valuable. It can identify patterns, surface anomalies, summarise histories, flag missing fields, and help teams focus attention where it matters. That is increasingly part of how strong asset teams will work.

But the risk changes when AI moves from highlighting gaps to filling them in. The moment an inferred field, assumption, or generated explanation begins to behave like operational truth, the line starts to blur. The technology has not just helped us read the picture; it has begun drawing parts of it on our behalf.

When AI in asset management turns a plausible story into “truth”.

I have seen how easily this can happen. A mixed-source analysis meant to reflect different customer realities can be flattened by AI into one coherent narrative that sounds right but is not. Once a generated version of events looks polished enough, people start treating it as fact.

From there, the risk compounds. A likely supplier here. A probable date there. A tidy explanation where uncertainty used to sit. Once those assumptions begin feeding reports, decisions, or future analyses, the organisation can start reinforcing an invented reality. In simple terms, it becomes a circular reference: the machine learns from the version it helped create.

That is not a data problem alone. It is a responsibility problem.

Responsibility cannot be delegated.

Too much of the conversation still gets reduced to “human in the loop”, as though human presence on its own is the safeguard. In practice, what matters is ownership.

Someone still needs to decide what the data says, what it does not say, what may be inferred, and what must remain unresolved until verified. Someone still needs to resist the urge to move quickly because the output sounds polished enough to pass.

That discipline is not theoretical. McKinsey’s 2025 global AI survey found that 51% of respondents at organisations using AI reported at least one negative consequence from AI use, with inaccuracy the most commonly cited issue, reported by nearly one-third.1 The same survey found that high performers were more likely than others to define when model outputs required human validation.

For asset-management practitioners, this should not be read as an argument against AI. Quite the opposite. It is an argument for using it where it is strongest, while being disciplined about where responsibility must remain firmly human.

Let AI flag the blanks. Let it help you interrogate patterns. Let it accelerate analysis. But be careful about allowing it to write assumptions back into the operational record without clear governance, traceability, and accountable review.

That is where coalescence belongs: not in letting machine output and operational truth collapse into one another, but in combining machine speed with human accountability.

That is also why the role of EAM software matters. As AI becomes more capable, organisations need platforms that support disciplined capture, visibility, and accountability around asset information. On Key EAM software should help teams preserve the difference between what is known, what is missing, and what still requires human judgement.

AI may well help complete the picture. But in asset management, responsibility for the truth in that picture still belongs to us.

Reference

  1. McKinsey & Company, The state of AI in 2025: Agents, innovation, and transformation
