Most of the AI in CRM conversation is focused on the wrong thing. The emphasis tends to fall on platforms and tools, on the degree of autonomy a system can be given, and on how quickly it can be deployed. Those are reasonable things to think about. But they are not where the real risk sits.
The risk is that AI will make decisions with whatever data you give it. It does not interrogate the inputs it receives or pause when something looks inconsistent. It will work with whatever signal is available, regardless of whether that signal is complete, timely, or even pointing in the right direction. When those decisions are then automated at scale, the cost of getting it wrong compounds quickly.
That makes the more useful question not how advanced the model is, but whether the data beneath it is strong enough to support decisions you would actually stand behind.
What AI actually depends on
There is a version of the AI story that most suppliers are happy to tell. Systems that decide who to target, what to say, when to send, and how to optimise in real time, learning and improving with every interaction. That picture is not wrong as a direction of travel. But it tends to skip over what needs to be true before any of it can work as described.
AI decisioning is only as reliable as the environment it operates within. When the underlying data is fragmented, delayed, or inconsistent, the system does not compensate for those weaknesses. It scales them. The outputs may look coherent, the decisions may appear confident, but they are being made on ground that has not been properly prepared. And the more automated the system becomes, the harder those problems are to see and to fix.
Four things that determine whether it works
If AI decisioning depends on data quality, the next question is what that actually looks like in practice.
In most CRM environments, it comes down to four foundations, all of which are straightforward to describe but often difficult to get right.
- Customer identity resolution is the ability to recognise the same customer across channels, devices, and interactions. When this is weak, everything downstream becomes fragmented. You are effectively optimising against partial profiles, duplicating effort, and drawing conclusions from journeys that are only partially visible. One of the clearest signs of this is when performance varies significantly by channel, but no one can confidently explain the reason.
- Data latency determines how quickly information moves from interaction to action. If key signals arrive hours or days after the fact, your decisioning is always reacting to the past rather than responding to the present. This often shows up in campaigns that feel slightly out of sync with customer behaviour, where the trigger itself is sound but the timing undermines its effectiveness.
- Event and exposure history refers to whether you have a clear and complete record of both what customers have done and what they have been exposed to. Without that, it becomes difficult to separate correlation from cause. You may see patterns in the data, but you cannot say with confidence whether your activity influenced behaviour or simply coincided with it. As a result, teams tend to repeat activity that appears to work, without being able to explain why.
- Causal measurement is the ability to isolate the true impact of your marketing. This is where many organisations struggle most, because it requires moving beyond surface-level metrics and into controlled ways of understanding incrementality. Without it, optimisation becomes guesswork and budget decisions rely on proxies rather than evidence. You will often see this reflected in trading discussions that focus on outputs such as sends, opens, and clicks, rather than the actual commercial impact those activities are driving.
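To make the first of these foundations concrete, here is a minimal sketch of rule-based identity resolution: grouping interaction records that share a normalised email address into a single customer profile. The field names (email, channel) and the email-only matching rule are illustrative assumptions, not how any particular CRM platform does it; production systems typically match on several identifiers and handle conflicts.

```python
# Minimal, illustrative identity resolution: merge interaction records
# that share a normalised email address into one customer profile.
# Field names ("email", "channel") are hypothetical examples.
from collections import defaultdict


def normalise_email(email: str) -> str:
    """Lowercase and strip whitespace so trivially different strings match."""
    return email.strip().lower()


def resolve_profiles(records: list[dict]) -> dict[str, list[dict]]:
    """Group records by a normalised identity key; each key is one customer."""
    profiles = defaultdict(list)
    for record in records:
        key = normalise_email(record["email"])
        profiles[key].append(record)
    return dict(profiles)


records = [
    {"email": "Jane@Example.com", "channel": "email"},
    {"email": "jane@example.com ", "channel": "web"},
    {"email": "sam@example.com", "channel": "app"},
]
profiles = resolve_profiles(records)
# Without normalisation, Jane's email and web touchpoints would look like
# two different customers; with it, they sit on one profile of two records.
```

Even this toy version shows why weak resolution fragments everything downstream: the same customer split across keys means every journey is only partially visible to whatever decisions come next.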
Each of these foundations delivers immediate, practical benefits when improved, from more accurate targeting and better-timed campaigns to clearer reporting and stronger budget conversations. They also happen to be the same conditions required for AI decisioning to work as intended.
The same work solves both problems
For most CRM leaders, AI is not the most urgent issue they are dealing with. There are more immediate pressures, whether hitting performance targets, justifying spend, or demonstrating value within increasingly short timeframes.
In that context, AI readiness can feel like something separate, or even like a distraction from the work that matters today.
In practice, the opposite is true. The work required to support AI decisioning is the same work that improves performance in the near term. Strengthening customer identity resolution improves targeting accuracy. Reducing data latency improves relevance and responsiveness. Building a reliable event and exposure history improves how teams learn from past activity. Introducing causal measurement improves the quality of decision-making across the board.
Causal measurement, in particular, tends to change the nature of internal conversations. When you can isolate the true impact of your activity, trading discussions become more grounded and budget decisions become easier to defend. Over time, this creates a learning layer that compounds, allowing teams to build on what works rather than relying on repetition or assumption.
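The core of causal measurement can be sketched very simply, assuming a randomised holdout design: compare conversion in the group that received the campaign against a control group that did not, and attribute only the difference to the activity. The numbers below are invented for illustration.

```python
# Illustrative incrementality calculation with a randomised holdout.
# Conversion counts and group sizes are made-up example figures.

def incremental_uplift(treated_conv: int, treated_n: int,
                       control_conv: int, control_n: int) -> float:
    """Relative uplift of the treated conversion rate over the control rate."""
    treated_rate = treated_conv / treated_n
    control_rate = control_conv / control_n
    return (treated_rate - control_rate) / control_rate


# 5.5% conversion in the treated group vs 5.0% in the holdout.
uplift = incremental_uplift(treated_conv=550, treated_n=10_000,
                            control_conv=500, control_n=10_000)
# Relative uplift of 10%: only that increment is credited to the campaign,
# not the full 5.5% of conversions the surface metrics would report.
```

In practice you would also test whether the difference is statistically significant before acting on it, but even this bare comparison is what separates "the campaign coincided with sales" from "the campaign caused them".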
At one large multichannel retailer, introducing causal measurement as part of this approach delivered a 40%+ uplift in incremental revenue. Not from new technology, but from being able to identify which activity was genuinely driving value and acting on evidence rather than assumption. That learning layer is what AI builds on. It is not something AI can create on its own.
Where to start
You do not need to address all of this at once, but it does help to have a clear view of where you currently stand.
Plinc’s AI Readiness Checklist is built around these four foundations. For each one it covers what good enough looks like, the warning signs that suggest something is off, a set of sanity-check prompts to run with your team, and one pragmatic improvement you can make without a transformation programme. It takes five minutes to work through and tends to surface the right conversations.