Decisions, Decisions: Getting Ready for AI Decisioning
Stuart Russell, Chief Strategy Officer at Plinc, hosted a 30-minute webinar cutting through the AI decisioning conversation to focus on what CRM and loyalty teams actually need to do, and why the foundations required aren’t as daunting as the hype suggests.
AI decisioning has become one of the most talked-about topics in CRM and customer marketing. Platforms are launching native functionality, leadership teams are issuing mandates to ‘adopt AI’, and the commentary, from LinkedIn posts to big platform keynotes, has reached a kind of fever pitch.
But for most CRM and loyalty professionals, the reality looks rather different. The day job hasn’t slowed down. Teams are still managing volume versus relevance, trying to hold onto budget, fighting for headcount, and working through the unglamorous business of keeping customer data clean and connected. Into that context, AI decisioning can feel like one more thing to absorb, and a lot of the loudest voices in the room aren’t necessarily speaking to the world those professionals actually inhabit.
That’s the gap this webinar was designed to address. The argument isn’t that AI decisioning is overhyped or that teams should slow down. It’s that the path to getting there is more accessible than it’s often made to sound, and that much of the groundwork looks a lot like problems CRM teams are already working on.
Watch the webinar in full below. If you’d rather read than watch, we’ve also shared a summary of the key themes underneath.
Watch the Webinar
What is AI decisioning, and why now?
AI is a genuinely difficult conversation to land consistently with a CRM audience. The gap between small, agile businesses dipping into AI for specific use cases and large enterprise organisations navigating governance and process is wide. Personal experience shapes opinion dramatically. And the loudest industry voices, big platforms and LinkedIn evangelists alike, are often coming from a very specific vantage point that doesn’t always reflect the complexity of B2C or enterprise customer marketing.
A useful working definition of AI decisioning is this: using AI to automate and personalise decisions in real time, determining the next best action for each individual customer, whether that’s which message to send, which channel to use, or when to make an offer. The shift it represents is from fixed rules, static segments and pre-planned journeys to a world of real-time optimisation, continuous experimentation and outcome-based actions. It’s a direction the industry has been heading in for years, but one that’s become increasingly viable to deliver at scale with the emergence of agentic AI.
The mandate problem is real too. Most marketing teams are being told to adopt AI in some shape or form, but often without clear use cases, defined ownership or an agreed view of what success looks like. And the most common use case being pushed, efficiency and cost reduction, risks building AI foundations that are too narrow to unlock anything genuinely new.
Two ways to think about AI decisioning
Two practical examples help bring the concept to life. The first is campaign-level: a customer of a fashion retailer who has browsed a new season launch but hasn’t purchased. That customer could qualify for several campaigns simultaneously: loyalty communications, lapse prevention, abandonment triggers, BAU trade emails. The challenge isn’t knowing which campaigns exist; it’s having an authoritative way of deciding which one wins and why.
AI decisioning addresses this by drawing on the full first-party data picture (complete customer history, a proper definition of value, all available behavioural signals) and making a decision in the moment that can learn and update based on what actually happens next.
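To make that arbitration step concrete, here is a minimal and entirely hypothetical Python sketch: several campaigns qualify for one customer, and a scoring function, standing in for a learned model of incremental response, decides which one wins. The campaign names, eligibility rules and scores are all invented for the example; this is an illustration of the pattern, not anyone's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Campaign:
    name: str
    qualifies: Callable[[dict], bool]  # eligibility rule for one customer

def pick_next_best_action(customer: dict,
                          campaigns: list[Campaign],
                          score: Callable[[dict, Campaign], float]) -> Optional[Campaign]:
    """Choose the one eligible campaign with the highest predicted
    incremental value, instead of letting static priority rules decide."""
    eligible = [c for c in campaigns if c.qualifies(customer)]
    if not eligible:
        return None
    return max(eligible, key=lambda c: score(customer, c))

# Hypothetical customer mirroring the example above: browsed the new
# season launch, in the loyalty programme, hasn't bought for 40 days.
customer = {"browsed_new_season": True, "loyalty_member": True, "days_since_purchase": 40}

campaigns = [
    Campaign("loyalty_update", lambda c: c["loyalty_member"]),
    Campaign("lapse_prevention", lambda c: c["days_since_purchase"] > 30),
    Campaign("browse_abandonment", lambda c: c["browsed_new_season"]),
]

# Stand-in for a learned model: invented per-campaign incremental scores.
def score(c: dict, campaign: Campaign) -> float:
    return {"loyalty_update": 0.4, "lapse_prevention": 1.1, "browse_abandonment": 2.3}[campaign.name]

winner = pick_next_best_action(customer, campaigns, score)
print(winner.name)  # browse_abandonment
```

In a real system the `score` function would be the learned model, updated continuously as outcomes come back; the arbitration logic around it stays this simple.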
The second example is more strategic: the Monday trade meeting. Currently, these meetings are typically manual, data-heavy, and slightly out of date by the time they happen. Decisions are made by whoever speaks loudest. A different picture is possible: automated reporting landing in inboxes ahead of the meeting, AI-generated recommendations ranked by confidence and risk, and department-level agents able to run live audience counts in the room. Whether that’s three months or three years away will vary by organisation, but it’s a clear direction of travel, and the decisions being made now will determine how quickly teams can get there. Trust and transparency are no longer separate conversations either: as personalisation becomes more intelligent and more automated, explainability and openness will be essential to maintaining confidence.
The data foundations that make it possible
Getting ready for AI decisioning doesn’t require a deep understanding of agentic AI or complex algorithm design. It starts with data foundations, and there are five areas that matter most.
Customer identity and data connectivity determine whether decisioning is happening at a channel level or a person level. Matchability, meaning the confidence with which signals from different sources are attributed to the same individual, shapes the quality of every decision the system makes. Event and exposure history is the memory that allows AI to make genuinely informed choices, rather than acting on a limited slice of recent behaviour. Data latency is about matching refresh frequency to use case: not everything needs to be real-time, and the wrong assumption here can create unnecessary cost and complexity. And insight accessibility, meaning how easily teams can get into the data to interrogate it, determines whether the operating system is genuinely usable or just theoretically complete.
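To make the matchability point concrete, here is a toy Python sketch, with invented field names and data, of stitching signals from two channels into one profile keyed on an identifier. The signal with no identifier can only ever inform channel-level decisions, not person-level ones, which is exactly what match rate measures.

```python
# Hypothetical events from two sources; "email" is the join key.
email_events = [{"email": "ana@example.com", "event": "opened_campaign"}]
web_events = [
    {"email": "ana@example.com", "event": "browsed_new_season"},
    {"email": None, "event": "browsed_sale"},  # anonymous: no identifier to match on
]

profiles: dict[str, list[str]] = {}
matched = unmatched = 0
for e in email_events + web_events:
    if e["email"]:
        # Signal attributed to a known individual: usable for person-level decisions.
        profiles.setdefault(e["email"], []).append(e["event"])
        matched += 1
    else:
        # Signal with no identity: invisible to person-level decisioning.
        unmatched += 1

match_rate = matched / (matched + unmatched)
print(profiles["ana@example.com"], f"match rate {match_rate:.0%}")
# ['opened_campaign', 'browsed_new_season'] match rate 67%
```

Real identity resolution involves fuzzy and probabilistic matching across many keys, but the consequence is the same as in this toy version: every unmatched signal is behaviour the decisioning layer cannot see.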
Linked to all of this is the question of where the decisioning brain actually sits. Platform-based approaches offer speed and integration but are constrained by what data flows into them. Internally held decisioning capability can draw on the full customer history but requires more build, with partner-built capability sitting somewhere in between. Most organisations will end up with some combination of all three: platform, partner-built and internal. The choices made now will shape how much flexibility they have later.
The causal learning loop: the piece most teams are missing
The causal learning loop is the layer that sits between making a decision and making a better decision next time. It’s also the piece that connects AI decisioning back to CRM fundamentals most directly.
The mechanics are straightforward: a decision is made (who to target), an action is taken (a campaign is sent), and the outcome is measured against a counterfactual: what would have happened without the communication. That causal signal feeds back into the system. The model improves. Decisions get better.
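The loop can be sketched in a few lines of Python. This is a simulation with made-up response rates, not a real measurement pipeline: a treated (target) group and a held-out control group are compared, and the gap between them, rather than the treated group’s raw revenue, is the causal signal that would feed back into the model.

```python
import random

random.seed(0)  # deterministic for the example

def simulate_customer(treated: bool) -> float:
    """Hypothetical customer: some spend happens anyway (the counterfactual);
    the campaign adds a lift on top for a subset of treated customers."""
    baseline = 10.0 if random.random() < 0.30 else 0.0       # buys regardless
    lift = 4.0 if treated and random.random() < 0.50 else 0.0  # caused by the campaign
    return baseline + lift

target = [simulate_customer(True) for _ in range(10_000)]    # received the campaign
control = [simulate_customer(False) for _ in range(10_000)]  # held out

avg = lambda xs: sum(xs) / len(xs)

# Naive reading: credit the campaign with everything the target group spent.
naive_revenue_per_customer = avg(target)

# Causal reading: only the gap over control is incremental.
incremental_revenue_per_customer = avg(target) - avg(control)

print(f"naive {naive_revenue_per_customer:.2f}, incremental {incremental_revenue_per_customer:.2f}")
# incremental comes out near 2.00 (0.5 probability x 4.0 lift), far below the naive figure
```

Optimising on the naive figure rewards targeting customers who would have bought anyway; optimising on the incremental figure is what makes the loop causal.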
Without this loop, AI decisioning optimises on correlation, not causation. A model that predicts a customer will buy socks sends them a socks email; they buy socks. But would they have bought them anyway? Without causal measurement, there’s no way to know. Equally, without the right outcome signal, AI will optimise on what’s easy to measure (clicks and opens) rather than what actually matters to the business: incremental revenue, profit, margin.
“Target and control, incremental measurement: these concepts are our opposable thumbs. They’re the things that really make us different from other marketing disciplines. The ability to put pure marketing science into what we do.”
The implication is significant. Target and control measurement, which many CRM teams treat as best practice to aspire to rather than infrastructure to maintain, becomes the essential prerequisite for AI decisioning to work properly. The irony is that none of the causal learning layer is actually AI. But it’s exactly what AI is going to depend on.
A real-world example: 40% uplift in incremental revenue
To bring this to life, here’s a real example from a major multi-category retailer Plinc has worked with over several years. The CRM schedule at this retailer is complex: a large automation playbook, a portfolio of trigger communications, and a hectic trade-promotion-led calendar mean that at any given time, a customer could qualify for multiple campaigns simultaneously. All of that was being governed by manual static rules.
Plinc built a model that drew on the retailer’s bank of causal learning, long-term target and control measurement tracking incremental revenue, always on, to determine the likelihood of incremental response for each qualifying campaign, for each customer. The logic sat centrally in the client’s domain and was designed to improve continuously through reinforcement learning.
The result was a consistent 40% uplift in incremental revenue for customers whose campaign priority had been resolved by the model. When multiple campaigns competed for the same customer, the system identified which one would actually move the needle, and kept getting better at it.
Where to focus now
There’s a provocation worth sitting with, borrowed from mathematician Richard Hamming, who used to challenge scientists by asking: you’ve told me what you’re working on, but what’s the biggest problem in your field? And why aren’t you working on that?
The version for CRM teams: we already know what the big problems are. How do we manage volume without killing relevance? How do we earn the right to grow our budget and do more of what we know works? Causal measurement addresses both directly: it tells you when more volume helps and when it doesn’t, and it gives you the evidence to defend or reallocate spend credibly. The argument to finance becomes sharper and more concrete.
And almost incidentally, by doing that work, teams create exactly the learning infrastructure that AI decisioning is going to require. The path to AI decisioning doesn’t start with agentic workflows or algorithm selection. It starts with the things CRM teams already know they should be doing: done consistently, operationalised, and fed back into the system.

Thank you to everyone who joined us live. For those who couldn’t attend, we hope this recap gives you a taste of the actionable insights we covered.
Plinc has put together a set of resources on AI decisioning and the data foundations that make it work, covering everything from creating the AI learning layer to practical checklists for CRM leaders and a machine learning playbook. To access them, simply answer a couple of quick questions first so we can point you to the content most relevant to where you are. It takes less than a minute.