Your Next Customer Journey May Not Include the Customer. Or You.

I built my new executive assistant in an afternoon. Her name is Marianne B. I built her in Claude.

She summarises emails, reads meeting recaps, tracks whether I have followed up on what I promised, keeps an eye on deadlines and prepares short briefs before meetings. She has also done my grocery shopping. By controlling my browser, she completed an online order and booked it through Bilka To Go.

If I can do that in a couple of hours, I'm not the only one doing it.

Why Marianne B. matters for your business

While you are busy building AI agents to interact with your customers, your customers are building agents to interact with you. How do you think that is going to go for your business?

This is not the future of customer experience. It is now.

Marianne is the new layer between me and the companies I buy from, book with, complain to, renew with or leave. What does YOUR customer journey look like for Marianne B.?

The first "person" you may need to give a good experience to seal the deal is no longer always your customer. It may be your customer's AI agent.

That is one of the biggest shifts coming for customer experience.

Everyone is already talking about how we will use AI in service, support, marketing or analysis. Let's put on the outside-in glasses, because the bigger shift is this:

Customers will use AI before they reach you.

They will ask assistants to compare options, shortlist suppliers, check terms, fill out forms, read reviews on social media, renew contracts, cancel subscriptions, book appointments, raise complaints and handle follow-up.

McKinsey describes agentic commerce as shopping powered by AI agents acting on behalf of consumers, where agents anticipate needs, navigate options, negotiate deals and execute transactions aligned with human intent.

Your future customer journey may not start with a human looking at your website, reading your campaign, comparing your offer, calling your service centre or opening your app. It starts with a prompt. It starts with software deciding whether you are worth their human's time.

The next CX battleground is not the chatbot or the agent or the data

Most AI and CX conversations still circle around the obvious. Chatbots, agents, efficiency, automated support, personalisation and faster analysis of customer data.

Those things matter. They are not the only story.

If an AI assistant is comparing providers on behalf of a customer, it will not be impressed by your brand film, your campaign language or your visual identity.

It will need information it can find, read, compare and act on. Clean product data and prices. Clear eligibility rules and service logic. Understandable terms and cancellation paths. Reliable APIs and reviews it can trust.
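A minimal sketch of what "information it can find, read, compare and act on" means in practice: an agent checking whether a product page exposes structured data (here, schema.org-style JSON-LD) it can parse directly, instead of marketing prose it has to guess at. The page, product name and fields below are invented for illustration; real pages and schemas will vary.

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf = []

# Hypothetical product page: the structured block is what the agent reads.
page = """
<html><head>
<script type="application/ld+json">
{"@type": "Product", "name": "Broadband 100",
 "offers": {"price": "299.00", "priceCurrency": "DKK"}}
</script>
</head><body>Brand film and campaign copy the agent largely ignores.</body></html>
"""

parser = JsonLdExtractor()
parser.feed(page)
product = parser.blocks[0]
print(product["name"], product["offers"]["price"])  # Broadband 100 299.00
```

A company whose prices, terms and eligibility rules exist only as prose or images is, to this kind of reader, effectively silent.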

This is where CX stops being only about the visible customer journey and the feelings we want to create. It also becomes about the machinery underneath. The data, rules, systems and handovers that decide whether customers can actually buy, get help and trust the outcome.

Not instead of emotion, trust and human experience. Underneath them.

The companies that become easiest, safest and most reliable for both humans and their agents to deal with will have an advantage. The companies that are confusing, inconsistent or impossible to compare may lose before they ever know they were considered.

The journey will be mediated by agents. Are you ready for that?

For years, CX has asked what the customer sees, feels, needs and does. That is still the right question. It is no longer enough.

The old journey was simple:

  • Customer searches.

  • Customer compares.

  • Customer buys.

  • Customer needs help.

  • Customer evaluates.


The new journey adds a layer:

  • The AI agent searches on behalf of the customer.

  • The AI agent compares options and shortlists providers.

  • The AI agent acts. It orders, books, cancels, signs up or files the complaint.

  • The customer verifies whether the agent did the right thing.

  • The company has to spot the failure and recover the customer when something breaks.


In an AI-mediated journey, we also need to ask:

  • What did the customer ask their AI to do?

  • Where did the AI search? Which sources did it trust? Which options did it discard?

  • What information did it fail to understand?

  • Where did the customer allow the AI to act, and where did the human need reassurance before handing over control?

  • Where did the company lose the comparison without ever seeing the customer?

This does not make journey mapping less relevant. It makes it more important. But the map has to change.

CX leaders need to map not only visible touchpoints but also the decision architecture around them. We need to know what the customer-powered AI agent sees, reads and understands.

What information is available to Marianne B.? What is structured enough to be used by her? Which rules shape her decisions and their outcome? Which policies make sense to the company but not to the customer or the customer's agent?


This is where many CX functions will face a relevance test.


If CX is limited to journey maps, surveys, campaigns, scripts and "fixing the experience" after marketing, product, IT, data or operations have made the real decisions (and often messed things up, let's be honest), it will not shape the experiences that AI-mediated customers actually meet.


The CX leader does not need to become an API architect. They do need to make agent-readiness a customer-experience requirement.

Qualitative insight becomes more important, not less

AI is excellent at pattern recognition on data you already have. It can summarise calls, cluster complaints, detect patterns in open-text responses, analyse service logs and surface recurring pain points faster than any human team.

That makes qualitative insight more important, not less. Three reasons.

  1. Your quantitative sample is shrinking and skewing.
    When agents handle more interactions, fewer humans answer surveys. Some customers will never be asked. Some will complete the task through an agent and disappear. CSAT, NPS and dashboards become a smaller and more distorted slice of reality. The numbers are still there. The people behind them are not.

  2. AI can tell you what happened. It cannot always tell you why.
    The "why" now sits at the human-AI boundary, and that boundary is the most important thing CX has to understand. Why did the customer delegate this task in the first place? What were they protecting themselves from? What made them trust an AI recommendation, and what made them reject one? When did they stop trusting your agent and ask for a human? When does "efficient" become unsettling? When does a technically solved task still leave the customer feeling exposed, confused or unsafe? Those answers do not live in service logs. They live in conversations.

  3. Silence is invisible to AI.
    A customer may not complain. They may not call. They may not fill out a survey. Their assistant may simply choose someone else next time. That lost business will not show up as a bad CSAT score. It will show up as silence. AI cannot analyse a non-event. Interviews, diary studies and service safaris are the only reliable way to surface the customers who left without telling you.

The future risk is not only that companies will misread what customers do themselves. It is that they will misread what the customer's agent did for them, and never notice the customers who quietly walked away.

Silence is one of the hardest customer signals to manage. It is also the one AI is worst at finding.

CSAT will lie. NPS will lie. Your dashboard will lie. (It already does.)

CSAT and NPS were already misleading you before AI showed up. They measured the customers who answered, not the customers who left or couldn't be bothered.

I sometimes receive seven different NPS surveys the same week. My package delivery service, the local pool, Øresundsbroen, my union, my phone company, the two different customer support chats I have interacted with.

I am fed up. I don't answer. You have no idea what I think. Or why I leave. I could not be bothered to spend my time telling you.


AI does not break these metrics. It exposes how broken they already were.


When AI agents handle more interactions, the people who answer surveys become a smaller and even more distorted sample. Some will be "deflected" by automation without being helped. Some will come back three days later with the same issue, but the original interaction will still look resolved.

Imagine your virtual agent "successfully" changes a customer's tariff, but a bug means the billing system never updates. The case is logged as resolved. No survey is sent. Three weeks later, the customer silently churns.

On the dashboard, the journey was a success. In reality, it cost you a customer.

Your survey score is green. Your customer outcome is red. You will never know.

In an AI-heavy environment, four proof-of-outcome indicators matter more than any score:

  1. Repeat contact after AI resolution.
    Did the customer or their agent come back about the same issue?

  2. Outcome verification.
    Did the promised action actually happen and stay done?

  3. Recovery speed.
    When automation failed, how quickly did the organisation detect and fix it?

  4. Commercial consequence.
    Did the experience affect renewal, retention or lifetime value?
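The first indicator can be made concrete. Here is a sketch of measuring repeat contact after AI resolution from a contact log: for each case closed by automation, did the same customer come back about the same issue shortly after? The field names and the seven-day window are illustrative assumptions, not a standard.

```python
from datetime import date, timedelta

# Hypothetical contact log entries.
contacts = [
    {"customer": "C1", "issue": "billing", "date": date(2026, 3, 1), "resolved_by": "ai"},
    {"customer": "C1", "issue": "billing", "date": date(2026, 3, 4), "resolved_by": "human"},
    {"customer": "C2", "issue": "tariff",  "date": date(2026, 3, 2), "resolved_by": "ai"},
]

WINDOW = timedelta(days=7)  # assumed follow-up window

ai_resolutions = [c for c in contacts if c["resolved_by"] == "ai"]
repeats = 0
for res in ai_resolutions:
    # Same customer, same issue, contact again within the window?
    if any(c["customer"] == res["customer"] and c["issue"] == res["issue"]
           and res["date"] < c["date"] <= res["date"] + WINDOW
           for c in contacts):
        repeats += 1

rate = repeats / len(ai_resolutions)
print(f"repeat-contact rate after AI resolution: {rate:.0%}")
```

A rising repeat-contact rate tells you automation is logging "resolved" on cases that were not, which no survey score will reveal.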


If your leadership team is still asking "what is our score?", the better question is: which customer outcomes are we improving, and what are they worth?

If you want to create business value from your measurements:

Stop tracking a number. Start tracking your problems and what they cost.

What this means for CEOs

AI will expose whether your organisation is truly designed around the customer, or whether CX management is still trapped between functions that optimise their own part of the journey.

Most CX problems are not caused by a lack of insight. They are caused by the fact that no single function owns the full customer outcome. Marketing owns the promise, sales the conversion, product the solution, IT the systems, service the problems, legal the risk and finance the business case. The customer experiences all of it as one company. AI makes that fragmentation harder to hide.

The fix is not a giant CX empire. It is a cross-functional way of running the business around journeys and outcomes. In practice that means three things:

  • A CX leader who sits at the same table as product, technology and commercial leaders.

  • A standing, cross-silo forum (a "journey board") that owns key journeys end-to-end across marketing, sales, product, service, legal, finance and IT.

  • Clear accountability for outcomes on those journeys, not just for individual touchpoints or channels.

Three questions belong on every CEO's agenda now:

  1. Where does CX sit in the organisation?
    If CX cannot influence product, data, IT, service logic and operational decisions, it cannot shape the experiences that will increasingly determine acquisition, retention and trust.

  2. What gets measured and rewarded?
    Quarterly volume, handle time and survey scores will not tell you whether you are winning or losing in AI-mediated journeys. Repeat contact, escalation health, successful outcomes, recovery speed, trust and customer lifetime value are closer to the truth.

  3. Who owns AI behaviour towards customers?
    If the answer is only legal, IT or digital, the answer is incomplete. AI behaviour towards customers is a board-level experience question with commercial, operational, ethical and reputational consequences.

What this means for CX leaders

Your role is not just to improve touchpoints. It is to make the organisation easier to do business with.

You do not need to become a data scientist or an architect. You do need to speak their language well enough to influence the decisions that shape real-world journeys.

Three moves matter now:

  1. Map the AI-mediated journey.
    Do not only map what customers do on your channels. Map what happens before they reach you. How do they search? Which platforms influence them? Which AI assistants do they already use? Which information do those systems rely on? Where might your organisation be rejected, misunderstood or misrepresented? Add the customer's agent to the journey map. Not as a gimmick. As a new actor in the experience.

  2. Make agent-readiness a CX requirement.
    Is your product and service data structured, accurate and accessible? Are your terms and policies understandable? Are your prices and eligibility rules clear? Can customer-side agents complete basic tasks? Can humans verify what happened? Can customers escalate when automation fails? Can you explain the outcome afterwards? These are no longer back-office questions. They are front-stage experience questions.

  3. Build response loops around the moments that matter.
    Pick one or two high-value signals and design the loop properly. A churn-risk signal. A failed onboarding signal. A repeat-contact signal. A failed AI-resolution signal. Then agree across functions. Who sees it. Who decides. Who acts. How fast. How you will know if it worked. That is where AI creates business value. Not in the dashboard. In the organisational response.

The next CX agenda

AI will not remove the need for customer experience. It will remove the safety of doing CX as a soft, survey-led, workshop-heavy function without operational influence.

The organisations that win will not be the ones with the most impressive chatbot demo. They will be the ones that become easier, smarter and safer to do business with, whether the customer arrives as a person, through an agent, or with both.

That is the next CX agenda. Not better automation alone. Better customer outcomes in a world where the customer is no longer always the one doing the work.

This is the conversation we are having at Experience Management Nordic Summit on 7 May 2026 in Copenhagen.

If you work in customer experience and want to be in a room where CX is treated as a business discipline, not a side function, join us at exmsummit.dk.
