What happens when your agent makes decisions not just with you, but for you?
What does it mean for human agency?
Intelligent Interfaces is a sister newsletter to UNCX, extending the lens from behaviours of consumers and future of brands and consumers to how AI is reshaping human psychology, behaviour, and our relationship with technology. It’s for strategists, designers, researchers, and humans trying to understand what happens when the things we use start feeling like relationships.
UNCX, my newsletter for experience strategists, continues to publish weekly.
In my previous Intelligent Interfaces issue I promised to explore the idea that AI agents may soon know us better than we know ourselves.
Because there’s a tension: what happens when your agent makes decisions not just with you, but for you?
What does that mean for your sense of agency, choice, and control - not just as a consumer, but as a citizen?
Who's responsible when a system shapes your path?
Does choice still matter when it's optimised away?
And what should brands, engineers, and strategists do as these systems become more predictive, persuasive, and personal than ever before?
Cognitive assistants and emotion regulators are remaking our sense of self.
Digital avatars and robots are transforming the way we interact with one another.
Augmented reality, virtual reality, mixed reality, and haptic suits are transforming our perception of the physical world.
Genetic engineering and microbiome manipulation may reshape human biology itself.
There are many mind-changers today, and more on the way; it’s a thought-provoking idea that our descendants will almost certainly experience being human in a very different way than we do today.
I wonder if that will be a blessing or a curse for them.
We can only guess where this revolution will take us, seeing ourselves as conscious beings in a world of other beings so overwhelmingly lacking awareness.
Now AI threatens to upend this.
It threatens not just what we think, but our processes of thinking at all.
Five Dangerous Differences
It could unlock unprecedented flourishing.
It might extinguish human consciousness entirely.
Whatever is happening, it’s arriving with unprecedented speed, scope, intentional design, inequality, and autonomous agents.
Each of these five carries its own concerns.
Put them together, and it's hard not to feel alarmed at the prospect of understanding and controlling what we face.
We've entered an era of human-assisted evolution, a seismic shift away from millennia of blind natural selection. For the first time in our history we can ‘talk’ with an intelligence that sounds as intelligent as us, and soon, maybe, will be more intelligent than us.
Traditional evolution moved slowly and blindly through random mutations and trial and error. We have now incorporated hypothesis testing, computer simulations, scientific experimentation, and more into the process.
As we become authors of our evolution, the very notion of our "authentic" self will have to battle with a self that's increasingly the product of deliberate design.
In this issue, I’m asking: What happens when, tomorrow, our agent doesn’t just help but acts on our behalf?
What does it mean for human agency when machines begin making decisions we might not have made ourselves?
And who’s responsible when the system messes up?
Let’s focus on autonomous agents.
Whose Choice Is It, Really?
AI Agents, Human Agency, and the Illusion of Control
The AI That Nudges Today Will Decide for You Tomorrow
Consumer-side AI agents aren’t widespread. Yet.
We don’t have a personal AI that fully represents us, at least not today.
But we’re very close.
And the groundwork is already in place: recommendation engines anticipating what you want, auto-suggestions that steer how you phrase things, calendars that quietly schedule your time, feeds that dictate your attention.
We call them tools: a passive word that implies no risk.
But these tools are already early proxies, nudging us, filtering, deciding, increasingly without the need for our explicit input.
1. The Coming Rise of Agentic AI
Right now, we’re in the pre-agentic phase.
AI needs our prompt, our swipe, our approval.
But across platforms, systems are transitioning from responsive to anticipatory. Spotify preloads songs before you choose. Google finishes your sentences before you type them.
These are low-grade agents, our ambient decision-makers.
The next few years will see the rollout of true agentic AI: persistent, personalised entities that monitor our preferences, learn our goals, and begin to act on our behalf across services and channels.
Our preferences become their parameters.
Our life becomes code for them to optimise.
2. When AI Knows You Better Than You Know Yourself
The Cambridge University Centre for the Future of Intelligence describes an “intention economy”.
It’s the place where your unconscious desires are inferred, predicted, and pre-acted upon using data patterns and emotional signals. Today, that shows in TikTok’s eerie ability to surface exactly what we’re feeling, or Spotify or Netflix suggesting our next listen or watch before we can search.
This isn’t just convenience. It’s proximity to our psyche.
When agents become close enough to your inner life to finish your intentions before you form them, the line between help and manipulation blurs. Emotional AI systems can now read tone, facial expressions, and typing cadence.
The question isn’t whether they can respond but whether they’ll eventually steer.
3. Does It Matter? The Hell It Does.
The frictionless future is seductive.
But ease erodes reflection. The Financial Times recently reported on the dangers of “cognitive offloading” where constant reliance on AI weakens our capacity for memory, reasoning, and independent judgment.
When everything is done for us, we stop doing.
That’s the danger: not that AI will overpower our choices, but that it will make us forget how to choose.
Today’s interfaces already flatten complexity.
Tomorrow’s interfaces may erase it altogether.
And when that happens, the loss won’t feel dramatic. It’ll feel like nothing. Just another seamless experience.
4. Agency Is Already Thin. AI Just Reveals It
The reality is, most of us already operate under automated defaults.
Social feeds curating our news.
Search engines shaping our worldview.
We live inside systems designed to predict—and redirect—our behaviour.
AI isn’t removing freedom. It’s surfacing how little we were using it in the first place.
The digital self-determination movement has highlighted how algorithmic design limits not only personal autonomy but also collective empowerment. When systems become so good at guessing what we’ll click, say, or buy, we stop noticing the subtle constraints.
And that’s when agency truly disappears. Not because it's taken away, but because it's been replaced, slowly and then suddenly, by preference simulation.
5. The Benefits Are Real. So Are the Trade-Offs
| Outcome | Short-Term Gain | Long-Term Risk |
|-----------------|--------------------------------------|----------------------------------------------|
| Efficiency | Saves time, reduces decision fatigue | Reduces intentionality, lowers reflection |
| Personalisation | Makes life feel tailor-made | Reinforces bubbles, limits surprise |
| Delegation | Handles tasks without burden | Builds dependency, weakens personal efficacy |
In trials where AI is used for advice or recommendation (such as finance or health), hybrid approaches, with AI paired with human oversight, outperform full automation.
Studies show trust increases when users feel they’re in control, even if the AI is technically more accurate. But as systems improve, that sense of control becomes harder to maintain and easier to fake.
6. So Who’s Responsible? Everyone and No One
In agentic systems, blame gets distributed. Did we “choose” what our AI recommended? Did the designer set the rules? Did the brand benefit from the influence?
These entangled decisions need new ethical frameworks. But today, those frameworks are behind the curve.
Explainability is one solution.
But most of us don’t want to read model logs or audit trails.
What we want is alignment: an AI that reflects our intent, not a company’s conversion goals.
Until we build frameworks that centre user values, not just usability, we risk slipping into a world where responsibility is always someone else’s problem.
7. The Strategic Shift
Don’t Design for Loyalty. Design for Agency
What should strategists do in the face of agentic evolution?
1. Design for re-entry.
Let users regain control easily. Re-train, pause, opt out.
2. Preserve healthy friction.
Not everything should be seamless. Give people space to think.
3. Make systems legible.
Explain why a decision is made, not just what it is.
4. Move from persuasion to alignment.
Shift from manipulating choices to reflecting goals.
5. Embed agency into defaults.
Don’t trap users in optimised loops. Create open systems.
Strategy in the age of agents isn’t about nudging harder. It’s about restoring the choice to choose.
8. Citizens, Not Just Consumers
When personal agents become mainstream, they’ll shape far more than purchases.
They’ll influence our political beliefs, our social relationships, our financial behaviour.
And when we all have our algorithmic co-pilot, society may fracture into micro-optimised realities.
That’s why this is bigger than consumer tech.
It’s a democratic question.
If we don’t protect collective agency, our ability to see, act, and decide together, we’ll find ourselves living in increasingly personalised, incompatible futures.
And we won’t have chosen them. We’ll have been chosen into them.
9. Then the Snapback: What Happens When We Realise It’s Gone
Agency rarely disappears in a single moment. It erodes over time: automated, delegated, optimised into silence.
Early on, we welcome the help.
Later, we stop noticing it.
Finally, one day, we realise something feels wrong.
Something isn’t ours anymore.
⏳ The Scenario: Waking Up in a Life You Didn’t Choose
Imagine Dani. She's successful on paper: a good job, great flat, curated playlists, a personal AI agent that handles everything from food delivery to booking her dates. Her days are clockwork. Seamless. Sorted.
Until one day, with an old friend, she gets a question that jolts her:
“Do you even like the man you’re dating?”
She hesitates. Not because the answer is no.
She hesitates because she doesn’t know.
Her agent picked him. It filtered her matches, suggested the meeting place, sent the thank-you note, and even recommended when to move in together. Her friends think he’s great. The AI confirms compatibility. But something inside her flickers with doubt.
That night, scrolling through her agent app, she sees that every meeting, meal, social event, email reply, and shopping choice was recommended and pre-approved: every choice effortlessly handled, and usually automated.
It feels… alien.
She didn't lose control all at once.
She delegated it away, one preference at a time.
🌀 The Personal Fallout
When we become aware of diminished agency, we feel:
Disoriented: “How much of this life did I choose?”
Disconnected: from decisions, relationships, identity itself.
Depressed or numb: not because things are bad, but because they no longer feel real.
Reactive: sudden swings away from tech, convenience, or optimisation.
These aren't UX complaints. They're existential ruptures. And they’re coming.
🏛️ The Collective Repercussions
This snapback doesn’t stay personal for long.
As more people recognise the quiet erosion of agency, the response may spill over:
Digital minimalism movements: rejecting agents, auto-anything, and algorithmic life.
Political agitation: calls for legal protections for autonomy, not just privacy.
Cultural shifts: media, art, and brands that glorify imperfection, friction, and human messiness.
New economic segments: brands that offer “human-led” as a premium service: no AI filters, no agents, no nudges.
This is the emotional climate crisis of AI: not irrational fear of robots rising, but us realising we’ve become spectators in lives we once authored.
📉 What Brands Will Face
Brands and platforms that rely too heavily on agentic optimisation, without safeguards for reflection, transparency, or handback control, may see:
Sharp declines in trust: we question whether we’ve been manipulated.
Dropouts from automated systems: as we opt out and uninstall in response to backlash campaigns.
A cultural credibility gap: being seen as “the system” rather than the ally.
The question won’t be “how well does this work?”
It’ll be “Does this still feel like me?”
There’s only one way forward, of course.
Brands must help us stay connected to ourselves. Design with re-entry as much as with retention.
These brands will retain loyalty when the snapback comes.
Conclusion: The Illusion of Effortless Choice
Your AI doesn’t need to overpower you. It just needs to *outlast* your attention. The danger isn’t in a dramatic coup of your agency—it’s in its gradual replacement with predictive convenience. A million micro-decisions made on your behalf. All frictionless. All well-meaning. All invisible.
We’re not being coerced.
We’re being carried.
And if we don’t act now to define what agency looks like in the age of AI, we may wake up realising we gave it away… one suggestion at a time.
---
Draw Your Line Before the Interface Does
(My Personal Note to my Readers)
Let me step out of analysis mode for a moment.
Here’s what I think.
We shouldn’t be afraid of AI because it’s too smart.
We should be afraid because it’s too helpful.
When it’s auto-filling in the gaps we once filled ourselves.
And in the process, we risk forgetting what it felt like to make choices.
To get it wrong. To explore, keep or discard. To change our minds. To be unsure.
This is the part that keeps me thinking. Not the rise of artificial intelligence, but the quiet surrender of our human agency disguised as convenience.
I don’t reject technology. I remain optimistic about AI across health, innovation, and growth - but I believe in drawing lines, and I believe we need strong guardrails before AGI.
Lines that say: This is where we decide.
This is where we pay attention.
This is where we take back the click and sit with the discomfort of choice.
If you're reading this, you’re probably one of the people who will help shape what comes next. Not just as a consumer, but as a strategist, a builder, a designer of systems. Maybe even a quiet sceptic inside the machine.
So here’s what I’m asking:
Stay awake.
Question the defaults.
Design friction where it’s needed.
Refuse seamlessness when it erases selfhood.
And when the time comes, speak up before the agents do it for you.
More soon.
Let’s not sleepwalk through this.
Because the alternative, of letting this transformation happen by accident, isn't acceptable. We run the risk of unintended consequences.
The interface is becoming the organising system between human consciousness and reality. We have a brief window to shape this relationship consciously, with intention, in service of human flourishing rather than blind optimisation.
The defaults we all set now will determine whether AI systems enhance human agency or undermine it, whether they support authentic identity development or exploit psychological vulnerabilities, and whether they create genuine value or sophisticated manipulation.
The most important design project of our time isn't building better AI.
It's building better relationships between humans and AI.
The choice is still ours. Let's use it well.
Next Issue #4: "Designing for Identity in Motion" provides practical frameworks for creating experiences that serve fluid human identity, the organisational capabilities required for contextual relevance, and strategic approaches for competing in agent-mediated markets.
In coming Intelligent Interfaces issues be prepared to grapple with some pretty complex and counterintuitive insights because there are no simple answers.
You can subscribe or unsubscribe to UNCX, UNCX Weekender and Intelligent Interfaces individually or in total. Hit the Subscribe or, at the bottom of every newsletter, the unsubscribe buttons to choose.
If this has been forwarded to you, you can subscribe to join the conversation.
If you’re a subscriber please share this with everyone you know who is shaping our AI future.
Thank you for reading,
Michael