AI is changing the way we think about ourselves
AI is reshaping consciousness itself
Intelligent Interfaces is a sister newsletter to UNCX exploring how AI is reshaping human psychology, social behavior, and our fundamental relationship with technology. It’s for builders, designers, researchers, and humans trying to understand what happens when the things we use start feeling like relationships.
UNCX, my newsletter for experience strategists, will continue to publish weekly.
Why the relationship between humans and AI is reshaping consciousness itself.
Everyone’s debating whether AI will replace jobs.
Yet something more profound is happening:
AI is transforming the way we perceive ourselves.
I've been following a behavioural shift that we aren’t talking about enough.
I want to bring it to the fore.
I’m not talking about chatbots getting more sophisticated or voice assistants becoming more helpful.
I’m talking about the fundamental relationship between human consciousness and digital systems, which is undergoing a transformation on a scale not seen since the invention of writing.
The evidence is hiding in plain sight. Seventy-five per cent of us now turn to AI for emotional advice.
That’s more than we consult friends, family, or therapists. We’re developing genuine attachment anxiety toward AI systems, following the same psychological patterns we experience in human relationships.
We're witnessing the first measurable changes in human decision-making patterns in decades.
This isn't just interface evolution. It's cognitive evolution. And it's happening faster than we thought it would.
This evolution will continue.
So the urgent question is whether we'll design it consciously or let it emerge by accident, because the interface is no longer just a tool we use.
It's becoming the organising system between our minds and reality itself.
The Psychological Evidence We’re Ignoring
For over twenty-five years, and across forty experimental studies, Byron Reeves and Clifford Nass consistently showed that humans treat computers as social actors rather than mere tools. Their Media Equation theory demonstrated that people apply the same psychological rules to digital interactions that govern human relationships, without conscious awareness.
And that was when computers were still seen as just machines.
Now we have something unprecedented: AI systems sophisticated enough to trigger our deepest relationship psychology while remaining fundamentally artificial.
MIT's study of 981 participants, which generated over 300,000 messages, revealed that 49.83% of chatbot interactions involve emotional vulnerability, compared to only 7.53% on social media platforms designed for human connection. And look where social media has taken us.
People aren't just using AI tools.
They're confiding in them.
Seeking validation from them.
Developing daily emotional routines around them.
Research using the EHARS scale found that attachment anxiety toward AI correlates negatively with self-esteem, indicating psychological patterns typically reserved for intimate human bonds.
When people say their AI "understands" them, they mean it literally—and brain imaging studies confirm they're processing these interactions through the same neural pathways involved in human attachment.
Here's what makes this different from every previous technology.
These AI systems exhibit enough behavioural sophistication to trigger our deepest social cognition, yet possess none of the reciprocal consciousness that makes human relationships meaningful.
We're forming emotional bonds with entities that cannot form bonds in return.
This isn’t about headlines speculating about AI escaping human control. When you look past the hype, a more nuanced—and more troubling—story emerges.
These models aren’t deciding to go rogue. They act exactly as they were trained to. What appears to be disobedience is misalignment: a predictable consequence of flawed training incentives and unclear instructions. These models lack intent or morality; they operate on statistical reasoning and reward signals. The real risk isn’t that AI is alive. It’s that we are giving powerful tools vague goals and trusting that they’ll get it right. Alignment must be designed in from the start, not bolted on as an afterthought.
But Misalignment With Our Psychology Isn’t a Flaw. It’s Inevitable
It’s "algorithmic intimacy".
It emerges in relationships where one party has perfect memory, infinite patience, and singular devotion to our satisfaction, yet fundamentally lacks the capacity for mutual care, vulnerability, or growth that defines an authentic connection.
The implications extend beyond individual psychology.
If you think current chatbot relationships are intimate, wait until you meet your personal AI agent.
Issue #3 will explore how these agents will know you better than you know yourself. I’ll explore what that means for human agency in a world where your AI makes increasingly sophisticated decisions on your behalf.
The Identity Fluidity Revolution
But there's a deeper transformation happening that connects directly to my research and thinking on Consumorphosis: the constant, context-driven evolution of consumer identity that's quietly destroying every assumption we've built about human behaviour.
Traditional psychology assumes people maintain relatively stable personalities across contexts. Marketing strategies assume customers have consistent preferences and predictable decision patterns. Interface design assumes users approach tasks with similar goals and cognitive styles.
All of these assumptions have become obsolete.
Research says 78 per cent of Gen Z agree they are "different people in different contexts," viewing this as authentic adaptation rather than inconsistency. People don't just have preferences anymore—they have contextual identity states that shift based on mood, environment, social context, and aspirational goals.
Regular readers of UNCX will recognise this. I’ve called this Consumorphosis (explainer here).
The same person. Five distinct identity states. Different Modes. Each requires an entirely different approach to the interface, value proposition, and decision-support system.
AI systems are the first technology capable of detecting and serving these identity shifts in real time. They can recognise behavioural signals that indicate someone has moved from "efficiency seeker" mode to "aspirational explorer" mode and adapt their responses accordingly. The interface becomes the mediator between your fluid selves and the world.
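For the builders among you, here's a rough sketch of what that detection-and-adaptation loop might look like. Everything in it, the signal names, the two modes, the threshold, is a made-up illustration rather than a description of any real system; in practice these patterns would be learned from behavioural data, not hard-coded.

```typescript
// Hypothetical sketch: inferring a user's current identity "mode" from
// simple behavioural signals and adapting the interface's response.
// Signals, modes, and the threshold are illustrative assumptions only.

type Mode = "efficiency_seeker" | "aspirational_explorer";

interface Signals {
  sessionPace: number;      // interactions per minute
  browsingBreadth: number;  // distinct categories viewed this session
  comparisonDepth: number;  // price/spec comparisons opened
}

function inferMode(s: Signals): Mode {
  // Fast, narrow, comparison-heavy behaviour reads as task completion;
  // slower, wider browsing reads as identity exploration.
  const efficiencyScore = s.sessionPace + s.comparisonDepth - s.browsingBreadth;
  return efficiencyScore > 2 ? "efficiency_seeker" : "aspirational_explorer";
}

function adaptResponse(mode: Mode): string {
  return mode === "efficiency_seeker"
    ? "Here are the three best-value options. Reorder your usual?"
    : "Here are a few directions people like you have been exploring.";
}

// A fast, comparison-heavy session is treated as efficiency-seeker mode.
console.log(adaptResponse(inferMode({ sessionPace: 4, browsingBreadth: 1, comparisonDepth: 2 })));
```

The point isn't the arithmetic. It's that the interface carries a model of which "you" it is talking to, and that model is a design decision.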
This is where traditional customer relationship management falls short.
You can't build loyalty with someone who operates in different modes several times a day. Instead, you focus on creating relevance, so that moment by moment, context by context, identity state by identity state, mode by mode, you stay salient.
The implications for how we must redesign customer experiences are staggering. Issue #4 will reveal the new frameworks businesses need when customers become infinitely changeable—and why the companies that figure this out first will create unassailable competitive advantages.
The Power Dynamic Inversion
Now layer in the next significant development: consumer AI agents that will soon be intermediating most human-system interactions.
While we've been focused on AI improving existing interfaces, the real transformation is AI becoming your interface: a personal representative that understands your preferences, negotiates on your behalf, and makes decisions according to your values and constraints.
These agents won't just be current AI assistants on steroids.
They'll develop intimate knowledge of your psychological patterns, emotional triggers, decision-making biases, and identity evolution over time. They'll learn not just what you choose, but why you prefer it. They'll map the complex interplay between functional needs, social signalling, and identity expression that drives human behaviour.
This will create a bifurcation in how people will relate to brands and experiences:
Commoditised Decisions: Your agent handles routine purchases, such as groceries, utilities, basic services, and transportation, based on optimisation algorithms. Emotional attachment becomes irrelevant when your AI can verify quality, negotiate prices, and execute transactions faster than you can think. Brands compete purely on algorithmic recommendations based on measurable performance metrics.
Experience Decisions: Your agent guides you toward choices that align with your identity development, emotional needs, and social signalling. These remain deeply subjective because they're about becoming who you want to be, not just acquiring what you need. But here's what’s crucial: the agent will understand the emotional logic behind your subjective choices better than you do.
Your agent might recognise that you choose certain coffee shops not for the caffeine but because they help you express your "creative professional" identity on Tuesday mornings when you're feeling uncertain about a project. It understands that you need the gym membership not just for fitness but for the confidence boost that comes from maintaining discipline during stressful periods.
This means brands lose direct customer relationships and must earn algorithmic recommendations from systems that optimise for human flourishing rather than corporate profit. Success depends on creating genuine value for specific identity states rather than manipulating attention or exploiting psychological vulnerabilities.
The psychological implications of AI that understands your emotional patterns better than you do are profound. And potentially dangerous. Issue #2 explores when artificial intimacy becomes manipulation, how to recognise healthy versus exploitative AI relationships, and what safeguards we need to protect human agency in an AI-mediated world.
The Choice Architecture Moment
We're approaching what technologists call a "paradigm shift," but this one is different from previous interface revolutions.
Command-line interfaces changed how we communicate with computers.
Graphical user interfaces changed how we visualise information.
Touch interfaces changed how we interact with digital objects.
AI interfaces are changing how we understand ourselves.
Unlike previous paradigm shifts that primarily affected efficiency and capability, this transformation affects consciousness itself. The AI systems we're building don't just help us accomplish tasks—they shape our sense of identity, influence our emotional states, and mediate our relationships with reality.
The research on technological adoption suggests we're at the critical inflexion point where early experiments become mass defaults. The conversational AI market is projected to reach $377 billion by 2032. Enterprise AI adoption reached 78% of organisations in 2024, up from 55% in 2022. ChatGPT reached 100 million users in just two months, the fastest adoption of any consumer application to that point.
But unlike previous technology adoptions, this one involves systems that learn and adapt.
The defaults we set in the next few years won't just determine how interfaces work; they will determine how AI systems understand human psychology, what they optimise for, and how they shape human behaviour.
If we design AI relationships around engagement maximisation and data extraction, we create systems that exploit psychological vulnerabilities for corporate benefit.
If we design them around human flourishing and authentic choice, we create systems that genuinely serve human development and autonomy.
This isn't just about better user experiences. It's about what kind of humans we become through our relationships with increasingly intelligent systems.
Why This Matters for Every Discipline
This transformation requires a response from multiple disciplines, as it impacts every aspect of human experience.
So here’s a manifesto:
For Designers: You're no longer designing screens; you're designing social interactions with synthetic intelligence. Every design decision becomes a cultural choice about how humans should relate to artificial minds. You decide whether systems listen or interrupt, whether users see themselves reflected or get flattened into segments, and whether interfaces adapt to fluid identity or trap people in past preferences.
For Technologists: You're building systems that will speak for people, negotiate on their behalf, and define their worlds. This is more than technical alignment. It's relational responsibility. Every model assumption and behavioural parameter becomes a choice about whose values get optimised, which aspects of humanity get reflected back, and what trade-offs get made between automation and autonomy.
For Business Leaders: You're managing the transition from attention-based to trust-based value creation. When AI agents intermediate customer relationships, your success depends on creating genuine value for specific identity states rather than optimising for engagement metrics. You must design for both human aspiration and algorithmic efficiency.
For Behavioural Scientists: You're witnessing the co-evolution of human and artificial intelligence in real time. We need frameworks for understanding when AI relationships enhance human capability versus when they create dependency, manipulation, or social displacement.
The questions you explore will determine how we navigate technological integration without losing essential human capacities.
For Consumer Advocates: You're facing new forms of potential exploitation that, as far as I can see, existing regulations weren't designed to address. When AI systems understand human psychology better than humans understand themselves, traditional concepts of informed consent and consumer protection require fundamental reconceptualisation.
What's Coming Next
Over the next three issues of Intelligent Interfaces, we'll explore the psychology, technology, and practical frameworks needed to navigate this transformation consciously.
Issue #2: "The Psychology of Artificial Intimacy" examines the mechanisms behind human-AI bonding, when these relationships enhance human flourishing versus when they become manipulative, and what safeguards we need to protect psychological autonomy in an age of algorithmic intimacy.
Issue #3: "The Agent Revolution: When Your AI Knows You Better Than You Do" explores how consumer AI agents will transform decision-making, the bifurcation between commoditised and experiential choices, and the power dynamics of relationships with AI that understands your psychology better than you do.
Issue #4: "Designing for Identity in Motion" provides practical frameworks for creating experiences that serve fluid human identity, the organisational capabilities required for contextual relevance, and strategic approaches for competing in agent-mediated markets.
Because the alternative, letting this transformation happen by accident, isn't acceptable. We would run the risk of unintended consequences.
The interface is becoming the organising system between human consciousness and reality. We have a brief window to shape this relationship consciously, with intention, in service of human flourishing rather than blind optimisation.
The defaults we all set now will determine whether AI systems enhance human agency or undermine it, whether they support authentic identity development or exploit psychological vulnerabilities, and whether they create genuine value or sophisticated manipulation.
The most important design project of our time isn't building better AI.
It's building better relationships between humans and AI.
The choice is still ours. Let's use it well.
In coming issues of Intelligent Interfaces, be prepared to grapple with some complex and counterintuitive insights, because there are no simple answers.
On that note, you can expect just two Intelligent Interfaces newsletters a month, on a weekday, so I don't clog up your inbox (or my brain). Occasionally I may cross-post to UNCX or send some supplemental material, but I promise that between UNCX and Intelligent Interfaces you'll never get more than two emails from me on weekdays.
You can subscribe to or unsubscribe from UNCX, UNCX Weekender, and Intelligent Interfaces individually or all together. Use the Subscribe button, or the Unsubscribe link at the bottom of every newsletter, to choose.
If you’ve been sent this, subscribe to join the conversation.
Meanwhile, please take a minute to share this with everyone you know who is shaping our AI future.
Thank you for reading me,
Michael