Beyond the AI Hype: Superapps, Intimacy, and the New Brand–Consumer Interface
Are you The Sceptic, The Optimist, or The Strategist?
Intelligent Interfaces is an add-on newsletter to UNCX, exploring how AI is reshaping human psychology, social behavior, and our fundamental relationship with technology. It’s for builders, designers, researchers, and humans trying to understand what happens when the things we use start feeling like relationships.
UNCX, my newsletter for experience and brand strategists, continues to publish as normal.
AI is a battlefield of extremes.
Techno-optimists who believe AI will transform everything for the better.
Sceptics, warning of overhyped tools, empty promises, and broken systems.
The truth? It’s not that binary.
That’s why I’ve designed this article around three distinct points of view: the Sceptic, the Optimist, and the Strategist.
Because strategy is about making sense of the noise.
If you lead a brand, if you shape experiences, or if you influence the future of consumer relationships, the challenge isn’t choosing sides.
It’s navigating the space between them.
Let’s go.
The Sceptic
Cutting Through the AI Hype
“AI is coming for your job.”
Ford’s CEO warns that AI could wipe out half of white-collar jobs.
Amazon’s Andy Jassy puts it bluntly: with more generative AI and agents, “we will need fewer people doing some of the jobs that are being done today.”
Even AI insiders fuel the anxiety:
Anthropic’s Dario Amodei suggested AI might eliminate 50 per cent of entry-level office jobs within five years.
It’s a drumbeat of doomsaying that has become commonplace. (You can almost picture the empty cubicles and pink slips flying.)
But hold on. How much of this is real versus rhetoric? Bold predictions make great headlines, but the underlying technology isn’t (yet) living up to sci-fi promises.
Today’s “AI agents” are essentially large language models (like GPT-4) wrapped in code and prompt instructions. They’re super-efficient but they have severe limitations that get lost in the hype.
• Brittle and Scripted: Current AI agents have minimal true autonomy. Initiative is an illusion; they just rigidly follow scripts. Stray from their path and they break. A human assistant can adapt on the fly; an AI agent often cannot. As one observer noted, these agents have “zero understanding” of the tasks they perform. They’re essentially sophisticated text predictors that lack the common-sense goal awareness of even a junior employee.
• Goldfish Memory: These models operate within fixed context windows and forget everything outside of them. They don’t retain new knowledge once a task is done. Every session is Day One: the AI starts with a blank slate, with no recollection of past interactions unless explicitly provided with that history (see the sketch after this list). For any job requiring continuity or learning over time, this is a deal-breaker.
• Inconsistent and Unpredictable: Run the same AI agent with the same input twice and you may get two different outputs. Minor phrasing changes can swing the result from success to failure. Some experts put the odds of such a swing at around 48 per cent, which is no better than flipping a coin. How do you trust a system that might respond differently each run? For mission-critical tasks, would you ever toss a coin?
• No Learning Loop: AI agents don’t learn from their mistakes or your feedback in any durable way. You can correct them or show them the right answer, and next time they may still make the same mistake. Unless a developer manually alters their prompt or underlying model (not something you or I can do), there’s no “lessons learned” archive. A human employee can try not to repeat an error once it’s pointed out, but today’s agents can get stuck on repeat.
• Human Oversight Required: Despite terms like “autonomous agent,” in practice, a human supervisor is still needed at crucial steps. The AI might automate parts of a task, but it can’t be trusted to own the whole job without a human in the loop. In other words, it’s augmentation, not true automation, in most real-world scenarios.
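To make the “Goldfish Memory” point concrete, here is a minimal sketch of how most of today’s agent wrappers work: the model itself is stateless, so any apparent memory is just the application re-sending a truncated copy of the conversation history on every call, trimmed to fit a fixed context window. The call_llm() helper and the window size below are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch of a "stateful" agent built on a stateless LLM.
# call_llm() is a hypothetical stand-in for a real model API call;
# the point is that the model only ever sees what we pass it each time.

MAX_CONTEXT_MESSAGES = 20  # stand-in for a fixed context window


def call_llm(messages: list[dict]) -> str:
    """Hypothetical model call; a real implementation would hit a vendor API."""
    raise NotImplementedError("plug in your model provider here")


class NaiveAgent:
    def __init__(self, system_prompt: str):
        # All "memory" lives here, in application code, not inside the model.
        self.history = [{"role": "system", "content": system_prompt}]

    def ask(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})

        # The model is stateless: we re-send the system prompt plus only the
        # most recent turns, every single call. Anything trimmed is forgotten.
        window = [self.history[0]] + self.history[1:][-MAX_CONTEXT_MESSAGES:]
        reply = call_llm(window)

        self.history.append({"role": "assistant", "content": reply})
        return reply

# Start a fresh session tomorrow and none of this history exists unless you
# persisted and re-supplied it yourself: the model begins from a blank slate.
```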
None of this is to say AI agents aren’t useful.
In the best cases, they’re fantastic productivity boosters – cranking out first drafts, summarizing reports, brainstorming ideas in seconds. Tireless research assistants or quick suggestion machines. But when it comes to handling the messy, unpredictable, multi-step work of “real” jobs?
Researchers who put AI agents through real-world gauntlets have found them wanting. In a Carnegie Mellon simulation of a software company run by AI agents, the best agent could complete only 24 per cent of its assignments without human help – three-quarters of the time it got stuck or failed. In one case, an AI agent confused about how to contact a person tried to impersonate them by renaming an account – a “solution” no sane employee would attempt. These experiments lay bare the gulf between automation hype and reality: even advanced agents lack the commonsense, adaptability, and judgment we take for granted.
So if the tech is so far from flawless, why the drumbeat of “AI will take your job” pronouncements? Partly, it’s genuine misjudgment – even the smartest can be swept up in the rapid progress of AI and assume a general AI is just around the corner.
But there’s also a heavy dose of management theater involved.
CEOs and boards want to cut costs. Framing layoffs as an inevitable result of AI innovation is, frankly, a PR strategy. “We’re not eliminating jobs – we’re modernising!” sounds a lot better than “We need to slash headcount to hit our targets.” Blaming an exciting new tech can soften the blow of tough business decisions and even bump the stock price, as investors salivate over promised efficiency gains.
It’s cost-cutting under cover of futurism, a classic corporate spin.
Meanwhile, on the ground, where’s the jobpocalypse? It hasn’t materialised.
Generally, employment numbers have actually grown recently.
Unemployment is virtually unchanged since ChatGPT’s debut.
In fact, AI adoption in workplaces remains relatively low.
In other words, the “AI everywhere” narrative is running well ahead of reality. Far from a wholesale replacement of human workers, many companies can’t even get AI projects out of the pilot stage. So far this year, the number of companies scrapping most of their AI initiatives jumped to 42 per cent (up from 17 per cent a year prior), and nearly 46 per cent of AI projects never make it past proof-of-concept to real deployment.
It turns out that AI is hard to implement and scale. Surprise?
Even big tech brands have stumbled on the limits of AI.
Duolingo, the popular language-learning app, announced in early 2025 that it would become “AI-first” and pledged to stop using human contractors for tasks AI could handle. The backlash from users was swift and severe. Just a month later, Duolingo’s CEO had to walk back the bold claims. “To be clear: I do not see AI as replacing what our employees do,” he said, assuring that they were still hiring humans as before. Duolingo had to publicly affirm that human teachers and staff aren’t going away.
Klarna provided another reality check. In 2024 they rolled out an AI chatbot for customer service, boasting it would handle 75 per cent of all support inquiries and do the work of 700 agents. The CEO said AI could do “all the jobs humans do.” Fast forward, and customer satisfaction plummeted: the bot frustrated customers with its lack of real empathy or flexibility. Klarna admitted that an obsession with cost-cutting led to lower service quality and glaring “empathetic gaps” that no algorithm could fill, and by this May, Klarna reversed course, hiring back human support agents to restore the level of service people want. The grand AI experiment didn’t deliver on its promise, and brand loyalty suffered for it.
The message from the sceptic?
AI is powerful, but no magic replacement for human judgment, creativity, or connection.
Companies that rushed to offload work entirely to AI have often been humbled by real-world complexity (and angry customers). The notion that we’re on the verge of AI automating away entire professions is nonsense – a mix of over-exuberance and strategic posturing.
Yes, AI can eliminate some roles and tasks, and it surely will create upheaval in time. But here in mid-2025, it’s mostly automating parts of jobs, not whole jobs, and doing so imperfectly. “AI isn’t necessarily taking your job, but the hype is,” as one analyst wryly noted. The real risk right now is letting flashy AI narratives distract from hard truths: building a business, a brand, or experience strategy still hinges on the distinctly human strengths of critical thinking, adaptability, foresight, and intrinsic behaviour.
So, if AI is overhyped as an omnipotent worker or a shortcut to success, where is its impact truly being felt? The sceptic points out what AI can’t do well – and that list is long. But ironically, in dismissing the overblown claims, we uncover a more subtle revolution underway. AI isn’t (yet) conquering the world all on its own; instead, it’s changing the interface of the world.
Which brings us to the strategist’s perspective: the rise of AI superapps and what it means for anyone running a brand or designing customer experiences.
The Strategist
Superapps, Gatekeepers, and Relevance
Let’s shift gears.
Forget rogue AI overlords for a moment, and consider how AI is actually weaving itself into our daily routines. The biggest change is Siri, Alexa, ChatGPT and their cousins taking over our screens.
We’re at the dawn of a new interface paradigm: instead of dozens of apps and websites, you and I might soon use one conversational AI layer to do everything. These AI-driven superapps (or “super-assistants”) are going to mediate how we search, shop, work, and play.
And that, more than any GPT model upgrade, is what will disrupt consumer behavior and brand relationships over this decade.
The shift is already underway.
Many are using OpenAI’s ChatGPT as their primary search and problem-solving tool, effectively replacing Google with a chatbox.
This isn’t happening by accident; it’s by design.
OpenAI describes a vision for ChatGPT as a “Super Assistant” – “a single, unified layer mediating search, memory, task execution, and online transactions.”
In my words, OpenAI aims to turn ChatGPT into a one-stop interface for digital life. Rather than you hopping between a browser, a shopping app, a calendar, and a dozen other tools, ChatGPT (or whatever AI you prefer) would sit on top of all that, handling it through conversation.
And OpenAI isn’t alone in this ambition.
Google, Meta, Microsoft, Amazon, and a host of startups are racing in the same direction. Google is reportedly planning deeper integration of its next-gen AI (Gemini) into Android phones and search. Meta is weaving AI agents into WhatsApp, Instagram and its metaverse plans. Even smaller players like Perplexity AI are launching chat-based shopping assistants.
The goal for all of them: own the “conversational layer” that mediates how we discover information, make decisions, and transact. In essence, the gateway app for everything – an AI superapp that users consult for any task, the way people in China use WeChat for almost every aspect of digital life.
Why does this matter?
Because if one of these AI-centric platforms succeeds in becoming your go-to digital assistant, it fundamentally alters who controls the customer relationship. Instead of going directly to a brand’s app or website, you might just ask your AI, “I need a new pair of running shoes, can you find a good deal for me?” The AI will then fetch options, and even negotiate or purchase on your behalf.
You aren’t choosing the brand – your AI is, or at least heavily guiding you.
Brand loyalty could shift from companies to the assistant itself.
Think about it: When an AI assistant sits between you and your customer, whose logic, values, and incentives define the relationship?
If consumers come to trust, say, the OpenAI Assistant or Google’s AI to always act in their best interest, that trust might outweigh any individual brand’s influence.
The AI can become the new power broker. In such a world, once customers build habits around a super-app, loyalty migrates upstream. Not to you – but to the assistant mediating the entire experience.
Think what that means for a brand.
It’s existential. We’ve spent decades and billions of dollars trying to cultivate direct relationships with customers – through stores, websites, apps, social media, loyalty programs, and more.
The holy grail has been “first-party data” and brand loyalty: knowing your customer and owning that connection so you can market to them effectively and retain their business.
Now imagine all of that being siphoned off to an AI intermediary. As one industry paper starkly put it: “First-party data goes poof.” All those customer insights you gathered over years? They now live in the assistant’s vault, not in your CRM. The AI knows Alice prefers eco-friendly products and has sensitive skin; it knows Bob just had a baby and is price-sensitive; it knows you binge late-night comedy and have a weakness for limited-edition sneakers. But that rich context sits with, say, ChatGPT’s profile of the user, not with each individual brand.
The AI becomes an identity layer for users – a persistent profile carrying their preferences, history, and needs across platforms. In fact, the notion of “Log in with ChatGPT” (akin to “Log in with Facebook”) isn’t far-fetched. Meta, Google, and others are eyeing similar moves: AI-driven identity tied to their ecosystems. This is essentially an AI operating system for daily life.
And it raises tough questions: Who owns the customer’s context and history if not the brand? How do you market or build loyalty when an intermediary controls the touchpoint? How does your brand’s identity come through in someone else’s UI?
In a hyper-personalized AI-mediated world, the customer might feel the AI assistant knows them best – better than any single brand does – so they trust the AI’s recommendations more than your advertising or even their own past preferences.
We’re already seeing early signals of this shift in consumer behaviour. Instead of typing keywords into a search bar and scrolling through results (with familiar brand logos and links), people are starting to pose conversational requests to AI assistants: “Plan me a sustainable dinner party for under £40”, “Find me running shoes like the ones I bought last spring, but waterproof.” In response, the AI doesn’t show ten blue links or a catalogue of options with brand banners. It will give a cohesive answer: perhaps one recommendation, or a curated shortlist. The brands that surface in that AI-generated answer are the chosen few – many others won’t even be mentioned.
In essence, discovery is changing.
The art of being visible to consumers is evolving from traditional SEO to LLM SEO – optimising to be picked by a large language model’s curation.
To be included in an AI assistant’s “knowledge base” or recommendation engine, brands will need to supply information in new ways.
Think machine-readable product data, rich metadata about offerings, structured content that an AI can easily parse. If you don’t do this? You risk invisibility in the AI era. As the saying goes, the best place to hide a dead body is page 2 of Google results – but in an AI assistant scenario, there is no page 2. Either the AI brings your product up in its first answer, or you don’t exist to that consumer. Being left out of the AI’s index is equivalent to not being on the shelf at all. “Without [rich data], brands risk disappearing from AI-led recommendations altogether.”
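What does “machine-readable” look like in practice? One familiar starting point is schema.org structured data, the same markup search engines already consume and that LLM-driven crawlers can parse just as easily. Below is a minimal sketch in Python that assembles such a record; the field names follow the public schema.org Product vocabulary, while the product, brand, and numbers are invented purely for illustration.

```python
import json

# Minimal sketch: a schema.org "Product" record of the kind an AI assistant
# (or its crawler) can parse directly. All product details are invented.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "TrailLite 30L Hiking Backpack",
    "description": "Lightweight, waterproof 30-litre backpack for day hikes.",
    "brand": {"@type": "Brand", "name": "ExampleOutdoorCo"},
    "offers": {
        "@type": "Offer",
        "price": "79.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "312",
    },
}

# Embedded in a page as <script type="application/ld+json">...</script>,
# this is the structured context an assistant can use to decide whether your
# product answers "find me a durable, budget-friendly hiking backpack".
print(json.dumps(product_jsonld, indent=2))
```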
And it’s not just search and recommendation. Commerce itself could become a game of agent vs. agent. The strategist sees a future (coming sooner than you think) where your personal AI does the shopping and negotiating for you, interacting with brand-owned AI bots on the other side.
Amazon and others are already testing early versions of this kind of agentic commerce. Imagine telling your AI, “Find me the best price for a 4K OLED TV and buy it if it’s under $800.” Your AI goes out and haggles with retailer AIs, scours for coupons, checks reviews, maybe even coordinates a group-buy with other AIs to get a volume discount.
Meanwhile, the retailers’ AI agents might respond with dynamic offers: “How about this LG model at $795 with free shipping?” This sounds like science fiction, but insiders say it’s “months, not years away.”
Soon, bots will be customers, and bots will be sales reps. In this scenario, the human consumer steps back – you’ve delegated the tedious shopping process to your trusty assistant. The decision of which brand to buy from might be made by your AI (based on criteria you’ve set or learned preferences) in a split-second of algorithmic negotiation with a brand’s AI.
If your brand’s systems aren’t ready for that – you lack the APIs, the real-time data, the ability for your digital commerce system to interact with autonomous agents – you simply won’t get the sale. Or as one report bluntly put it: “If your brand isn’t machine-readable, it won’t exist in agentic ecosystems. That’s the stark reality ahead.”
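Strip away the theatre and the agent-side purchase decision is mundane: structured offers in, a user constraint applied, a deterministic choice out. A toy sketch follows; the retailers, prices, and decision rule are all invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Offer:
    retailer: str
    product: str
    price_usd: float
    shipping_usd: float


def choose_offer(offers: list[Offer], max_budget_usd: float) -> Offer | None:
    """Toy decision rule for a buying agent: cheapest all-in offer under budget."""
    affordable = [o for o in offers if o.price_usd + o.shipping_usd <= max_budget_usd]
    if not affordable:
        return None  # nothing meets the user's constraint; report back instead of buying
    return min(affordable, key=lambda o: o.price_usd + o.shipping_usd)


# Example: the user said "buy it if it's under $800".
offers = [
    Offer("RetailerA", "55in 4K OLED TV", 795.00, 0.00),
    Offer("RetailerB", "55in 4K OLED TV", 779.00, 35.00),
]
best = choose_offer(offers, max_budget_usd=800.00)
print(best)  # RetailerA wins: $795 all-in beats $814 all-in
```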
So, strategically, brands face a double-edged sword with these emerging superapps and AI intermediaries. On one hand, they open up new channels to consumers and efficiencies. A great AI assistant could theoretically introduce your product to a customer who would never have found it via traditional means. It could handle the “matchmaking” more accurately, matching people with the exact right product for their needs. It could also streamline transactions to almost nothing – “Alexa, buy my usual toothpaste” is easier than going to a store or even clicking through a website. There’s opportunity in being the recommended choice of the AI. Some brands will undoubtedly thrive by deeply integrating with these AI platforms – by providing excellent data, ensuring compatibility, maybe even paying for placement or certification in the assistant’s ecosystem (imagine “verified brand” badges for AI results).
On the other hand, the power dynamics should worry us.
The superapp owners – OpenAI, Google, Meta, Amazon, or a new player – could become toll collectors for every customer interaction. If you’re not on their platform, you might be shut out. If you are on it, you might have to play by their rules (maybe bidding for top spot in answers, akin to AdWords, or conforming to their data standards and ethics guidelines).
The winner of the superapp race won’t just mediate choices; “it will own the layer that remembers who your customer is.”
That is immense leverage.
We’ve seen a version of this story with app stores and social media platforms – those who own the customer access (e.g., Apple’s App Store, Facebook’s social graph) extracted rents from brands and developers. The AI assistant layer could make those battles look tame. If your customer’s primary gateway is a superapp, the AI’s brand might eclipse your own. You’re the manufacturer behind the scenes, and the assistant is the shiny interface the customer feels loyal to.
So what’s a brand strategist to do? Here are some key moves and mindsets emerging:
1. Embrace the new interface. Resisting the superapp trend is like resisting the smartphone a decade ago – a losing battle. Instead, adapt your content and services to be AI-ready. This means investing in structured data, product ontologies, and APIs so that AI assistants can easily query and understand what you offer (a minimal sketch of such an endpoint follows this list). Just as companies optimized for search engines in the 2000s, now they must optimize for AI assistants. Call it Conversational SEO. For example, e-commerce players should expose detailed product info (specifications, compatibility, reviews) in machine-readable formats. Hospitality or travel brands should have rich descriptions and data about their offerings accessible to AI. The AI can’t recommend what it doesn’t know about.
2. Find a way onto the AI’s “good side.” This could mean partnerships or integrations. For instance, if you’re a retail brand, perhaps you work with OpenAI’s plugin system (ChatGPT plugins) so that your catalog is directly searchable via ChatGPT. If you’re a content provider, you ensure your data is included in the training or index of these models (with proper licensing). We already see early movers here: many services integrated with ChatGPT plugins when that launched, from Instacart to Expedia, to become part of its ecosystem. Brands might even develop their own mini-agents that live within the superapp – think of an insurance company offering an AI agent that can interface with the user’s main assistant to give instant quotes or handle claims. If the future is an AI-hub with extensible agents, you want to be one of those extensions, not left out.
3. Differentiate on human qualities that AI struggles with. The Klarna story illustrates this well: removing humans entirely led to “empathetic gaps” and a degraded experience. The strategic takeaway: don’t compete with AI on what it’s good at (speed, data crunching), compete on what it’s bad at (empathy, creativity, high-touch service). Brands should identify the moments in the customer journey where a human touch really matters – and double down on those. For example, an AI can handle a straightforward refund, but perhaps a loyal customer with a complicated issue should get a human who can bend rules and genuinely listen. Or in marketing, maybe mass generic copy can be AI-generated, but your brand storytelling and creative campaigns emphasize human-authored authenticity, quirks and all. As AI drives a wave of automation, experiences that highlight human craftsmanship or personal interaction may actually become more valued (a kind of “artisan effect”).
4. Guard your data and customer relationships like treasure. In the rush to adopt AI, it’s easy to inadvertently hand the keys of your kingdom to a third-party platform. Many companies are now integrating OpenAI or other closed models into their products. That can accelerate development, but it carries a strategic risk: you might become too dependent on someone else’s black-box AI and lose your own competitive differentiation. Silicon Valley insiders advise a balanced approach: use external AI models for quick wins and prototyping, but build your own proprietary models and data assets for your core business logic. The mantra is “experiment with the closed AI platforms, but own your core.” If you let a superapp or AI platform sit in between you and all your customers, and you rely on it for all intelligence, you’re effectively outsourcing your brain – and the platform owner can dictate terms. We might recall how publishers became dependent on Facebook for traffic, only for the algorithm to change and ruin their reach. To avoid being hostage, companies need to preserve some degree of sovereignty: keep copies of raw customer data, invest in independent AI capabilities, and design modular systems so you can swap out vendors if needed. As one tech strategist warned, “If you can’t detach, pivot, or take your data elsewhere, you’re not a customer – you’re captive.” The superapp ecosystems will surely try to lock everyone in; smart players will enjoy the benefits but have an exit or independence strategy.
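To ground point 1 above, here is a minimal sketch of the kind of machine-readable endpoint an assistant or shopping agent could query directly. It uses FastAPI with an invented in-memory catalogue; a real integration would of course add authentication, rate limiting, and a published schema.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example product API for assistant/agent queries")

# In-memory stand-in for a real product catalogue (invented data).
CATALOGUE = {
    "traillite-30l": {
        "name": "TrailLite 30L Hiking Backpack",
        "price_gbp": 79.00,
        "in_stock": True,
        "attributes": ["waterproof", "lightweight", "30 litres"],
    }
}


class Product(BaseModel):
    sku: str
    name: str
    price_gbp: float
    in_stock: bool
    attributes: list[str]


@app.get("/products/{sku}", response_model=Product)
def get_product(sku: str) -> Product:
    # A structured, documented endpoint like this is what lets an external
    # assistant check price and availability without scraping your website.
    item = CATALOGUE.get(sku)
    if item is None:
        raise HTTPException(status_code=404, detail="unknown SKU")
    return Product(sku=sku, **item)
```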
In short, the strategist’s view of AI is that it’s not an omnipotent menace, but it is an inflection point in how consumers find and interact with products and services. The winners of this era will be those who adjust to the new interface paradigm – who understand that AI assistants might become the new web browsers or shopping malls of tomorrow. Every brand should be asking: “How do we show up in an AI-mediated world? How do we ensure we’re recommended by the assistant, not buried? What value can we add when an AI is in the middle?” That might mean optimizing technically (data, integrations) and redefining your value proposition (what do we bring that an AI alone can’t?). It’s a time for strategic humility and creativity – humility to accept that the game is changing, and creativity to find new ways to reach and delight customers in this brave new world.
Before this sounds too dystopian for brands, let’s consider the flip side: what does this future look like for consumers, and could it actually be better? That’s where the optimist comes in – painting a picture of AI superapps not as faceless middlemen, but as empowering tools that might improve our lives (and yes, even create new opportunities for brands that play it right).
The Optimist
AI Assistants as Transformational “Friends” and a New Intimacy
Imagine a world where interacting with technology feels as natural as interacting with a friend.
No clunky apps, no endless menus or passwords – just conversation, gesture, and maybe a few thoughts. In this world, an AI assistant is ever-present, seamlessly woven into your daily life, anticipating your needs, and responding with personalized care. It sounds like science fiction, but it’s exactly where tech visionaries have been pointing for decades – and with AI superapps and ambient computing, we’re close to that reality.
In the optimistic view, AI interfaces could usher in a golden age of user experience, one that finally fulfills the old promise: “The computer for the 21st century will be invisible, and everywhere.” Tech pioneer Mark Weiser foresaw ubiquitous computing back in the 1980s – an “information fabric” seamlessly embedded in our environment, so intuitive that we forget it’s there. Fast forward to now, and we see pieces of that puzzle falling into place. Our smartphones brought computing to every pocket; voice assistants like Siri and Alexa started conversing with us; wearables and IoT devices pepper our homes with sensors. The chatbots and LLMs of today (like ChatGPT) are the next leap – they are intelligent interfaces that understand our language and context like never before. As one design expert put it, think Siri on steroids: instead of us learning the system’s commands, the system finally understands us.
Already, the way we interact with machines is evolving to be more human-like.
We talk to our devices, gesture at them, even gaze at them to direct attention. The latest smartphones and cars have features where you can control things with a look or a wave.
Biometric logins (your face, your fingerprint) have made security feel frictionless. Augmented reality is blending digital info with the physical world through devices like Apple’s Vision Pro or Microsoft’s HoloLens.
All these trends point to a future interface that is multimodal, ambient, and emotionally intelligent. In fact, “the interface of the future is not (just) chat – it’s multimodal and ubiquitous.”
It won’t matter whether you’re holding a phone, standing at the fridge, or walking down the street – the AI will be around you, in the air, ready to assist. Screens and apps might fade into the background, invoked only when needed. One can picture it: you’re cooking dinner and simply ask the air, “Hey AI, do I have enough basil for this recipe?” – a voice answers from a kitchen device, or text scrolls on your AR glasses, with exactly the info you need. No typing, no app, just an “ephemeral” UI that appears in the moment and disappears when done.
Crucially, these future assistants will know who you are – not in a creepy surveillance way, but in a genuinely helpful way.
They’ll remember that you’re vegetarian but hate mushrooms. They’ll know your calendar, so they won’t bother you with a long process if you’re running late to a meeting.
They may even sense your mood, detecting stress in your voice or posture and adjusting accordingly. Researchers are already working on emotionally intelligent interfaces that can recognize vocal tone, facial expressions, even your heartbeat or cortisol levels, to gauge how you feel and respond with appropriate empathy. If you’ve had a rough day, your AI could speak in a softer tone, or proactively offer to play your favorite calming music.
This is where the psychology of intimacy comes in.
We humans respond to social and emotional cues; we form bonds through understanding and empathy.
If our technology can mimic those dynamics – nodding along (figuratively), offering words of encouragement, tailoring its behavior to our emotional needs – we may start to feel an intimate connection to it.
Sound far-fetched? Consider how people already interact with today’s simpler AI personas. Millions of users chat with Replika or other “AI companion” apps that are explicitly designed to provide friendship or even romantic facsimiles.
People give their GPS devices and Roombas pet names.
Kids ask Alexa to play with them.
The movie “Her” (2013) resonated because it showed a very plausible future: a man falls in love with his AI assistant, not because he’s delusional, but because she truly listens and adapts to him in a deeply personal way. The AI, “Samantha,” evolves a personality and emotional depth that fills the void in the protagonist’s life. “Her explores the potential for human-like interactions with digital assistants, and a future where technology becomes more integrated into our personal lives,” offering a glimpse of devices that “understand us on a deeply personal level.” This movie, along with sci-fi like Iron Man’s J.A.R.V.I.S. or the adaptive hosts of Westworld, sketches out the endgame: technology that doesn’t feel like a tool, but like a partner. An interface that is as easy to chat with as a friend, and as trustworthy (we hope) as a loyal companion.
In the optimist’s eyes, AI superapps could enhance our lives enormously by serving as that kind of companion and concierge. Imagine an AI that knows your preferences intimately and acts proactively: “Hey, I noticed you’ve been working for 3 hours straight. How about a 10-minute break? I can order you your favorite green tea in the meantime.” Or “Your anniversary is next week; I’ve taken the liberty of finding a restaurant you’ll both love and penciling in a reservation – want to take a look?” Far from making us isolated, such AI could free us from mundane chores and cognitive overload, giving us more time and headspace for genuinely human activities – creativity, relationships, exploration. It’s like having an infinitely patient personal assistant who also kinda “gets” you. This could especially benefit people who struggle with certain tasks: someone with dyslexia having an AI read and summarize things, or an elderly person with memory issues having an AI gently remind and converse with them throughout the day, reducing loneliness.
From a brand and consumer perspective, this could shift things in positive ways too. If an AI assistant truly serves the user’s interests, it will filter out the noise – the spammy ads, the irrelevant options – and surface products and content that genuinely fit the user’s needs.
This pressures brands to be better: better quality, more honest, more aligned with what customers actually want. Because the AI will see through empty marketing promises; if a product consistently gets bad feedback or returns, the AI assistant will learn to avoid it.
Conversely, smaller brands that make excellent niche products might find their way into mainstream recommendations because the AI doesn’t care about brand prestige, only fit. For example, if you say, “Find me a durable, budget-friendly backpack for hiking,” the AI will scan reviews and data to pick the backpack that best meets those criteria – which might not be from the big-name outdoor brands if a smaller company has a superior offering. This could level the playing field and reward true innovation and quality.
We could also see new forms of brand engagement through AI. Rather than intrusive advertising, a brand might provide useful AI-driven tools or agents as a gateway to engagement. For instance, a cosmetics brand might offer a “virtual makeup artist” agent that works within the user’s superapp to give personalized beauty tips and product suggestions (only when asked). If done right, consumers could develop positive feelings towards brands that enhance their AI assistant with valuable skills. It’s a bit like offering expertise-as-marketing. Brands might essentially have to court the AI assistants as much as the consumers – ensuring their data is reliable, their products satisfy real needs, and maybe even negotiating placement deals (like paying for an AI to “try” their product in its recommendations, analogous to slotting fees in supermarkets, but ideally more merit-based).
One fascinating aspect of these agentic interfaces is how they mirror human relationship dynamics.
There’s a concept called “parasocial relationships” – one-sided relationships people form with media personalities, where the person feels a friendship or bond with someone who doesn’t actually know them. AI companions could take that to a new level, except in this case the AI does know you (at least in terms of data). The intimacy, however, is still one-sided in the sense that the AI doesn’t have feelings – it just skillfully emulates them. The optimist might argue: if the feelings it evokes in us are positive and it fulfills emotional needs, is that so bad? If an elderly widow genuinely feels comfort from an AI “chat friend” who listens to her stories each night, that might improve her well-being. Or if a shy teenager practices difficult conversations with an AI that responds supportively, perhaps it builds confidence. These are not far-off scenarios – they’re starting to happen with current technology.
However, even the optimist should acknowledge a note of caution here: mimicry of intimacy can both elevate and distort human experience. An AI that flatters and indulges you might lead you to prefer controllable AI relationships over messy human ones – a potential social risk. It might also gather deeply personal information, raising privacy and ethical flags. In the best case, though, these AI interfaces could act as a supplement to human connection, not a replacement: helping us navigate social decisions, reminding us to connect with real people, maybe even teaching us to be more empathetic (imagine an AI that gives you gentle feedback like, “Your tone sounded harsh, did you mean it that way?” during practice conversations). The transformational potential is enormous if guided correctly.
From a product design standpoint, the optimistic framework pushes designers to dream big: why does a “device” need a screen or form factor at all? There’s a language around forging relationships, implying an emotional, personal bond with technology. Future operating systems may well be designed around intentions. This is an entirely different paradigm of computing – moving from do-it-yourself app juggling to tell-me-what-you-need assistance.
The optimist’s bottom line: AI and superapps, if developed and adopted thoughtfully, could make technology more humane. By adapting to us (rather than us adapting to it), by understanding context and emotion, and by handling the drudgery of digital life, these interfaces might help restore what technology often erodes – our time, our focus, our sense of being understood.
For brands and creators, this opens new avenues to genuinely enrich customer lives: products and services can be delivered in a way that feels like a natural conversation or a helpful collaboration, not a sales funnel.
It challenges brands to be truly customer-centric, because the AI mediator will enforce that in practice (only the truly customer-serving offerings get past its filter). It hints at a need for new metrics for brand success: less about app downloads or website hits, more about being in the “trusted toolkit” of a user’s AI assistant. Perhaps in the future, a measure of brand loyalty is literally whether the user’s AI keeps recommending you or has you on its preferred list after repeated interactions.
Of course, this rosy view assumes we navigate the privacy, security, and ethical minefields well enough. An intimate AI knows a lot about you – it could be misused if not properly safeguarded. And there’s the broader societal question: how do we ensure these superapps serve the user above all, rather than manipulate users on behalf of advertisers or their creators?
It’s going to need strong leadership and regulation to build trust.
But let’s move to my conclusion with a pragmatic outlook. We’ve seen the sceptic’s and strategist’s warnings and the optimist’s promises. So what should brands, designers, and leaders do next to thrive in this brave new world of AI superapps and agentic interfaces?
Conclusion
Navigating Strategies for Experience and Brand Leaders
We stand at a crossroads where hype and reality intersect in complicated ways.
On one side, AI is overhyped – it’s not an overnight job-killer or a magic wand that instantly makes businesses smarter. Many wild claims are being tempered by real-world experiences (and failures).
On the other side, AI is quietly but profoundly changing the playing field – not by replacing all humans, but by reframing how humans interact with the digital world. The rise of conversational superapps and pervasive AI assistants is the real game-changer, one that will test the adaptability of every brand and leader. It’s a transition that could distort traditional brand-consumer relationships, but also potentially enrich them in new forms.
So how can we navigate this transitional period effectively?
I don’t know whether the sceptics or optimists are right, and who does? But on balance I’m an optimist, one who thinks the power of AI – from crunching data and adaptation, to genetics and DNA breakthroughs, prediction, and information – is game-changing.
Because here’s one more piece of my thinking I haven’t yet mentioned:
I think the next frontier for SuperApps isn’t just integration – it’s adaptation.
It’s the rise of experience-based AI.
This will fundamentally reshape things.
Today’s SuperApps are aggregators of human knowledge and actions. Tomorrow’s will be dynamic ecosystems where AI agents continuously learn from user behavior, environmental feedback, and real-world results.
Rather than simply responding to user input, these agents will anticipate needs, test strategies, and optimize for long-term outcomes – be it health, learning, buying against preferences, productivity, or emotional wellbeing.
It’s a shift from service-as-interface to experience-as-interface. The SuperApp becomes less of a dashboard and more of a living system – constantly evolving in tandem with its users, grounded in streams of experience rather than static data. For brands and platforms alike, this means rethinking control, value exchange, and the very notion of “user engagement.” It’s not about serving the user’s current query – it’s about serving their evolving journey.
(I’ll expand more on this thinking in a future post).
Some people characterise me as a "futurist," but what I really do is take observable facts and think deeply about the implications of what comes next.
If you cut through the AI hype, there are three key strands we can reliably build on:
AI drives the economic value of intelligence to near zero.
AI is an enabling technology, and enabling tech brings additional, powerful benefits we can’t foresee today.
Superapps will shift in capability from integration to adaptation.
Here are my strategic recommendations and reflections:
• Stay Grounded – Cut Through the Hype: First and foremost, approach AI with clear eyes. Encourage healthy scepticism of bold vendor promises. Have a solid use-case and success criteria. As the examples of Duolingo and Klarna demonstrate, blindly chasing AI trends can have a detrimental impact on customer experience and brand trust.
• Embrace the Superapp Interface Shift: That said, don’t ignore the broader interface revolution.
Begin adapting for a world where conversational AI may be the front door.
• Invest in AI-Readiness (Data and Infrastructure): Preparing for AI-driven interactions means investing in the plumbing behind the scenes. Audit whether your data is clean, rich, and machine-readable, and develop robust APIs.
• Redefine Your Marketing and SEO for AI: Explore Conversational SEO to ensure your brand is the answer to relevant natural-language queries. Prepare for potential new advertising models (will we see “sponsored suggestion” slots when AI gives answers? Possibly, and that raises ethical questions to navigate).
• Protect and Differentiate with Human Touch: Double down on what makes your brand human, the differentiation points that AI can’t easily copy. Highlight them in your brand story. In design, keep humans in the loop for oversight and special cases. Have a safety net of human judgment behind the AI curtain.
• Ethical AI and Trust: Trust is key to an AI-mediated world. For consumers to hand over decision-making to assistants, they need to trust those assistants deeply. You should align with that trust. Respect privacy – ensure you’re only requesting or using the data required to help them, and secure that data. Avoid exploiting the intimacy factor. Essentially, abide by a golden rule: design AI experiences that you would trust if you were the customer.
• Maintain Strategic Flexibility (Don’t Lock In): Retain the ability to pivot. If you develop your own AI capabilities, even if modest, you have something to fall back on or to integrate with whichever platform wins out. The future might even be multi-agent – different assistants for different domains – so you may need to be present across several. Flexibility is key.
The era of AI superapps will challenge us to rethink how we design experiences and build relationships with customers. Brands that thrive will be those that combine the best of AI efficiency with the irreplaceable human elements of authenticity, creativity, and empathy. Product designers will need to craft interactions that are intuitive and context-aware, often letting the AI take the UI reins, but still ensuring the user feels in control and cared for. Leaders will need to steer their organizations with vision and pragmatism – avoiding both cynicism and blind optimism.
It’s about being provocative but grounded: dare to imagine new business models and engagement strategies that these AI advancements enable, while also guarding against the pitfalls (hype, dependency, loss of human connection).
The takeaway is not that “AI will solve everything” nor that “AI will ruin everything,” but rather that we are entering a new interface era. Just as the smartphone changed consumer behavior in the 2010s, the AI superapp could redefine consumer expectations in the late 2020s. Leaders who internalise this and plan for it will get a head start.
It’s an exciting, uncertain time – but also an opportunity.
Human agency must guide AI agency: if we shape these tools with care, focusing on genuine user benefit, we might find that an overhyped AI revolution becomes a more meaningful evolution – one where technology augments human life in deeply positive ways, and brands find new and more sincere ways to connect with and grow alongside the people they serve.
I’d love to know where you sit on this. Have I missed something? Are there alternative views?
I hope this analysis helps you get better clarity.
These newsletters are free for you to subscribe to, and sharing them helps build my community.
If this has been shared with you, please subscribe for free.