The Weekender #34
The Algorithm | What you bring to AI determines what you get back | Tech ghost towns | Your algorithmic self
Welcome back to The Age of Entanglement
Happy Weekend!
Welcome.
I don’t ask for payment to read my writing.
I don’t even insist on a free subscription; all my articles are available on my Substack website for free.
But writing something worth reading takes time and effort, so please consider helping me grow my audience by taking two minutes to share this newsletter with a couple of friends who might become regular readers.
The Algorithm
We didn’t just wake up in a fractured digital reality. We were nudged there, one “like” at a time.
In 2009, Facebook introduced the button that promised to make appreciation “easy.”
In reality, it was the “patient zero” of the modern attention economy.
That tiny thumb killed the digital scrapbook and birthed an algorithm-driven machine that stopped showing us what was new and started showing us what was “sticky.”
The Death of the Network
This shift paved the way for the era of the “content creator” and the “influencer.”
We watched our feeds mutate from pics of family and friends into a relentless stream of brands, celebrities, and outrage. It marked the end of social networks and the birth of social media.
Twitter and Instagram followed suit, and TikTok perfected the trap with its “For You” feed, which some consider the most aggressively optimised system for user engagement ever built - a digital slot machine designed to keep you scrolling at any cost.
The Body Count of “Engagement”
The consequences have moved beyond mere distraction; they can be devastating:
Systemic Violence: In 2018, UN investigators found that Facebook’s algorithm acted as a “beast,” fueling ethnic cleansing against the Rohingya in Myanmar by amplifying hate speech.
The Radicalisation Pipeline: A leaked internal study from 2016 showed that 64% of people who joined extremist groups on Facebook did so because the algorithm recommended those groups to them.
The Youth Crisis: Amnesty International’s “Dragged into the Rabbit Hole” report revealed that TikTok can plunge a 13-year-old into a “toxic cycle” of depression-themed content in just five minutes. Within 45 minutes, they are served suicide-related content.
Australia has pulled the trigger on a total social media ban for under-16s, and Europe isn’t far behind. We are finally seeing a push for a “tech reset” to outlaw these exploitative business models.
The Algorithm Paradox
A former Facebook CTO said that while users claimed to hate the algorithmic shift, they “demonstrated that they did [want it] by every conceivable metric.”
The defence that “this is what people want” is no defence at all. It’s handwringing in the face of knowingly harming users.
Optimising for a dopamine hit isn’t the same as fulfilling a human need. It’s simply exploiting a biological vulnerability.
From Passive Consumption to Intelligent Interfaces
This is where my writing on Intelligent Interfaces comes into play. The current algorithmic model is a “dumb” optimiser - it treats us as passive vessels to be filled with whatever content keeps us from closing the app. It optimises for time spent, not value gained.
An Intelligent Interface, in contrast, is built on the principle of human agency. Instead of an invisible “beast” deciding what you should see to keep you clicking, an Intelligent Interface acts as a sophisticated partner that understands your intent. It doesn’t trap you in a rabbit hole; it provides a ladder.
We need to transition away from interfaces that manipulate our subconscious and toward systems that respect our cognitive sovereignty.
I’m not arguing we should abolish the tech, but that we should replace predatory algorithms with interfaces that serve us. It’s high time to prioritise connection over content and intent over engagement.
And high time brands act on this.
What you bring to AI determines what you get back
If you bring language - half-formed sentences, vague direction, the shape of a thought you haven’t finished having - AI will complete it fluently.
It will sound like thinking. It will feel like thinking. The output will be clean and confident and completely downstream of whatever you handed it.
If you bring cognition - actual prior thought, a position that arrived through genuine engagement, something you worked out before you opened the interface - AI will surface it. The words won’t be yours. The thinking will be.
That distinction is everything. And almost nobody is making it.
The question isn’t whether AI helps you think.
The question is whether you thought first.
With thanks to Human Wayfinder on Substack
Every generation of tech builds its own ghost town.
Meta just had the budget to landscape theirs.
Horizon Worlds was supposed to be Meta’s bright monument to the future: a virtual social space humming with global connection, avatars gliding through infinite digital frontiers.
In practice, it now feels more like stumbling into a provincial nightclub on a Monday afternoon where no one’s remembered to show up. The few avatars left inside deliver their ritual assurances: “It’s not dead!” one insists. “It’s great!” cries another, in the same voice people use when defending their haircut.
Their loyalty is touching, if tragic. A triumph of optimism over arithmetic.
Meta’s chief technologist recently turned up on Instagram to correct the record, after users said they were “heartbroken” about a shutdown. “VR is not dead. We’re continuing to invest tremendously,” he said, as Meta quickly reversed plans to remove Horizon Worlds from Quest headsets, keeping it running for the foreseeable future to support existing games.
Reality Labs, the division behind Quest and those Ray-Ban AI glasses, has lost nearly $80 billion since 2020 in metaverse and VR development.
The company’s faith borders on ecclesiastical. Every loss is repackaged as proof of long-term conviction, every retreat framed as strategic clarity, and every scaled-back ambition presented as a step toward destiny. Announcements now sound less like product updates and more like sermons for a congregation that stopped attending but keeps receiving the newsletter.
Meanwhile, we’ve all drifted off to livelier gatherings: crypto’s fevered masquerade, Web3’s speculative estate sale, and now AI’s glossy gala, where everyone’s required to attend, though no one is entirely sure who’s in charge or what the surprise is.
To be fair, overpromised futures aren’t new. Google Glass arrived as the next epoch of computing and left amid a furore over surveillance and social awkwardness.
Magic Leap raised astonishing sums on the promise of holographic transcendence, only to retreat when the hardware met reality.
Clubhouse briefly convinced the world that scheduled conference calls were a lifestyle, then discovered the remarkable fact that novelty is not the same as habit. The Metaverse is not an exception here; it’s merely the most expensive entry in a crowded genre.
Facebook’s rebranding as Meta was sold as a decisive leap toward that genre’s masterpiece, a renunciation of the messy old social network in favour of a unified virtual destiny.
Beneath the theatre, there was a faintly sensible idea: that digital and physical life might be stitched together quietly rather than replaced wholesale. Tiny earphones that layer context over your day, glasses that surface useful information without demanding a pilgrimage into a cartoon universe — these are the modest, almost boring versions of the Metaverse that might actually have worked. I guess Meta’s error was not that it dreamed too small, but that it insisted on transcendence when a bit of subtle augmentation would have been entirely sufficient.
The obituaries for the Metaverse are deservedly gleeful, but they risk flattening that nuance. When we bury Horizon Worlds and its wobbling avatars, we may also bury some of the more reasonable ambitions that came attached: less spectacle, more utility; fewer sermons, more tools. Tech has a habit of abandoning good ideas because they turned up at the wrong party.
For now, Horizon Worlds remains technically alive, preserved for “the foreseeable future” to support existing fans. So it hasn’t died, exactly; it’s simply staging a painfully long farewell — a bit like a faded crooner insisting on one last encore to a room that has already emptied.
Can your algorithmic self be more consistent than you?
Yes, an algorithmic you can be more consistent than the actual you it represents.
Intelligent interfaces work to stabilise a version of you by continuously compiling a profile of your tastes, fears, and needs. Because of this aggregation, this machine-built profile can often appear more consistent within the digital system than the fluid, shifting identity you actually feel in your daily life.
While your actual identity is sometimes unsteady, constantly shifting across overlapping selves and temporary behavioural modes, the algorithmic self flattens this complexity into a stable pattern. The subtle danger of this artificial consistency is that when an interface confidently defines your state - declaring that you are “recovered,” “stressed,” or “productive” - that machine-generated label often carries more weight than your own internal felt experience.
My new article on The Human Side of Entanglement was published midweek.
Thank you, Michael.