The Weekender #38
Trust | AI and new value | Natural selection for AI? | Altman vs Musk
Good Sunday! It's May 3rd. Welcome back to the Weekender - a dive into Entanglement with a compilation of related, interesting reads.
Welcome.
We like to pretend that technology fails because it isn’t “ready” yet.
But often it’s us who aren’t ready.
Take the elevator. The technology was there long before our behaviour was.
Automatic elevators existed for decades, but we didn’t want to step into a metal box without a human in charge. It took operator strikes, safety campaigns, and design tweaks - doors that closed gently, buttons you could see and press yourself - before we felt safe enough to ride.
The “innovation” isn’t just the machinery; it’s the choreography of trust wrapped around it.
We can see the same pattern with other technologies.
Early online payments were available from the 1990s, but many of us refused to type our card numbers into the internet. We worried about fraud, invisible intermediaries, and having no one to call if our money disappeared. Even today, contactless payments and digital wallets took off only after banks added visible safeguards, instant notifications, and “you’re protected” guarantees. The internet’s pipes were ready long before our trust was.
Or think about driverless features in cars. Automatic braking, lane‑keeping, adaptive cruise control. All impressive, all underused. Most of us leave them off or half‑on, because we’re not sure what the car will do in an edge case. That little flicker of doubt - will it brake in time? will it see the cyclist? - keeps us from surrendering control, even when the system is statistically safer. Again, the adoption curve is less about capability and more about felt safety.
Now layer AI into this. The humble customer‑service chatbot on a retail site arrives with big promises: instant answers, 24/7 support, fewer emails to the call centre. For the brand, it’s efficiency and “innovation” - often BS, of course. For the customer, it could mean less time on hold. But if the bot gives one wrong shipping update, refuses to connect you to a human, or can’t explain why it made a decision, trust evaporates. We go straight back to email or phone, and we remember the feeling of being stonewalled by a system designed to deflect us, not help us. The technology works; what’s missing is the sense that someone will actually be accountable on the other side.
This is why brands have a structural problem.
For years, too much marketing has treated trust as something you say. Purpose statements, glossy ESG reports, green leaves in the logo. We’ve learned, painfully, that “sustainable,” “ethical,” and “AI for good” can mean almost nothing in practice.
Greenwashing and hollow promises haven’t just damaged individual companies; they’ve cheapened the language we use to talk about trust at all.
With AI, that playbook won’t work. Trust has to be visible in how the system behaves: what it shows us, what it hides, how easy it is to say “no.”
It has to be felt in the interface: clear controls, explanations that make sense, and an obvious way out when we’re uncomfortable. And it has to be humanised: real people, real accountability, not just a chatbot avatar and a 48‑page terms‑and‑conditions link.
The elevator age needed steel cables, safety brakes, and an operator we could look in the eye. The agent age needs something similar: transparent systems, honest limits, and humans we can still reach when things get weird.
Footnote: Waymo’s autonomous vehicles are twelve times safer than human-driven cars. Yet the statistic doesn’t convince most people.
People need more than numbers.
Waymo is gaining acceptance by building what elevators eventually also built: a multilayered, visible trust structure. Not better technology alone, but trust made visible and verifiable.
To borrow words from Rory Sutherland, “If you have a great product but nobody trusts you, you don’t have a great product.”
So here’s a Weekender question: if you had your own personal AI agent running quietly in the background of your life, what would it have to prove to you - over and over again - to earn your trust, not just your curiosity?
1. AI Will Not Kill Work. It Will Move the Bottleneck.
Artificial Intelligence will not eliminate the need for human labour; instead, it will shift economic value toward new areas of scarcity.
As automation makes digital and physical production cheap and abundant, the author below suggests that scarcity moves from basic goods to human-centric qualities like trust, status, and social meaning.
While the immediate financial rewards belong to those who can automate workflows, long-term value will reside in the relational sector where human presence cannot be replicated. Ultimately, he suggests that economies are tracking systems for scarcity rather than mere job creators, meaning human roles will evolve to focus on subjective desires and unique experiences.
My takeaway: Perspective is important. He rightly reframes the AI revolution as a reallocation of bottlenecks rather than a total replacement of the workforce.
Read the full article by Anton Biletskiy-Volokh | no paywall
2. Can AI systems undergo our natural evolution process?
Some researchers believe that natural selection, the most powerful process driving change in the living world, can and will shape artificial intelligence (AI), much as life evolved in major steps from RNA to DNA and onwards into greater and greater complexity.
For the most potent technology humanity has invented to date, we might be about to find out.
According to a new paper published in Proceedings of the National Academy of Sciences, we are entering the era of “evolvable AI” where AI systems can undergo evolution. In turn, that might give rise to a major transition in evolution.
How major is “major”? Well, in nearly 4 billion years there have only been eight, or perhaps only seven, other major transitions. These experts think we’re in another step change that we will not and cannot stop, whether for good or ill.
3. Sam Altman and Elon Musk Sure Dislike Each Other
Atlantic Intelligence link for the full article | paywall
This article is from Atlantic Intelligence, but behind a paywall. Follow the link above for the full article if you are a paying subscriber to The Atlantic. Otherwise, with their permission, I’m summarising the main points here, because what’s happening is important.
## My Highlights
Musk is suing Altman and OpenAI, among others, demanding legal and financial remedies that would effectively destroy OpenAI as we know it.
In 2015, Musk partnered with Altman to create OpenAI out of concern, as they tell it, that Google DeepMind could not be trusted to create artificial general intelligence. Corporate greed would get in the way of societal progress, they claimed, so OpenAI would be a nonprofit.
Musk left in 2018 before OpenAI added a for-profit entity, and before ChatGPT became the fastest-growing consumer app in history.
In 2024, Musk sued, alleging that by putting profits above its founding mission, OpenAI had violated its founding charter and misused Musk’s initial charitable donations.
“It’s very simple,” Musk testified in court. “It’s not okay to steal a charity.”
Named in his complaint are the OpenAI co-founder Greg Brockman and Microsoft, a major investor in the company.
Musk is asking that Altman be removed from OpenAI’s board, that the company convert back to a nonprofit, and that allegedly “ill-gotten gains” (some $150 billion) be returned to OpenAI’s charitable trust.
The conflict between Musk and Altman has directly shaped the course of the AI industry. It’s the AI boom’s founding feud. The next few weeks of the trial will illuminate tensions about the development of AI that have grown only more urgent - between profit and social good, and over who can be trusted with this technology.
Now we are all living in the fallout of Musk and Altman’s vendetta.
Disagreements over the direction of Google DeepMind led to the creation of OpenAI. More disagreements led Musk to found xAI. A few years ago, Dario Amodei and other OpenAI employees split off to form a competing AI company, Anthropic, themselves trusting neither OpenAI’s structure nor its leadership to prioritise the benefit of humanity over financial gain.
And then there’s Zuckerberg, whom Musk asked about joining forces to purchase OpenAI in 2025 (apparently). Zuckerberg has since spent tens (or even hundreds) of billions overhauling the AI team at Meta in a bid to catch up in the AI race.
The very sort of AI schism that started with Musk and Altman keeps recurring.
A more cynical description of this dynamic is that the AI boom is shaped by a very small group of men, nearly all of whom claim to be the best steward of humanity while being largely dismissive of their competition.
OpenAI is Altman’s fiefdom. Anthropic is Amodei’s, and xAI is Musk’s. Grok has at times echoed Musk’s political views, mimicking his social-media posts.
Both sides have made an issue of the concentration of power; the argument that no one company or person should control such a transformative technology is central to both their cases. “If you have someone that’s not trustworthy in charge of AI,” Musk testified, “I think that’s very dangerous to the whole world.”
The defense says that what Musk is asking for isn’t consistent with OpenAI’s core mission.
An irony lost on everyone.
This trial will offer the clearest glimpse yet into an elite circle whose bickering is shaping the most expensive infrastructure buildout in human history, in the name of a technology that could upend work as we know it, end education as we know it, and reshape the geopolitical order as we know it.
- Original Author: Matteo Wong
I’m Michael Cooper, and I think about the entanglement between our fluid identities, our use of personal agents and AI, our engagement with Intelligent Interfaces, and our participation in culture - our entangled selves - and the impact on our behaviours, our decisions, our autonomy, consumer empowerment, and brand engagement.
If you love this newsletter and want more:
My private work, where my team and I share our original research with brands, strategists, and futurists, through provocative conversations.
My LinkedIn, where I post my ideas before they turn into essays.
My public speaking, whether chairing international conferences or at in-house company events, when I’m allowed to ‘think aloud’ about my ideas to audiences curious about the entanglement of our humanity, culture, strategy, tech, brands and the future.
Ready for more?
Subscribe


