Hi Shawn, and thanks for laying all this out. Let me try to respond point by point.
On the triangle metaphor: you’re right, plateau isn’t a “force.” I use it more as a storyline: some people expect endless growth, others expect a stall.
The tension is between those competing expectations, which end up shaping investment and policy.
On the experts: fair point again. There were experts like Marcus and Chomsky who called out the limits early, and their voices got drowned out by louder hype. I don’t mean to excuse that; just to say the uncertainty left space for very different readings, some of them badly overstated.
On emergence: I’m not talking about something mystical. I just mean that as models scaled, they started doing things their designers didn’t expect. Whether we call that “emergence” or “capability surprise,” isn’t it still something leaders have to reckon with?
On human flourishing: Yes, it’s vague. I use it as shorthand for making sure the future bends toward lives people want to live. And when I talk about hyper-individualism, it’s a description of consumer behaviour, not a prescription for what’s necessarily good.
In short, I think we agree that what matters is being precise in language and careful not to slide into either hype or dismissal.
Regards,
Michael
PS: And no, I don’t write using LLMs. I research using them.
Growth is never in tension with “plateau.” Plateau is the negation of growth; it’s not a thing that interacts with growth. Besides, neither of those is a real term: they refer to nothing except expectations. And expectations aren’t meaningful things in tension with AI (something that exists outside our expectations, as an object).
Edit: > It doesn’t mean experts are lying or clueless.
That’s not fair at all. Gary Marcus has built a whole career on the limits of scaling when it’s just pattern matching, and he got trashed by the whole AI community for it. A ton of linguists, including Chomsky, put out an NYT op-ed explaining this. Tons of people were very clear about this, and EA went down a while ago. In spite of that, we got front-page brow-beating and fear-mongering from experts and expert-supporters about how only a few licensed corporations should be allowed to build this, and how the world might end by 2024. It also spawned a whole genre of Mission Impossible nonsense about AI taking over the world. Extremely irresponsible stuff, and the limits weren’t hard to grasp intuitively, given we all have autocorrect. It was all a bit too irresponsible to say they weren’t clueless, if they weren’t being malicious.
Edit: In line with that, there is no such thing as emergence. We don’t even have the epistemic ability to grasp ontological emergence. The LLMs borrow our semantics by trying to match the syntax of sentences we create. None of those experts have apologized, and that really kills this leaning-on-experts-for-advice thing.
Edit: > The task is not to forecast exactly when or how AGI will emerge, but to cultivate the conditions where whatever emerges bends toward human flourishing.
This is the fantastical part. Humans do not know what human flourishing is. You, yourself, used AI to write this article (and probably to think for you). You specifically said you prefer hyper-individualism, which doesn’t sound like it promotes human flourishing whatsoever.
Thanks for this post, Michael. I found it very useful and accessible as someone who isn’t a technical expert.