(2025-05-07) Chin A Demand-Side Mystery

Cedric Chin: A Demand-Side Mystery. This week's Commoncog piece is free; next week's piece, Vanguard as a Demand-Side Mystery, will be members-only. This is Part 3 of the Understanding Customer Demand series. At the end of Part 2, after we walked through the Jobs To Be Done framework, we explored the setup for a mystery together. The two frameworks (Sales Safari and JTBD) that we’ve explored together actually rest on a common premise: both frameworks assume that demand is a function of pain — a customer only becomes a customer when they experience some lack in their lives, something that they want to make progress on.

But if you pause for a moment, you’ll realise that this framing is simply … incomplete. There are plenty of things in the world that have real demand yet do not fall neatly into the ‘pain and progress’ category. Facebook, Instagram, and TikTok, for instance, all fall into this group.

The basic premise that pain is what matters when looking at demand is why we have bromides like “sell painkillers, not vitamins”, ignoring the fact that over-the-counter vitamins outprice and outsell over-the-counter painkillers by 2.5x.

  • sales per SKU? profit per SKU?

‘demand is solving customer pain’ is ultimately an incomplete theory of the world.

Then, I present the mystery of Vanguard and the creation of the first index fund. Vanguard is a particularly good case because demand for index funds was non-existent when they first started. It took 15 years for the growth to become significant. In the meantime, Vanguard had the entire field pretty much to themselves.

A theory of demand that works for early-stage startups and early-stage products must be able to explain Vanguard.

If you cannot explain Vanguard's success, then you're basically admitting that new product creation is a total crapshoot (as I have argued in the past). ((2024-08-28) Chin The Idea Maze Is A Useless Idea)

other notes/links

Cognitive Systems Engineering as a Worthwhile Field to Steal Ideas From (for AI) — Perhaps unsurprising, if you know how much overlap there is between Cognitive Systems Engineering and Naturalistic Decision Making. But the tl;dr is that the armed forces have sunk at least three decades of research into building human-AI 'joint cognitive systems', and we can probably just steal these ideas instead of working it all out from scratch.

Large AI models are cultural and social technologies (sociotechnical) — justin shares a helpful paper arguing that LLMs are best understood as "a new kind of cultural and social technology", not just intelligent agents, which opens up new avenues for exploring their impact.

