(2023-10-22) Cohen Metrics That Cannot Even Be Measured In Retrospect

Jason Cohen: Metrics that cannot even be measured in retrospect. Here are some common examples so you can train your pattern-matching engine, and see how to navigate the conversations.

Impact of a single feature on the revenue of a product

There are features at WP Engine that the sales team pitches because people respond with genuine excitement. But then, after the sale, customers rarely use them.

What if ten other features are also used frequently; does each "earn" 10% of the revenue?

This is why I like using a variety of KPIs, only one of which is “usage.”

Impact of incremental activities on customer churn

The crux: A year from now, will you then know the impact X had on churn? Unless X has an enormous and immediate impact, the answer is no.

If churn is 3%/mo, an initiative that reduces churn by 10%—a big impact!—will result in 2.7%/mo. How hard is it to measure a difference of 0.3%, month over month? How much does churn vary through pure randomness? Probably more than that.
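To put numbers on that, here is a minimal sketch in Python. The customer count (2,000) is hypothetical, and it models each customer's monthly churn as an independent coin flip; the point is that ordinary sampling noise in a single month is already larger than the 0.3-point improvement you are trying to detect.

```python
import math

# Hypothetical numbers: 2,000 customers, 3%/mo baseline churn,
# and an initiative that cuts churn by 10% (to 2.7%/mo).
n_customers = 2000
baseline = 0.03
improved = 0.027

# Month-to-month sampling noise in the observed churn rate,
# treating each customer as an independent binomial trial.
std_err = math.sqrt(baseline * (1 - baseline) / n_customers)

print(f"Improvement to detect:           {baseline - improved:.3%}")  # 0.300%
print(f"1-sigma noise in a month's churn: {std_err:.3%}")             # ~0.381%
```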

Some months have 15% more weekdays than subsequent months; if most customers churn on weekdays, that could make churn vary by 15% for that reason alone. (Using weekly rather than monthly stats helps a lot with this, but there can be big monthly or quarterly or annual cycles as well, depending on your business. At my last gig I often nagged people to graph the same monthly data two ways: a multi-year trend, and a split-by-year 12-month overlay so you could see how much of the variation was "seasonal".)
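A quick way to see the weekday effect, sketched with Python's standard calendar module (the year below is just an example): count the Monday-through-Friday days in each month and compare the spread.

```python
import calendar

def weekdays_in_month(year, month):
    """Count Mon-Fri days in a given month."""
    cal = calendar.Calendar()
    return sum(
        1
        for day, weekday in cal.itermonthdays2(year, month)
        if day != 0 and weekday < 5  # weekday 0-4 = Mon-Fri; day 0 pads other months
    )

# In 2023, February has 20 weekdays and March has 23: a 15% jump.
counts = {month: weekdays_in_month(2023, month) for month in range(1, 13)}
print(counts)
print(f"Max/min ratio: {max(counts.values()) / min(counts.values()):.2f}")
```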

Lag between action and reaction (feedback)

A customer who churns today has probably been unsuccessful for a while.

So, an activity you start today is unlikely to change the trajectory of customers who have already decided to leave, and only today happened to push the red button.

Many causes of a result means it's hard to measure a change in any one cause

There are many reasons why people leave.

So any one action you take is likely incremental.

The exception is when your churn is especially bad; anything over 3%/mo is scary. Then, sometimes it’s possible to make large improvements.

Measuring the effect of small design choices on user experience... Things like color, typography, layout, and word choice definitely matter, but typically noise overwhelms signal when you try to measure them.
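For a sense of the sample sizes involved, here is a rough sketch using the common rule-of-thumb formula for an A/B test (about 80% power at a 5% significance level). The baseline conversion rate and the size of the lift are hypothetical.

```python
# Rough sample-size estimate for detecting a small UX-driven lift,
# using the rule of thumb n ~= 16 * p * (1 - p) / delta^2.
baseline_rate = 0.05   # hypothetical: 5% of visitors convert
lift = 0.002           # hypothetical: a wording tweak worth +0.2 points (4% relative)

n_per_arm = 16 * baseline_rate * (1 - baseline_rate) / lift**2
print(f"Visitors needed per variant: {n_per_arm:,.0f}")  # ~190,000
```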

