(2019-04-11) Procopio Why You Should Build Every New Product Feature Like An MVP

Joe Procopio: Why You Should Build Every New Product Feature Like an MVP. Making a mistake launching a new product feature is costly, and I’ve done it at least a dozen times. Never again.

We’re all familiar with the Minimum Viable Product (MVP) strategy. It mandates that we “fake” components of a new product by making many of its processes manual at first release.

The answer is a strategy of repeating the MVP process with every new feature, even down to every new version.
How do we do that?

We can soft launch or A/B test by singling out a small segment of our customer population, turning the feature on for them, and either following the data or contacting them directly to see how they respond.

This testing is done for a couple of reasons. In the case of a soft launch, we want to avoid disaster: making mistakes in front of a small audience is preferable to making them in front of everyone at full launch. When we A/B test, we’re trying to choose between options to gain greater user acceptance: do they like it better this way or that way?
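
To make that concrete, here is a minimal sketch of segment-based gating in Python. The feature name, rollout percentage, and hashing scheme are illustrative assumptions, not anything prescribed by the article:

```python
import hashlib

def bucket(user_id: str, feature: str, buckets: int = 100) -> int:
    """Deterministically map a user to a stable bucket for this feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def is_enabled(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Soft launch: turn the feature on for a fixed slice of the population."""
    return bucket(user_id, feature) < rollout_pct

def variant(user_id: str, feature: str) -> str:
    """A/B test: split the enabled population into two stable groups."""
    return "A" if bucket(user_id, feature + ":ab") % 2 == 0 else "B"

# e.g. enable the feature for 5% of users, then compare variant behavior
if is_enabled("user-123", "new-checkout", rollout_pct=5):
    print(variant("user-123", "new-checkout"))
```

Hashing the user ID keeps each person's experience stable across sessions, so users in the test segment don't flicker between variants.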

Either way, we may also be able to draw the same kinds of conclusions we might find with an MVP:

  • The new feature isn’t valid.
  • The new feature breaks under certain stress.
  • The new feature is being used in a way we weren’t expecting.

With a finished feature, this can be an expensive discovery. So why not use a Minimum Viable Feature (MVF)?

Every feature we consider should have three stages of evolution. Three is a solid number (two will work), but going beyond three probably wastes time and may annoy our customers.

Building with a Minimum Viable Feature strategy looks a lot like building with an MVP strategy. We’ll be replacing some of the automation with small tech: the least intrusive, not-as-robust, easiest-to-integrate tech we can find. It should also be cheap, if not free.

At my current startup, we have a small tech fallback network around existing parts of the business. We have an in-house support team that uses voice, text, chat, email, Slack, and even some proprietary messaging within our software to do their job. We ride this fake network when we MVF new features.
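
For illustration, here is a minimal sketch of riding that kind of fallback network, assuming a Slack incoming webhook as the channel; the URL and message format are hypothetical placeholders:

```python
import json
import urllib.request

# Placeholder: a real team would use its own Slack incoming-webhook URL.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def fulfill_manually(user_id: str, request: dict) -> None:
    """Route a new-feature request to the support team instead of automation."""
    payload = {"text": f"MVF request from {user_id}: {json.dumps(request)}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # a human sees the message and does the work
```

The point is that the feature works end to end for the customer while a person, not code, does the fulfillment behind the scenes.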

Once we prove the feature concepts are viable, we can spend the time and money to build the proper tech to replace the small tech. If the feature concepts aren’t viable, we lose next to nothing. But what we’re really hoping to gain is knowledge about the latter two MVP concepts: Where does the feature break and how is it being used/misused?

When is a feature not even a feature? Some features are small enough, or non-critical enough, that they may simply be left as a prototype. One question I always ask about feature ideas and requests: will a fully-formed feature add to our intellectual property, or is it just a band-aid? If it’s the latter, it might never get polished.

This is especially true for internal features, like when we need “a technical way to do X.” We usually don’t need a ton of tech to solve it, so we use small tech to see whether we’re actually solving the root cause of the problem or whether the problem pops up elsewhere. If the problem gets solved and turns out to be infrequent or non-critical, the need for a full technical solution usually dies down once the small tech is in place.

The most important aspect we can carry over from an MVP into an MVF is measurement. The same data capture mechanisms, feedback loops, and kill switches should be in place when we bring the new feature to customers. We should be continually listening and adjusting to the patterns we see. This allows us to measure more than twice, and build once.
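
As a sketch of what that measurement plumbing might look like, assuming in-memory stores as stand-ins for a real event pipeline and config service:

```python
from collections import Counter

killed: set[str] = set()    # features an operator has switched off
usage: Counter = Counter()  # stand-in for a real analytics pipeline

def kill(feature: str) -> None:
    """Flip the kill switch: every subsequent request falls back immediately."""
    killed.add(feature)

def record(feature: str, event: str) -> None:
    """Capture how the feature is actually being used (or misused)."""
    usage[(feature, event)] += 1

def serve(feature: str) -> str:
    """Serve the new behavior only while the feature is alive, measuring both paths."""
    if feature in killed:
        record(feature, "fallback")
        return "old behavior"
    record(feature, "served")
    return "new behavior"
```

Because the kill switch is data rather than code, a misbehaving feature can be shut off the moment the numbers turn bad, without waiting on a deploy.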

