Testing a Business Model Like a Scientist
How do we decide what actions to take in business? Or in life?
In many cases, we base our actions on our models of the world. We think that things work in a particular way, and this determines the choices we make.
We can make a strong case for making these choices more like a scientist would. Here’s an example.
The Nieman Journalism Lab pulled together a set of opinions on the new pay system that the New York Times has put in place, with thoughts from Steven Brill, Anil Dash and Megan McCarthy among others. Here is one of the points that Steve Buttry makes in his piece:
“My friend and former boss Jim Brady says that you can’t build a business model based on what people should do (and newspaper people believe in their bones that people should pay for their content). You build a business model based on what people will do. This tortured maze of exceptions and trigger points is a laughable effort to collect because people should pay but to find a way not to lose the people who won’t pay.”
This is a great point.
The really good thing about it is that we can actually set these up as hypotheses that can be tested. “People will pay for content” is an idea about which we can collect a fair amount of data.
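As a purely illustrative sketch (the numbers below are invented, not drawn from the Times or anyone else), a hypothesis like “people will pay for content” can be framed as a comparison between two paywall variants and checked with a standard two-proportion z-test:

```python
import math

def two_proportion_z(paid_a, n_a, paid_b, n_b):
    """Two-proportion z-test: did variant B convert better than variant A?"""
    p_a, p_b = paid_a / n_a, paid_b / n_b
    pooled = (paid_a + paid_b) / (n_a + n_b)          # conversion rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical trial: 10,000 readers see a hard paywall (A), 10,000 see a metered one (B).
z = two_proportion_z(paid_a=120, n_a=10_000, paid_b=180, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would reject "both paywalls convert equally" at the 5% level
```

The point is not the statistics, but that “people will pay” stops being a belief and becomes a number you can be wrong about.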
A number of people are starting to argue that we should treat business models as a set of testable hypotheses. Steve Blank outlines how to do this very well in this set of slides:
His thoughts on the broader trends in start-up experience are well worth reading in full, but for our purposes, jump to the case study that starts on slide 72. He uses the Business Model Canvas to discuss the experience of a start-up called OurCrave, an online social shopping platform.
The case shows what the original business model was, and more importantly, how the assumptions underlying it were tested. Then he shows how the business model changed five times in response to data and testing.
This approach doesn’t guarantee success, but it certainly improves the odds. It demonstrates the reality that Peter Sims describes in a recent post:
“The truth is, most entrepreneurs launch their companies without a brilliant idea and proceed to discover one, or if they do start with what they think is a superb idea, they quickly discover that it’s flawed and then rapidly adapt.”
Thinking about your business model like a scientist is a sound approach, provided you keep two cautions in mind.
The first is to beware of false precision. You want to test your assumptions with data, but you can’t always get the data you need. And your plan will change yet again once you launch and people really start interacting with your ideas. When testing numbers like these, you usually want them to be accurate within an order of magnitude, and you want to remember that they are estimates.
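One simple way to keep that humility in the numbers is to carry a best case and a worst case through the arithmetic instead of a single figure. In this invented sketch of a subscription-revenue estimate, every input is a made-up assumption:

```python
# Fermi-style range estimate for annual subscription revenue (all inputs invented).
low, high = {}, {}
low["visitors"], high["visitors"] = 100_000, 1_000_000   # monthly unique visitors
low["convert"], high["convert"] = 0.005, 0.02            # share who ever pay
low["price"], high["price"] = 5, 15                      # dollars per month

def annual_revenue(est):
    """Multiply the assumptions through to a yearly figure."""
    return est["visitors"] * est["convert"] * est["price"] * 12

lo, hi = annual_revenue(low), annual_revenue(high)
print(f"${lo:,.0f} to ${hi:,.0f} per year")  # prints $30,000 to $3,600,000 per year
```

A range that spans two orders of magnitude is an honest answer at this stage; a single number to three decimal places is false precision.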
The second caution is to avoid making models that say things like “social media always works” or “social media never works.” These absolute cases are never true. What we really want to figure out are the circumstances in which an approach works or doesn’t. Your business model may or may not need social media support, or six sigma in your production, or any number of other things.
Scientists are interested in finding the boundary conditions for rules – when do rules stop working? When testing business model hypotheses, you’re trying to figure out what is right in your particular case. So beware of absolute statements about what will or won’t work.
Experimenting is a critical innovation skill. If you can figure out how to experiment with your business model, you will increase your chances of success. Just remember to test it like a scientist.
Tim Kastelle is a Lecturer in Innovation Management in the University of Queensland Business School. He blogs about innovation at the Innovation Leadership Network.