Here’s the situation: you’re building a product, but it never gets done. Or it gets done, but users can’t figure out what to do with it. Maybe you overdesigned it and made it unappealing or overly complex? Or worse, you loaded it up with a nice rich feature set and now can’t figure out why there is no clear user behaviour pattern. It all seemed so simple on paper, right?
Coming from a mobile-production-oriented environment at Martian & Machine, I deal with situations like this fairly often. You’ve probably been in my shoes, or at least experienced it from the other side of the table. It’s the story of bloating simple MVPs.
It usually starts as a simple project. Let’s assume we want to build a mobile app to test out an idea. We’ve done some research and think it clearly has a chance on the market, so before getting into a full-featured product, we want to prove that. And since we’re restricted by budget and timeframe, we want to ship the basics and figure out whether the concept stands a chance. Now we’re walking into MVP territory!
For starters, an MVP (Minimum Viable Product) is a relatively fresh term, and although it is widespread across the startup community, not everyone is familiar with it. The underlying practice has been used for decades in various industries, just under different names. Depending on the industry, we could also talk about focus groups, prototypes or paper models.
We start designing a simple UI and think of the core elements the app could not function without. Without wasting much time on planning or on how it might scale later, the design gets approved and is ready for development. At this point the goal seems pretty clear and within reach: turn the design into a working product (a mobile app) and ship it for testing as soon as possible. Since it is relatively easy to build (most of the time), it can be tested pretty fast.
Here’s where the problems kick in. Since it’s just an MVP, call it a prototype, demo or proof of concept, it serves the sole purpose of testing. It’s not there to impress anyone with a slick UI, frictionless transitions and a long feature list. It’s just that one simple, yet crucially important, feature (or small set of features) that needs to be tested. Nothing else.
Still, under the spell of ‘let’s get this tiny sub-feature in there’, we end up with a lot more than we originally planned, and often a lot later too. The truth is, it’s never just one additional feature, and it’s never finished.
Additional sub-features that aren’t really essential to an MVP are, most of the time, distractions. To get them into the product, we need to redo the whole process from the beginning. That means designers think about how to incorporate them, then hand assets and flowcharts over to developers. Once built, they need to be tested and polished up. It might sound like a small compromise, but wait until the list expands to a dozen tiny ‘just one more’ tasks.
To be honest, we have all made that mistake at some point. I’m no exception. The thing is, with enough ideas and willpower, you will always end up in product improvement cycles instead of shipping the thing out.
First of all, let’s ask another question. Why are improvements needed? Is it because the user had trouble using the main feature? Or do we think the user will be so blown away that they’ll use it all day, and that we need to give them a dozen more options to stay entertained for a long, long time? Quite the opposite.
What’s sure is — if you didn’t test it, don’t improve it.
To improve something means that it did function, but not seamlessly. It means we gained insight into how to remove friction, reduce steps, or even completely change the process. Sometimes killing a feature and replacing it with another one is an improvement. It’s just a matter of perspective.
Chances are, you learned this the hard way and shipped a product that had very much to say, but no one listened to it. Here’s the good thing about MVPs: they are intended to be ruined or pivoted. It’s an easy game. A minimal feature set, combined with easy-to-read metrics, leads to simple insight. The ultimate goal at the end of the day is to learn how users used your product and whether they liked it. If the results are positive, you may rethink how to improve the product, but based on facts and user behaviour, not your personal opinion.
The simple trick is to build simple products. As much as we all love choices, having too many of them will not only ruin the experience, but also make your data harder to read.
Imagine the user needs to perform a simple task: turning a radio app on. There’s just an ON button and that’s it, the music plays. Such data would be easy to read. Either the user loved the experience and turned it on, or not. Either way, they went through the process exactly as we intended and left some data points behind.
What we don’t want to do is make the user think, for example by putting choices in front of the main feature. Choices like tone balance, illumination or band selection will only make your data harder to read. Whether the user got confused by an option at some point or simply did not want to listen to the radio fades into noise. Keeping it simple and reducing the number of events we measure helps us identify whether the product is being used in the intended way.
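To make the point concrete, here is a minimal sketch of what ‘reducing the number of events we measure’ could look like in code. The tracker and event names are hypothetical, purely for illustration; a real app would forward such events to an analytics backend rather than count them in memory.

```python
from collections import Counter

class EventTracker:
    """Collects named events so usage patterns stay easy to read."""

    def __init__(self):
        self.counts = Counter()

    def track(self, event: str) -> None:
        # One core action means one event; the data answers one question.
        self.counts[event] += 1

    def report(self) -> dict:
        return dict(self.counts)

# With a single ON button, the report tells us exactly one thing:
# did users turn the radio on, or not?
tracker = EventTracker()
tracker.track("radio_on")
tracker.track("radio_on")
tracker.track("radio_off")
print(tracker.report())  # → {'radio_on': 2, 'radio_off': 1}
```

Had we also tracked tone balance, illumination and band selection, the same report would be a dozen event names deep, and the signal we actually care about would drown in it.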
Prepare yourself to hit some bumps along the way, and remember that the beauty of the process is figuring out what needs to be improved and what needs to be removed. And even if the whole idea failed, at least you got away with just a scratch and are ready to pivot again.
The key is to keep up momentum and be ready to constantly assess, iterate and evolve, even if it means turning the project upside-down. Because, in the end, it’s really not about how many times you’ve redone it — it’s about making it happen.
You can also find this article on Medium.