In one of my recent articles I made the point that in our field of technology development, we rarely test whether a proposed solution actually adds significantly more value than if you replaced it with random results. Today I want to explore this a bit further from another angle: the product proposition for the person actually using it, also known as “it always fails”.

The human species has this really bad habit of remembering the bad much better than the good. The good fades away very quickly, especially once it becomes a commodity. Take my car: I don’t remember all the times it drove just perfectly, but that one time it was kind of awkward and didn’t behave quite as I expected, that I remember very vividly. This probably also comes from our tendency towards habits and patterns: once something breaks them, we start to notice.

But what does that mean for our products? Something rather horrible, because it means a user’s perception of something can be totally different from the engineer’s. Let’s take my favorite example of recommendation engines. Take this amazing new algorithm that has an accuracy rate of almost 60% on a totally new, random topic. That is amazing, from a software engineering perspective. But what the user sees is “that thing is wrong two out of five times”, or in their experience, “it’s almost never right, I could have guessed that myself”. And they’ll stop using it.
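To put some rough numbers on that perception, here is a minimal sketch. It assumes each recommendation is an independent hit or miss at the 60% accuracy figure from above (a simplification, real recommendations are rarely independent): the user should expect two misses in their first five recommendations, and the odds of actually seeing at least two misses are about two in three.

```python
from math import comb

accuracy = 0.6  # the hypothetical "almost 60%" recommender from above
tries = 5       # a user's first handful of recommendations

def prob_exactly_correct(k: int, n: int, p: float) -> float:
    """Binomial probability of exactly k correct out of n independent tries."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

expected_misses = tries * (1 - accuracy)
at_least_two_misses = sum(
    prob_exactly_correct(k, tries, accuracy) for k in range(tries - 1)
)

print(f"expected misses in {tries} tries: {expected_misses:.1f}")          # 2.0
print(f"chance of 2+ misses in {tries} tries: {at_least_two_misses:.0%}")  # ~66%
```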

If your product is just a little better than chance, people will reject it. Their personal test period is usually far smaller (often fewer than 10 tries) than a scientifically okay-ish number of trials in a lab environment, and they will see anything with an accuracy below 80–90% as something that “doesn’t work”. Seriously, go out there, ask people to use your product, and then ask them how well it works. You’ll see that even if it only broke in one out of 90 cases, they’ll let you know, and they’ll remember.
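And that short test window is brutally unforgiving. A quick back-of-the-envelope sketch (again assuming independent tries, with the 10-try window mentioned above) shows how rarely a user gets through their informal test without hitting a single failure, even at accuracy levels an engineer would be proud of:

```python
# Chance of a completely failure-free first impression across `tries` attempts,
# assuming each attempt succeeds independently with probability `accuracy`.
tries = 10

for accuracy in (0.60, 0.80, 0.90, 0.95, 0.99):
    flawless = accuracy ** tries
    print(f"{accuracy:.0%} accurate -> flawless first impression "
          f"in {flawless:.0%} of cases")

# 60% accurate -> flawless first impression in 1% of cases
# 80% accurate -> flawless first impression in 11% of cases
# 90% accurate -> flawless first impression in 35% of cases
# 95% accurate -> flawless first impression in 60% of cases
# 99% accurate -> flawless first impression in 90% of cases
```

In other words, even a 90% accurate product gives two out of three users a reason to say “it doesn’t work” within their first ten tries, and we already know which of those tries they will remember.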