Your main job is to look for signals in the data
Based on these signals, you need to set the direction of the tech and product.
But when you’re new to product development, you can’t yet tell when a signal is strong enough.
You end up over- or under-reacting. In my experience, over-reacting is more common, and reacting to signals that aren’t as strong as you think can be harmful.
Over-reacting makes you build the wrong thing
Over-reacting happens when you make decisions prematurely, often based on incomplete or insufficient data.
When you don’t have enough data, you can’t tell whether what you’re seeing is a signal or just variance.
Some examples of over-reactions
- Treating a single piece of user feedback as evidence of a pattern
- Treating a recurring issue as a sign of a fundamental mistake in the architecture
- Removing or disabling a feature entirely after one instance of abuse
- Interpreting a slight change in numbers as the start of a trend
- Assuming two things are causally related just because they happen at the same time
- Attempting to copy what your immediate competitors are doing
- Doing things that aren’t directly related to the objective
So, you should carefully consider whether a signal is strong enough
One way to counteract this is to look at percentages rather than raw numbers. Seeing that 10 orders were rejected might make you think something is wrong. But if that’s 10 out of 10,000, a 0.1% rejection rate, it’s not a strong signal.
Similarly, if you’re A/B testing something, make sure you use a significance calculator that tells you whether the change in performance is statistically significant before you act on it.
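To illustrate what such a calculator computes, here’s a minimal sketch of a two-proportion z-test in Python. The conversion counts below are hypothetical, and this is only the core calculation; a real test should also account for sample size planning and for peeking at results early.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a, n_a: conversions and visitors for variant A (likewise for B).
    Returns the z statistic and the two-sided p-value.
    """
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Standard normal CDF via erf, so no external dependencies are needed
    cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return z, 2 * (1 - cdf)

# Hypothetical counts: 480 vs 520 conversions out of 10,000 visitors each.
z, p = two_proportion_z_test(480, 10_000, 520, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p ≈ 0.19 here: not significant at 0.05
```

If the p-value is above your threshold (commonly 0.05), the difference you’re seeing could easily be variance rather than a real signal.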
At Supplybunny, we had a rule of thumb that there should be at least three instances of the same piece of feedback before we brought it up for discussion. Then we’d look at the full picture to see how prevalent it really was.