A/B Testing Mistakes We're Sure You're Guilty Of
Jana El-Sokkary

A/B testing is essential: there are plenty of tools to choose from and few good reasons not to do it. However, like everything else, there are common money-wasting mistakes companies make while setting up their tests. Do you plead guilty to any of the mistakes below?

1-No Traffic, No Problem

A working test has requirements, and adequate traffic is one of them. Suppose version B really is performing better than version A: with little traffic, it will likely take months to reach statistical significance.

Time is money, and if you’re smart, it’s wealth. Leaving a test running for five months just to scrape together significance is nothing but a waste of it.
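To put numbers on it: suppose version A converts at 4%, you hope version B lifts that to 4.5%, and you get about 200 visitors a day. A quick power calculation, sketched here in Python with statsmodels (every figure is an assumption for illustration), shows why that’s a months-long wait:

```python
# A minimal sketch of the traffic math behind test duration.
# All numbers below are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.04        # version A's conversion rate (assumed)
target = 0.045         # the lift we hope version B delivers (assumed)
daily_visitors = 200   # total daily traffic, split 50/50 (assumed)

# Cohen's h effect size for comparing two proportions
effect = proportion_effectsize(target, baseline)

# Visitors needed per variant for 80% power at a 5% significance level
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, ratio=1.0
)

days = (2 * n_per_variant) / daily_visitors
print(f"~{n_per_variant:,.0f} visitors per variant, ~{days:.0f} days to run")
# With these assumed numbers, the answer lands around four months.
```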

2-Google Analytics is Out of the Game

Nothing is conclusive until you segment the test data: that’s where the treasure is, because overall averages can hide what individual segments are doing. Send your test data to Google Analytics and slice it by device, traffic source, or new versus returning visitors to find what you’re really looking for.
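To see why averages mislead, here’s a minimal sketch (plain Python/pandas standing in for Google Analytics’ segment reports, with made-up numbers) where two versions look identical overall but behave very differently per device:

```python
# A minimal sketch of averages hiding segment-level differences.
# The visitor and conversion counts are made-up illustrations.
import pandas as pd

data = pd.DataFrame({
    "variant":     ["A", "A", "B", "B"],
    "device":      ["desktop", "mobile", "desktop", "mobile"],
    "visitors":    [2500, 2500, 2500, 2500],
    "conversions": [125, 75, 150, 50],
})
data["cvr"] = data["conversions"] / data["visitors"]

# Overall averages: both variants convert at exactly 4% -- a dead heat
overall = data.groupby("variant")[["visitors", "conversions"]].sum()
print(overall["conversions"] / overall["visitors"])

# Segmented: B wins on desktop (6% vs 5%) and loses on mobile (2% vs 3%)
print(data.pivot(index="device", columns="variant", values="cvr"))
```

The overall numbers call it a tie; the segments reveal a desktop win worth shipping and a mobile regression worth fixing.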

3-One Test is Enough

Rule of thumb: most first tests fail. Nobody wants to say it, but it’s the cold, hard truth, and honestly, that’s how it should be. You run a test, analyze the results to see where you need to improve, build a new customer theory and fresh hypotheses, then run another test, then another, then another. It’s unrealistic to expect your first test to work; count instead on recurring, iterative tests to eventually get where you want to be.

4-You Don’t Have to Test Every Day

A day without testing is a wasted day. Everything is a learning process, and what better way to learn than by testing? You need to be willing to commit fully to that process: collecting data about your audience, learning what works, and spotting what needs to be nipped in the bud.

You need each and every insight to amp up your marketing strategy. Nothing is proven until it’s tested, so test everything whenever you can.

5-Validity Threats Don’t Matter

There are several threats to the validity of your test. An adequate sample size and test duration help reinforce validity, but you need more than that to be certain.

Broken code effect

Bugs are waiting for you, and this one produces flawed test data. When you build a treatment, the first thing to do is a quality assurance pass to ensure it displays correctly in all browsers and on all devices.
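As one way to automate part of that QA pass, here’s a minimal smoke-test sketch using Playwright; the URL, the query parameter for forcing the variant, and the selector are all hypothetical stand-ins for your own setup:

```python
# A minimal cross-browser smoke test: load the treatment in three engines
# and check that the changed element actually renders.
# URL, query parameter, and selector are hypothetical examples.
from playwright.sync_api import sync_playwright

URL = "https://example.com/landing?variant=B"  # hypothetical forced-variant URL
SELECTOR = "#new-hero-banner"                  # hypothetical treatment element

with sync_playwright() as p:
    for engine in (p.chromium, p.firefox, p.webkit):
        browser = engine.launch()
        page = browser.new_page()
        page.goto(URL)
        visible = page.is_visible(SELECTOR)
        print(f"{browser.browser_type.name}: treatment visible = {visible}")
        browser.close()
```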

Instrumentation effect

This one is everyone’s sweetheart: it occurs more often than you think. It’s usually caused by incorrect code implementation on the website, and it will mess up everything downstream. Keep an eagle eye on your tests: are they working as planned? Is everything being recorded? If even one thing is off, pause, fix, and restart.
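One concrete way to catch instrumentation problems early is a sample ratio mismatch (SRM) check: if you asked for a 50/50 split but the recorded counts are far from it, the tracking code is likely broken. A minimal sketch, with assumed counts:

```python
# A minimal sample ratio mismatch (SRM) check using a chi-square test.
# The recorded visitor counts below are assumptions for illustration.
from scipy.stats import chisquare

observed = [5210, 4790]             # visitors recorded per variant (assumed)
expected = [sum(observed) / 2] * 2  # what a true 50/50 split should record

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.01:
    print(f"p = {p_value:.5f}: likely SRM -- pause, fix, and restart")
else:
    print(f"p = {p_value:.5f}: split looks consistent with 50/50")
```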

History effect

This is about how outside events can affect your data. Be aware of how your company is being portrayed in the news, and of any current event that could change how people respond to your pages while the test is running.

Selection effect

This is the mistake of assuming that one portion of your traffic represents all of it, e.g. running a test only on visitors from a promotional email and expecting the winner to hold for everyone else.


No matter how efficient the tool you pick, it can’t prevent these thinking mistakes for you. Unlearn what you think you know about A/B testing and be flexible enough to let growth happen. Never, ever stop optimizing; that’s the magician’s trick.

