Every A/B or multivariate test has three possible outcomes: the treatment (variation) beats the control, both have the same conversion rate, or the treatment loses. Whatever the outcome, there is always a valuable lesson in the result, but only if you consistently engage in post-test analysis.
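Telling which of the three outcomes you actually have usually starts with a significance check on the two conversion rates. Below is a minimal sketch of a two-proportion z-test using only Python's standard library; the visitor and conversion counts are hypothetical placeholders, and the 0.05 threshold is just the conventional choice.

```python
from math import sqrt, erf

# Hypothetical results exported from your testing tool.
control_visitors, control_conversions = 10_000, 310
treatment_visitors, treatment_conversions = 10_000, 352

p_control = control_conversions / control_visitors
p_treatment = treatment_conversions / treatment_visitors

# Pooled two-proportion z-test.
p_pooled = (control_conversions + treatment_conversions) / (control_visitors + treatment_visitors)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_visitors + 1 / treatment_visitors))
z = (p_treatment - p_control) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

if p_value >= 0.05:
    outcome = "no detectable difference"
elif p_treatment > p_control:
    outcome = "treatment wins"
else:
    outcome = "treatment loses"

print(f"control {p_control:.2%}, treatment {p_treatment:.2%}, p-value {p_value:.3f} -> {outcome}")
```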

Post-test analysis closes the conversion optimization loop and points the way to further efforts. It helps you decide what to do with each possible outcome. Unfortunately, many optimizers skip it because they only run tests to see whether the treatment wins or loses against the control. Little do they know that implementing a winning treatment is not always a good optimization move.

In this article, we will examine post-test result analysis and how to perform it. This post is the last in our eight-article conversion rate optimization series, so read the earlier articles first to get a comprehensive picture of the subject. Now, let's look at how to analyze your test results.

What to do with your A/B test results

  • When the treatment has a higher conversion rate than the control

In a split test where the treatment (the proposed variation) outperforms the control (the original), your first step is to determine the cost of implementing the treatment. Sometimes, once you evaluate that cost, you will realize it is too high for the size of the improvement.

Therefore, before you proceed with the upgrade, meet with your IT and conversion teams. You can also have your marketing and finance teams weigh the financial implications of the page upgrade against the expected gain. If it is worth it and looks good on your financial statements, go for it.
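To make that financial conversation concrete, a rough back-of-the-envelope calculation like the sketch below is often enough. Every figure here is a hypothetical placeholder; substitute your own traffic, order value, and implementation quote.

```python
# Hypothetical figures: adjust to your own traffic, order value, and dev quote.
monthly_visitors = 40_000
control_rate = 0.031           # control conversion rate from the test
treatment_rate = 0.034         # treatment conversion rate from the test
avg_order_value = 55.0         # average revenue per conversion
implementation_cost = 6_000.0  # one-off quote from the IT team

extra_conversions = monthly_visitors * (treatment_rate - control_rate)
extra_revenue = extra_conversions * avg_order_value
months_to_break_even = implementation_cost / extra_revenue

print(f"Extra conversions per month: {extra_conversions:.0f}")
print(f"Extra revenue per month: ${extra_revenue:,.0f}")
print(f"Months to recover the build cost: {months_to_break_even:.1f}")
```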

If not, rework the hypothesis. Approaching the problem from a different angle could lead to a cheaper solution. You should also check whether the hypothesis can be refined to create an impact large enough to justify the implementation cost.

Testing should not stop for any reason. These shortcomings and disappointments should feed back into further research and optimization.

  • When the treatment and the control have the same conversion rate

In this situation, many optimizers are quick to conclude that there is no difference and move straight to the next test. Staying committed to your testing calendar is excellent, but do not be too eager to dismiss results that do not go your way. They could hold the insights that lead to better hypotheses.

The best solution is to try other methods of achieving the goal and formulate new hypotheses. For instance, say your qualitative analysis raised concerns about your landing page form. If labelling the required fields did not significantly move the conversion rate, you can try reducing the number of fields instead. A multivariate test could come in handy here.
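If you are unsure what that multivariate setup looks like in practice, here is a small sketch of how the two form ideas could be crossed into four combinations and assigned deterministically per visitor. The factor names and values are purely illustrative assumptions, not the API of any particular testing tool.

```python
import random

# Hypothetical factors for the landing page form.
field_counts = [3, 5]            # number of form fields shown
mark_required = [True, False]    # whether required fields are labelled

# A multivariate test crosses the factors into every combination.
variants = [(fields, marked) for fields in field_counts for marked in mark_required]

def assign_variant(visitor_id: str) -> tuple:
    """Deterministic assignment so a visitor always sees the same combination."""
    return random.Random(visitor_id).choice(variants)

print(assign_variant("visitor-123"))  # e.g. (5, True)
```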

Meanwhile, the treatment and the control may still differ on key performance metrics even with roughly equal overall conversion rates. The treatment may convert more returning visitors but fewer new ones, or more mobile visitors but fewer desktop visitors than the control. In this case, segment the results to see where the treatment wins and where it loses, and use those findings to decide which changes would lift its conversion rate.
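A quick way to surface those segment-level differences is to break the raw test data down by variation and device (or visitor type). The sketch below assumes a hypothetical CSV export with visitor_id, variation, device, and converted columns and uses pandas; adjust the column names to match whatever your testing tool actually exports.

```python
import pandas as pd

# Hypothetical export from your testing tool; column names are assumptions.
df = pd.read_csv("ab_test_results.csv")

segments = (
    df.groupby(["variation", "device"])
      .agg(visitors=("visitor_id", "nunique"),
           conversions=("converted", "sum"))
)
segments["conversion_rate"] = segments["conversions"] / segments["visitors"]

# Segments where the treatment clearly trails or leads the control
# point to the next round of changes worth testing.
print(segments.sort_values("conversion_rate", ascending=False))
```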

As humans, we have preferences even when the options in front of us are of equal value. So it is normal to prefer your treatment to your control even when testing shows they convert at nearly the same rate.

You could roll out the treatment for reasons of preference, for example because it represents your brand better than the control. But that is never an option when the control has the better conversion rate.

  • When the treatment loses to the control

The first thing to check here is your hypothesis. Is it faulty? Perhaps you mixed up the qualitative and quantitative analysis when formulating it. If the hypothesis is built on sound qualitative and quantitative analysis, the next step is to review your data and perform segmentation.
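Part of that data review is confirming the test actually collected enough traffic to call a loss. A rough sample-size check like the sketch below, written in plain Python with the usual 95% confidence and 80% power constants, shows the kind of sanity check meant here; the conversion rates used are hypothetical.

```python
# Rough per-variation sample size needed to detect the observed gap
# at 95% confidence and 80% power (z values hard-coded for simplicity).
def required_visitors_per_variation(p_control: float, p_treatment: float,
                                    z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return int((z_alpha + z_beta) ** 2 * variance / (p_control - p_treatment) ** 2)

# Hypothetical rates: control converted at 3.1%, the losing treatment at 2.8%.
needed = required_visitors_per_variation(0.031, 0.028)
print(f"Roughly {needed:,} visitors per variation are needed before trusting this loss.")
```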

You can also uncover the problem by running surveys and studying case studies. If you cannot figure out why the treatment performed worse than the control, asking others is often the best option: your audience can be of great assistance, and studying other people's findings will expose you to more possibilities and solutions.

If, after all the reviews and modifications, the treatment still loses to the control, you can conclude that the hypothesis was wrong and look for better ways to optimize your website's conversion rate.

Conclusion

Finally, we have come to the end of the series. Make sure you perform all the tests and activities highlighted across the articles. Each one is as important as the A/B test itself, and together they will contribute to the success of your optimization campaign.