5 tips before running your A/B tests

Muse Chen
4 min read · Apr 21, 2021

As a product designer, how many times have you found yourself juggling a series of A/B tests and finding it quite... daunting? It may not be the A/B testing itself that makes the bad impression. In my experience, it's often poor A/B test management and planning that makes people lose confidence in such an effective tool.

I mean, A/B testing is awesome. It captures users' real reactions in the completely natural flow of a product, and it backs decisions with solid data. With a significant amount of user traffic, a company can reliably read a design's real performance within weeks and thus avoid a loss!
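To give a rough sense of what "enough traffic" and "within weeks" mean, here is a minimal back-of-the-envelope sketch of the standard two-proportion sample-size estimate. All the numbers (baseline rate, hoped-for lift, daily traffic) are made-up assumptions for illustration, not figures from any real test:

```python
# Minimal sketch: estimate users needed PER VARIANT for a two-proportion
# A/B test, using only the Python standard library. All numbers below are
# invented assumptions for illustration.
from statistics import NormalDist
from math import sqrt, ceil

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.8):
    """Approximate users needed in each variant to detect the given lift."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_control - p_variant) ** 2)

# Assumed: a 4% baseline subscription rate, hoping to detect a lift to 5%.
n = sample_size_per_variant(0.04, 0.05)
daily_traffic_per_variant = 2_000  # assumed traffic per variant per day
print(f"~{n} users per variant, roughly {n / daily_traffic_per_variant:.0f} days")
```

The takeaway: the smaller the effect you want to detect, the more traffic (and time) the test needs, which is why planning matters before the test starts.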

Time and patience alone don't lead to successful A/B testing, though. Here are 5 tips I've reflected on after plenty of A/B testing experience that might inspire you as well:

1. Before an A/B test, write down the question you are testing.

Make sure the assumption you are testing is agreed upon among the team and noted clearly somewhere all your team members can easily find it, before and after the testing journey.

For example, suppose you are testing how adding a trust element changes users' perception of a company. Write it down, so your team can have a meaningful discussion about how the design variations can effectively test that question, and avoid irrelevant threads of discussion before and after the test.

In this example, product designers can guide stakeholders to focus on the questions: Which trust elements work? And does a user gain trust in the company? They can steer away from discussing why adding a new trust element didn't increase the overall subscription rate, as that was never part of the question the team aligned on in the first place.

2. Avoid double-barreled questions and questions with multiple factors involved.

I mean, yes, you can certainly test your current design against an entirely different design with different features, layout, and flow. However, be cautious: you may learn little at the end of such a test beyond a bare outcome statement, a win or a loss. Making sure only one factor changes between the control and the variant will help your team focus on the correlation being analyzed.

Take the same example: how would adding a trust element change users' perception of a company, and thus the subscription rate? This involves two assumptions:

  1. Adding a trust element creates a positive perception of the company.
  2. A positive perception of the company drives the action to subscribe.

Are you testing the trust elements, or how a certain level of trust triggers a certain level of action? A user might build trust and be willing to recommend the company to a friend, but not pay for a monthly subscription. With a double-barreled question, you can't capture the correlation, and thus can't correctly reapply (or avoid) your learnings elsewhere. Splitting the question into single-factor tests, as in the sketch below, keeps each result interpretable.
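To make the single-factor idea concrete, here is a minimal sketch of evaluating one factor against one metric at a time, using a standard two-proportion z-test. The function name and all the counts are invented assumptions, not data from a real test:

```python
# Minimal sketch: a two-proportion z-test that evaluates ONE factor against
# ONE metric at a time. All names and numbers below are illustrative.
from statistics import NormalDist
from math import sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (lift, two-sided p-value) for variant B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Assumption 1 gets its own metric: e.g., "yes" answers to a trust survey.
lift, p = two_proportion_z_test(conv_a=480, n_a=6000, conv_b=560, n_b=6000)
print(f"trust lift: {lift:+.3%}, p-value: {p:.4f}")

# Assumption 2 would get its own separate test on its own metric
# (subscriptions), instead of reading both assumptions out of one number.
```

Running two clean tests costs more time than one bundled test, but each result tells you exactly which assumption held.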

3. Be prepared for learning, not a win.

When a team marches toward learning, they keep progressing because they continuously learn something. When a team expects a win, they are shocked and can easily stall when the result turns out negative (unexpectedly).

Being mentally prepared to gain learnings at the end of the test encourages the team to stay open to opportunities for iteration, and simply to have less fear of a loss. After all, a design being A/B tested should be regarded as an evaluation of users' reactions in reality, rather than a verdict that the design is "bad" or "inconsiderate".

4. Plan ahead.

If this doesn’t work out, then what’s next? If works out, what’s next? AB testing would benefit from being regards as one small step before a long iterating journey.

Planning ahead helps a product team keep moving. If the result is positive, you can plan how it applies to a greater system and experience, or continue to devote energy to maturing the feature. If it is a loss, plan ahead for an alternative variation and a different assumption to test further.

5. Test apples to apples.

One common mistake when designing an A/B test trial is not actually putting the original version next to the new variation. Having the two versions side by side helps your team see clearly what is really being compared.

For example, adding a trust element sounds simple enough, right? It seems as if the trust element is the only factor introduced in the testing trial and added to your design.

However, what's unseen is the introduction of other factors caused by that addition. For example, adding a trust element may replace or de-prioritize an existing element. That adds another layer to the comparison: a de-prioritized element plus an added trust element, against no added trust element. In this case, is it the de-prioritized element or the trust element that causes the result? Control carefully what is being compared by placing your current design next to the new one, because a bundled change hides the true cause, as the sketch below illustrates.
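As a toy illustration of that confound, here is a small simulation. The variant bundles two changes, so its measured lift is a mix of both effects. Every effect size here is an invented assumption; in a real test these underlying effects are exactly the things you cannot see:

```python
# Toy simulation of a confounded A/B test: the variant bundles TWO changes
# (trust element added AND an existing element de-prioritized), so the
# observed lift mixes both effects. All effect sizes are invented.
import random

random.seed(42)
BASE_RATE = 0.050             # assumed control conversion rate
TRUST_EFFECT = +0.010         # assumed (unknowable in practice) trust lift
DEPRIORITIZE_EFFECT = -0.006  # assumed (unknowable in practice) demotion cost

def simulate(rate, n=50_000):
    """Observed conversion rate for n simulated users at a given true rate."""
    return sum(random.random() < rate for _ in range(n)) / n

control = simulate(BASE_RATE)
variant = simulate(BASE_RATE + TRUST_EFFECT + DEPRIORITIZE_EFFECT)

# The test only ever sees the NET difference; the two underlying effects
# are inseparable without a variant that changes one factor at a time.
print(f"control ≈ {control:.3%}, variant ≈ {variant:.3%}, "
      f"observed lift ≈ {variant - control:+.3%}")
```

The observed lift is a single number, and nothing in it says how much came from the trust element versus the demotion, which is exactly why the comparison needs to stay apples to apples.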

An A/B test trial with no learnings causes team frustration, not to mention a "failed" result. A product designer can guide the team to be careful about the test design, and lead the discussion to be productive around the "opportunity" rather than the "result". Although A/B testing does take a long time, we can still shorten the process by managing and planning it well.
