Running A/B tests for UX designers

Belhassen Chelbi
flowlens.design
4 min read · Dec 6, 2020


Most UX designers know almost nothing about conversion optimization. We have design and applied psychology skills, which are valuable. But we lack copywriting, data analysis, and testing skills.

The journey through the CXL Institute conversion optimization minidegree is almost over. I've nourished my copywriting and data analysis skills, and now my testing skills.

The why behind A/B testing

When I started out, I noticed that UX designers, myself included, acted like the chosen people. Everything we said about the audience had to be true (it was in our heads), and everything we said about how a UI should be was also true and had to be worshipped. I mean, User and Experience are in our job title.

However, if you don’t have data, your opinion or best practice is just another hypothesis.

And there comes A/B testing. Many people, from conversion optimizers to UX designers, will have ideas about how to fix the leak between the delivery info and the payment. UX designers will come armed with their golden Jakob Nielsen rules and assume they're true. Marketers will probably focus on persuasive copywriting. But if you practice legit conversion optimization, you not only need to consider all these ideas, you also can't assume in advance that any one of them will win.

You'd probably root for one of them, especially if it's yours, but what makes a winning idea an actual winner is an A/B test.

What’s an A/B test

The internet is full of definitions. But it's simply making two or more variations of the same UI and publishing them in a way that gives every variation the same amount of traffic. Which one wins? Well, you focus on one KPI, generally the conversion rate, and of course the variation with the higher conversion rate wins.

Sometimes it's not that simple, so you need to follow each experiment to the end of the funnel. After all, you're in the business of making your clients money, so if variation B increases conversion on the landing page but the control (the original UI) makes the same amount of money or more at the end of the funnel, you can't declare variation B the winner.
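To make "winner" concrete: one common way to check whether variation B's lift is real rather than noise is a two-proportion z-test on the end-of-funnel numbers. Here's a minimal sketch in Python; the purchase counts are made up for illustration:

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical end-of-funnel numbers: purchases, not just landing-page clicks.
z, p = z_test_two_proportions(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value is below your significance threshold (0.05 is the usual default), the difference is unlikely to be random noise; otherwise, you don't have a winner yet.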

How to conduct an A/B test

The goal of an A/B test is to figure out which is the best solution to the problem at hand.

So the first thing to do before conducting an A/B test is to figure out the problem. And that's done by conducting conversion research.

Conversion research consists of these steps:

  • Technical analysis of the website, focused on conversion
  • Site Walkthrough
  • Heuristic Analysis
  • Usability evaluation
  • Web analytics

So here’s a scenario:

You open analytics and find out that the bounce rate on your landing page is too high. You open your landing page, conduct a technical analysis, and figure out that the loading time is too long. You fix that, and the bounce rate decreases significantly, but your conversion rate doesn't increase. Hmm.

You do a tour by yourself (heuristic analysis) and figure out that the messaging on your landing page isn't clear: vague words, and a value proposition that isn't specific. You run user interviews to check whether your hypothesis about the problem is true, and it turns out that yes, users don't understand what the page is about above the fold.

Problem identified and confirmed.

What's next? Coming up with a solution, right?

Except your solution has to work, and you don't actually know whether it will. So what do you do? You test it.

A/B testing. A is the current landing page, or the control. B is your solution, or the variation. Each interface gets 50% of the traffic, and after a set period of time (which is calculated in advance) you see which one is the winner.
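The 50% split itself is usually done deterministically, so a returning visitor always sees the same variation. A minimal sketch, assuming you have a string user ID (the experiment name and IDs here are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "landing-v2") -> str:
    """Deterministically bucket a user into A (control) or B (variation).

    Hashing user_id together with the experiment name means the same
    user always gets the same variant, and each experiment gets its
    own fresh 50/50 split.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # a number from 0 to 99
    return "A" if bucket < 50 else "B"

print(assign_variant("user-42"))  # same answer on every visit
```

Dedicated testing tools (Google Optimize, VWO, Optimizely and the like) do this bucketing for you; the sketch just shows why the split stays stable across visits.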

Let's go back to hypotheses. To simplify things, I assumed you had one problem and one hypothesis. But in reality, you'd have a bunch of them.

So what you need to do before coming up with solutions is to form hypotheses for your problems and prioritize them. There are many conversion frameworks, like ResearchXL by CXL or the LIFT Model by WiderFunnel, and each has a different way of doing it. A simple way is to give each hypothesis a score based on potential, impact, and ease. You can put them in a table, sort by the final score, and start with the first one.
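That scoring table can be a few lines of code. A sketch with made-up hypotheses, rating each from 1 to 10 on potential, impact, and ease, and averaging:

```python
# Hypothetical hypotheses, each scored 1-10 on potential, impact, and ease.
hypotheses = [
    {"name": "Rewrite the vague headline",        "potential": 8, "impact": 7, "ease": 9},
    {"name": "Add trust badges to checkout",      "potential": 6, "impact": 8, "ease": 7},
    {"name": "Redesign the navigation",           "potential": 7, "impact": 6, "ease": 3},
]

for h in hypotheses:
    h["score"] = (h["potential"] + h["impact"] + h["ease"]) / 3  # average of the three

# Sort descending: test the highest-scoring hypothesis first.
for h in sorted(hypotheses, key=lambda h: h["score"], reverse=True):
    print(f'{h["score"]:.1f}  {h["name"]}')
```

A spreadsheet does the job just as well; the point is that the ordering comes from the scores, not from whoever argues loudest.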

Now you have your hypotheses prioritized. Start sketching solutions. Start with wireframes, or more precisely just pencil and paper, because it's not only easy to use but also keeps you focused on the solution, not on the tool you're creating it with.

When to A/B test

The point I want to make here is more about when not to A/B test, and that's when you don't have enough traffic.

If you have a 4% baseline conversion rate, two variations (including the control), and you want to detect a 10% lift, but you only have 100 visitors per day, you will need around 650 days (almost two years) to run the test.
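Figures like that come from a standard sample-size formula for comparing two proportions. Here's a rough sketch; with 95% significance and 80% power it lands in the same ballpark (the exact day count shifts with the settings you assume, which is why calculators disagree slightly):

```python
from math import ceil
from statistics import NormalDist

def days_to_run(baseline, rel_lift, daily_visitors, variants=2,
                alpha=0.05, power=0.80):
    """Rough test duration from the standard two-proportion sample-size formula."""
    p1 = baseline
    p2 = baseline * (1 + rel_lift)                  # e.g. 4% -> 4.4% for a 10% lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    n_per_variant = ((z_alpha + z_beta) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2))
                     / (p2 - p1) ** 2)
    total_visitors = n_per_variant * variants
    return ceil(total_visitors / daily_visitors)

# The scenario above: 4% baseline, 10% lift, 100 visitors/day, 2 variants.
print(days_to_run(0.04, 0.10, 100), "days")
```

Plug in 1,000 visitors a day instead and the same test fits in under three months, which is exactly why traffic is the gatekeeper here.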

Not only is this not feasible, but the data won't be useful either.

So: no A/B testing for low-traffic websites. You can still do conversion optimization. A/B testing is a pillar of CRO, but it's not the whole process. For low-traffic websites, make the changes and deploy them.

Why to A/B test

The obvious reason is to pick a winner and figure out which solution worked. But there's something more valuable as well: updating your customer theory.

A customer theory is what you currently know about your customers. For example: metalheads who love to wear black and enjoy loud music. Except that after doing some research, you suspect they don't necessarily wear only black. So you run an A/B test on a product page:

  • A: black-only T-shirts
  • B: T-shirts in multiple colors

B beats A, so you update your customer theory: they actually like other colors too.

This is probably a silly example, hah. But you get the idea. Peep Laja has a real one in the A/B testing course at CXL.



I design high-quality websites and apps that convert.