We are used to thinking of A/B split testing as a way to improve UX and increase conversions. But the street runs both ways: how you design your tests should itself be based on UX research – not on guesswork and intuition. This will save you a lot of time and money and make your tests far more reliably effective.
A quick reminder: what’s split testing?
If you’re not familiar with A/B split testing, here’s a very basic explanation. In each such test, you create two versions of a certain element on your site. It can mean two different colors for the CTA (Call-to-Action) button, the layout of a section, the copy, a menu’s hierarchy – just about anything, really. A randomly selected half of your visitors are shown one version, while the other half see the second version.
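To make this concrete, here is a minimal sketch (in TypeScript) of how the 50/50 split might work. The hashing helper and visitor ID are invented for illustration – real A/B testing tools handle the assignment for you:

```typescript
// Hypothetical sketch: deterministically assign a visitor to variant A or B.
// Hashing a stable visitor ID keeps the assignment the same across page loads.
function assignVariant(visitorId: string): "A" | "B" {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // simple 32-bit string hash
  }
  return Math.abs(hash) % 2 === 0 ? "A" : "B";
}

// Example: pick the CTA button color for this visitor.
const variant = assignVariant("visitor-42");
const ctaColor = variant === "A" ? "green" : "orange";
console.log(`Variant ${variant}, CTA color: ${ctaColor}`);
```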
After letting this test run for about two weeks, you compare the results. Was there a significant difference in conversions between the two? If yes, then you keep the more successful variant. You can learn more about split testing here.
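If you want to sanity-check "significant difference" yourself rather than trust a dashboard blindly, a common approach is the two-proportion z-test. Here's a rough sketch with made-up numbers:

```typescript
// Two-proportion z-test: is the difference in conversion rates significant?
function twoProportionZ(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB); // pooled conversion rate
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB)); // standard error
  return (pB - pA) / se; // z-score of the observed difference
}

// Illustrative numbers: 120/5000 conversions for A, 155/5000 for B.
const z = twoProportionZ(120, 5000, 155, 5000);
// |z| > 1.96 roughly corresponds to p < 0.05 (95% confidence).
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```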
Why use UX testing to plan A/B tests?
A/B tests are an extremely powerful way to optimize conversions – as long as they are carefully planned. But with literally hundreds of things you could change, where should you start? There are three main ways to design such experiments:
1. Educated guesses. You can brainstorm alone or with your team and come up with a list of elements to alter. Or perhaps your execs will have an opinion of their own. The problem with this approach is that, as an insider, you can't be objective. You don't know how visitors will see your site or where they will stumble. Some of your guesses may be correct, but others won't be.
2. Scoring models. There are also some "scientific" approaches to A/B testing prioritization – such as PXL (by ConversionXL), PIE (Potential, Importance, Ease), and ICE (Impact, Confidence, Ease). You rank pages based on their importance, on how easy each is to A/B test, or on the test's expected impact (see the sketch after this list for what such a calculation looks like). This is supposed to help you set priorities. There is a big issue here, though: it's still guesswork. It's more systematic, sure, but you still base your choices on your own idea of what's important. And that idea can be wrong.
3. UX testing. This is the approach I would recommend. Don't try to guess what your visitors have trouble with – go and find out. There are several approaches: mining your site's data, focus groups, heat maps, online UX testing services, etc. They all carry some cost, but believe me, you'll save money in the end if you invest in UX research.
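To illustrate option 2: here's what a PIE-style calculation might look like in code. The pages, ratings, and scoring (a simple average of the three ratings) are all illustrative – and, as noted above, the ratings themselves are still guesses:

```typescript
// Hypothetical sketch: rank candidate pages with a PIE-style score
// (average of Potential, Importance, and Ease, each rated 1-10).
interface Candidate {
  page: string;
  potential: number;  // how much room for improvement?
  importance: number; // how valuable is the traffic to this page?
  ease: number;       // how easy is it to run the test?
}

const candidates: Candidate[] = [
  { page: "/checkout", potential: 8, importance: 9, ease: 4 },
  { page: "/pricing", potential: 6, importance: 7, ease: 8 },
  { page: "/about", potential: 3, importance: 2, ease: 9 },
];

const ranked = candidates
  .map((c) => ({ ...c, score: (c.potential + c.importance + c.ease) / 3 }))
  .sort((a, b) => b.score - a.score);

ranked.forEach((c) => console.log(`${c.page}: ${c.score.toFixed(1)}`));
```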
Setting your priorities: which page should you test?
It can be tempting to go for something obvious and easy, such as the CTA button or the text above the fold. But your real problem may lie elsewhere – even on a page you consider minor. Perhaps users look for something on your site and simply don't find it. Only proper usability testing can help you uncover the root cause.
Here is an example of the kind of unexpected issue you can discover:
⦁ There is no FAQ page – you thought your site was self-explanatory, but users need more info. Besides a proper FAQ, you could add tooltips to elements across your site to provide explanations on the fly.
But I can’t afford UX testing! Or can I?
As we said, the best way to identify the stumbling blocks is to involve human testers rather than automated tools. Don’t think that UX testing with real participants is only for companies with big budgets, though. There are affordable ways to do it, too:
⦁ Create a test group from your friends and relatives. You don't need many people to run a usability study: Jakob Nielsen's classic research shows that even five users are enough to detect most key problems (see the quick calculation after this list).
⦁ Use specialized online services like TryMyUI, Usability Hub, or UserTesting (some even provide live videos of target visitors interacting with your site).
⦁ Try guerrilla UX testing: approach people in public places, like cafes, and ask them to test your site. This is easier than it sounds: as long as you are open and friendly, most people will agree to help. Don't forget to bring a laptop, of course.
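Why are five users enough? Nielsen's problem-discovery model says that if a single user has probability L of hitting a given problem (about 31% in his data), then n users uncover a 1 − (1 − L)^n share of the problems. A quick sketch of the arithmetic:

```typescript
// Nielsen's problem-discovery formula: share of problems found by n users,
// assuming each user independently hits a given problem with probability L (~0.31).
function problemsFound(n: number, L = 0.31): number {
  return 1 - Math.pow(1 - L, n);
}

for (const n of [1, 3, 5, 10]) {
  console.log(`${n} users: ~${(problemsFound(n) * 100).toFixed(0)}% of problems found`);
}
// Five users already uncover roughly 85% – diminishing returns after that.
```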
Formulating a hypothesis with more UX research
Let’s say that you’ve identified several priority pages to run A/B tests on. The next step is to collect more data and decide what to change. Here are several budget-friendly and efficient approaches (see here for more):
⦁ Heat maps. They track how users move their mouse cursor around the page, so you can see what attracts their interest and where they pause or struggle. There are even touch heat maps for mobile apps nowadays. (A minimal sketch of the kind of data these tools collect follows this list.)
⦁ Bug/crash reports. Users tend to bounce when a site crashes, so you want to know at which points of the interaction this happens. Services like BugSee, Appsee, and Buddybuild include this feature.
⦁ Statistics and feedback. Many tools allow you to collect data on visitor sessions, communicate directly with users, and get their feedback. You can try Userreport, Usersnap, or Usabilla, for example.
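To demystify heat maps a little, here's a bare-bones sketch of the kind of data such tools collect: sampled cursor positions batched to a server for aggregation. The /track endpoint is hypothetical, and the services above do all of this (plus the rendering) for you:

```typescript
// Hypothetical sketch: collect raw heat-map data in the browser.
// Each sample records where the cursor was and when.
const samples: { x: number; y: number; t: number }[] = [];

document.addEventListener("mousemove", (e: MouseEvent) => {
  samples.push({ x: e.pageX, y: e.pageY, t: Date.now() });
});

// Ship a batch every 5 seconds; aggregated across many visitors,
// these points become the "hot" and "cold" zones you see on a heat map.
setInterval(() => {
  if (samples.length === 0) return;
  navigator.sendBeacon("/track", JSON.stringify(samples.splice(0)));
}, 5000);
```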