The Ultimate Guide to Optimizing Your User Experience Through A/B and Multivariate Testing

Did you know that, on average, statistically significant A/B tests see a 49% increase in conversion? A/B and multivariate testing are among the most effective ways to optimize your site or app’s user experience, leading to increased sales and user engagement.

In this ultimate guide to A/B and multivariate testing, we will walk you through what A/B and multivariate testing are and their benefits, explain how to test effectively, spark ideas on what to test, show you step by step how to set up tests, and cover best practices.

You have an idea, and you want to know if it will be successful. You want to be sure that your idea has a positive impact on the business. Here comes A/B and Multivariate testing to save the day!

What are A/B and Multivariate Testing?

A/B testing

A/B testing allows you to test variants of a user experience with actual users on your website or app to determine which is most effective.

The idea is to show variations of a user experience to your users and then measure the results to see which version performed better. It’s a way to quantitatively decide which experience your users prefer. What sets A/B testing apart from other types of analysis (for example, pre/post comparisons) is that it produces statistically sound results that separate out the impact of other variables.

Multivariate Testing

Multivariate testing allows you to test several variables together, so you can figure out which combination of variables leads to the best overall performance.

Your digital presence is made up of multiple elements that you can adjust at the same time to optimize conversion. For example, you might test a button’s color and its hover state together to find the best-performing combination.

Why you should test

Improve your digital return on investment

Improving your existing site is an efficient way to improve your bottom line. A/B and multivariate testing helps you squeeze every bit of value from your features.

Major B2B services company – One of our clients saw a 300% increase in quotes by changing their Call to Action (CTA) position / UX.

Obama’s reelection campaign – Obama’s campaign saw a 40% increase in sign-ups, leading to 2.8 million incremental e-mail addresses, from one of their tests.

Better understand your customers

Powerful A/B testing provides more information about your customers than you may realize. You are able to segment your customers on multiple levels to gain a deeper understanding of their behavior.

Confidence to implement the right user experience

A/B and multivariate testing empowers you to make confident choices knowing they are supported by sound data. Once the test is complete, you will be able to say with confidence which experience leads to better results.

Confidently predict the outcome of pushing your test idea to 100% of your users. The statistics built into the A/B testing application will show you when the result becomes statistically significant.

Settles disputes

A/B testing, by its very nature, is objective. It takes subjectivity and opinion out of the equation, providing clear data and eliminating the need for debate.

How do A/B and Multivariate Testing Work?

A/B and multivariate testing work by splitting traffic across multiple variants of the same user experience, meaning different users of your site see different versions of it. The testing tool then measures the impact of each version to determine which performs best.

To understand it more technically, here is the basic process.

  1. A user goes to your site. This causes your A/B testing tag to fire, which assigns the user to one of the variants that you have created.
  2. The user then interacts with the functionality you are trying to test.
  3. If the user completes the expected task, the A/B testing software records that. If they don’t complete the expected task, that is also recorded.
  4. You can then view the results in your A/B testing tool to choose the winning variant.
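
To make the mechanics concrete, here is a minimal sketch (in TypeScript) of what a testing tag does behind the scenes. All names here are illustrative; a real testing tool handles assignment, rendering, and recording for you.

    import { createHash } from "crypto";

    type Variant = "A" | "B";

    // Sticky 50/50 assignment: hashing the user ID means the same visitor
    // always sees the same variant on every visit.
    function assignVariant(userId: string): Variant {
      const h = createHash("sha256").update(userId).digest().readUInt32BE(0);
      return h / 0x100000000 < 0.5 ? "A" : "B";
    }

    // Steps 1 and 2: the tag fires on page load and the user sees their variant.
    function onPageView(userId: string): void {
      const variant = assignVariant(userId);
      console.log(`render variant ${variant} for ${userId}`);
    }

    // Step 3: record whether the expected task was completed, per variant.
    function onConversion(userId: string): void {
      console.log(`conversion recorded for variant ${assignVariant(userId)}`);
    }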

What types of things can we test?

Test your Calls to Action (CTAs) to improve conversion

When it makes sense to run this type of test – Call to action tests are an excellent and quick way to improve your conversion rate. As mentioned above, a client saw a 300% increase in quotes by changing the position / copy of the CTA. If you have not tried multiple variants of your CTAs, you should definitely consider this, as these tests are easy to run and can generate substantial revenue.

Examples – Consider these types of changes: colors, sizes, locations, buttons vs. links, the number of CTAs on the page, and primary vs. secondary CTAs.

Test different layouts

Changing the layout of a page can increase conversion. This consists of re-ordering and re-positioning content and modules within an existing page; it is not a full page redesign.

When it makes sense to run this type of test – One way to know this type of test is in order is to look at heat maps. If content lower on the page gets substantial clicks compared with items higher up, you should consider an A/B test that moves that content higher on the page.

Examples – Some example changes would be: moving sections / sub-sections from the bottom of a page to the top, or moving a module from the right side of the page to the left.

Test your promotions

Test your promotions across your digital experience to increase engagement / conversion. You will be optimizing for revenue / margin from the campaign.

When it makes sense to run this type of test – You are running a multitude of promotions that drive a substantial portion of your revenue.

Examples – You can test things like layout, size, design, colors, copy, and calls to action.

Test your content

You can test versions of your content to see which resonates best with your customers.

When it makes sense to run this type of test – You have analytics data suggesting that some of the content you’ve written performs better or worse than the rest, but you’ve never been able to put the competing versions on the same page at the same time. Testing lets you find out which performs best.

Examples – Test tone, content length, different titles, and images. We worked with a client to test various versions of their site navigation copy; in particular, we tested “Insights” against “Infocenter”.

What tests are the most successful?

Different types of tests have, on average, different degrees of success, with copy tests tending to have the highest success rates and combined design / copy tests the lowest. Consider this when prioritizing which tests to run first.

Step-by-step guide to setting up your first test

Decide on an A/B and multivariate testing vendor

Deciding on an A/B and multivariate testing vendor does not need to be a hassle. There are a multitude of vendors who specialize in testing, such as Adobe Target, Maxymiser, Convert, Optimizely, Google Optimize, VWO, and more.

As with any vendor evaluation, you should follow these steps: write business requirements, define a consideration set of vendors, send a request for proposal to the vendors, score the vendors, set up demos with the top few, update your scoring, choose a vendor, and sign on the new vendor.

Deploy the A/B testing code on each of your pages

Example code –
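
Your vendor supplies the actual snippet when you sign up. As a purely hypothetical illustration (the URL below is a placeholder, not a real vendor endpoint), the tag is typically a single synchronous script include placed just before the closing head tag:

    <head>
      ...
      <!-- Hypothetical A/B testing tag; your vendor supplies the real snippet.
           It loads synchronously so the variant is applied before the page
           renders, preventing a flicker of the original content. -->
      <script src="https://cdn.vendor.example.com/ab-testing.js"></script>
    </head>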

Best Practices

  • If you use a tag manager, be sure the tag is set to synchronous, as you want this tag to fire and give the user the right experience immediately.
  • In many cases there are plugins available to make this process easier.
  • Be sure to place the tag in the correct part of the page, usually right before the closing head tag.

Testing Process

The testing process is composed of these steps: define the test hypothesis, develop the variant and build the test, test and deploy, gather and present the results, and iterate further.

  1. Define the test hypothesis
  • Use user testing, analytics tool data, stakeholders, customers, etc., to determine what you should test. These sources may give you an idea of what you should change and what its impact will be.
  • Formally define your test subject and expected outcome.
  • Build a testable hypothesis. This is your guess about the effect a single change will have on your site.
    • For example, a test hypothesis might be: “Changing the color of the buy button from blue to red will cause a 10% increase in sales.”
  2. Develop the variant and build the test
  • Draft a test document based on all the items agreed to in step 1.
  • Create the variant. This requires you either to develop it yourself or to build it in the A/B testing tool’s GUI.
  • Assuming your site has enough traffic and you expect to see a big difference between the variants, you can even test 3, 4, 5, or more variants at once.
  • Before putting the test into production, you must thoroughly test it for accuracy to ensure that the A/B test will run smoothly and deliver clear results.
  3. Test and deploy the test
  • Ensure that no other changes or A/B tests go into production that may interfere with the test.
  • Test that the tracking in both your analytics tool and your A/B testing tool is working correctly.
  • Check the experience in a pre-production environment to be sure it behaves and looks as expected.
  • Push the experience to production (turn on the A/B test).
  4. Gather and present the results
  • Preliminary results will arrive almost instantaneously.
  • Once the results reach statistical significance, one variant will emerge as the better performer (see the sketch after this list for what the tool computes under the hood).
  • Report the results to stakeholders.
  • Assign the highest-performing variant as the default in the A/B testing tool.
  • Develop the winning variant into the site itself (and shut down the A/B test).
  • Document the results of the test (so you don’t run the same or a similar test again by mistake).
  5. Iterate
  • You have now answered the question and used the findings to improve your site’s performance.
  • However, there is always room to optimize further.
  • It is time to consider the next iteration and the next burning question you would like to answer.
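
As promised above, here is a minimal sketch of one common way testing tools judge statistical significance, a two-proportion z-test. Your vendor’s statistics engine may differ, and the numbers below are made up purely for illustration.

    // Minimal sketch of a two-proportion z-test, one common way A/B tools
    // judge significance; your vendor's statistics engine may differ.
    function zScore(convA: number, nA: number, convB: number, nB: number): number {
      const pA = convA / nA;                       // conversion rate of variant A
      const pB = convB / nB;                       // conversion rate of variant B
      const pooled = (convA + convB) / (nA + nB);  // pooled conversion rate
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
      return (pB - pA) / se;                       // |z| > 1.96 ≈ significant at 95%
    }

    // Illustrative numbers: 200/10,000 vs. 260/10,000 conversions.
    console.log(zScore(200, 10000, 260, 10000)); // ≈ 2.83, significant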

Implementation Best Practices

Manage impact on SEO

  • Don’t try to spoof search engines into not seeing your testing, and don’t have your tag automatically send search engine crawlers to a specific user experience. This kind of cloaking could cause a reduction in your search rankings.
  • If your test redirects from one URL to a new URL, be sure it is a 302 (temporary) redirect rather than a permanent 301, as shown in the sketch after this list.
  • Be sure to shut off the test eventually and code the winning experience into the site; it will improve site performance.
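
As an example of the redirect point above, here is a minimal sketch assuming a Node/Express server (the routes are hypothetical placeholders; the same principle applies to any server or CDN redirect):

    // Minimal sketch, assuming a Node/Express stack (npm install express).
    import express from "express";

    const app = express();

    app.get("/landing", (_req, res) => {
      // 302 = temporary: search engines keep the original URL indexed.
      // Avoid 301 (permanent) for tests, or engines may index the variant.
      res.redirect(302, "/landing-variant-b");
    });

    app.listen(3000);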

User Management

  • Ensure proper accountability by limiting the number of people / roles who can publish to the web, as these people have direct access to make UX changes to the site. This is important to be sure that only the right things are tested. Keep in mind that a test still means real users of your site will see the experience, so treat new tests with the same rigor as any feature going to production.

Change History

  • Be sure to maintain a change history so you can trace any issues back to the test that caused them.

Define the size of the test pool appropriately

  • If the risk is high that the test may not be successful, be careful when sizing the test pool, which dictates how many users will see your test. Maybe 50% is too high; try 5% until you start seeing preliminary results (see the sketch below).
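
As an illustration of capping exposure, here is a minimal sketch. All names are made up; in practice your testing tool exposes this as a simple traffic-allocation setting.

    import { createHash } from "crypto";

    // Deterministically maps a user to [0, 1) so the same user always gets
    // the same decision across visits.
    function bucket(userId: string, testName: string): number {
      const hash = createHash("sha256").update(`${testName}:${userId}`).digest();
      return hash.readUInt32BE(0) / 0x100000000;
    }

    function assignVariant(userId: string): "default" | "control" | "variant-b" {
      const b = bucket(userId, "cta-color-test");   // hypothetical test name
      if (b >= 0.05) return "default";              // 95% of users stay out of the test
      return b < 0.025 ? "control" : "variant-b";   // the 5% pool is split evenly
    }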

LET US PUT THE POWER OF INGENIOUS ANALYTICS TO WORK FOR YOU.