A/B testing is not an independent process. It is part of a multi-stage effort to optimize the website. Marketers do it as part of an ongoing Website Optimization Process in which they improve conversion rates by focusing on one or more page elements at a time.
In the conversion optimization process, A/B testing is just one, yet very important, step in the overall optimization effort.
Each step has its own significance in helping you design web pages that convert. In the absence of a data-backed approach, you can end up investing your precious time and resources in the wrong places, which can further decrease the web page’s performance or produce a minuscule effect.
Step 1: Ask Questions and Do a Background Check
If you want to find out what needs to be fixed, you need to begin by measuring the performance of various web pages and comparing it to the website’s average.
This step includes collecting information about the website’s performance to determine elements that need to be tested for optimization. You can gauge the efficacy of various aspects of the high-performing pages to build a working hypothesis.
Gathering information requires several different methods, such as surveying tools, session replays, heatmaps, and analytics data. You can use the tools at your disposal to find the necessary data. For example:
- You can use Google Analytics to find data about landing pages’ views, the bounce rate of the pages compared to the site’s average, page value, number of transactions, etc.
- Use survey tools like Qualaroo to collect user feedback, understand their preferences, uncover page issues, and gather actionable insights from the visitors.
- Use tools like SessionCam or Fullstory for heatmaps and session recordings to find the most interactive parts of your webpages. These show you how visitors navigate the page and which page areas get the most clicks.
Step 2: Create a Hypothesis
Once you have the data, you are ready to create a data-backed hypothesis and begin designing the variations.
For example, let’s say you embed an exit-intent survey on the web page to determine why people are not clicking on the CTA. You can ask different questions to collect in-depth feedback, such as:
- Does this page contain the information you were looking for?
- What's preventing you from starting a trial?
- Is there anything preventing you from completing your purchase?
- Do you plan to start a trial?
- Is our pricing clear?
Next, you combine the feedback with the session data and find out that most visitors do not scroll down to the section where you have placed your CTA.
From this information, you may observe that the CTA button’s position needs to be higher on the page to prompt more customer engagement. Alternatively, you may want to add more information on the page to clarify the features and benefits to the visitors. Or, you may want to test two different page headlines to see which drives more traffic.
Creating a hypothesis allows you to identify the best-suited element for A/B testing based on the available data and set the goals for the test. Interestingly, a good customer feedback tool can help immensely in this process.
Step 3: Design the Variations
The next step is to create variation(s) of the ‘control’ page. The ‘control’ is the original page design, and a variation is a slightly altered version of the control, with the element under test changed.
For example, you may want to find out which CTA design on your landing page results in more clicks. To determine this, you can set up an A/B test with two different variations of the button. In this hypothetical example, variation A might carry the label ‘Learn More’ while variation B might say ‘See How.’
Step 4: Run the Test
Once the variations are ready, you need to test them at the same time. Distribute page visitors randomly among the variations. You can also set the percentage of visitors each variation receives; in the simplest case, 50% of visitors see variation A and 50% see variation B.
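The random split described above can be sketched in a few lines of Python. This is a minimal illustration, not a production assignment system; hashing a visitor ID is one common way to keep each visitor's assignment stable across page loads, and the `assign_variation` helper here is a hypothetical name:

```python
import hashlib

def assign_variation(visitor_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into variation A or B.

    Hashing the visitor ID keeps the assignment stable, so a
    returning visitor always sees the same variation.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 10000  # uniform value in [0, 1)
    return "A" if bucket < split else "B"

# With a 50/50 split, a large pool of visitors divides roughly evenly.
counts = {"A": 0, "B": 0}
for i in range(10000):
    counts[assign_variation(f"visitor-{i}")] += 1
print(counts)
```

In practice, your A/B testing software handles this bucketing for you; the point is that assignment is random with respect to the visitors but fixed per visitor.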
Using A/B testing software, you can track which button variation gets more clicks over time. Suppose variation B receives twice as many clicks as variation A. Variation B is thus called the “winning variation” or “winner.” Assuming that the results are statistically significant, you have successfully doubled the clicks by using the label “See How” on the CTA button.
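Whether a difference like this is statistically significant can be checked with a standard two-proportion z-test. A minimal sketch, using made-up click counts in which variation B gets twice as many clicks as variation A:

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 100 clicks out of 5,000 visitors for A,
# 200 clicks out of 5,000 visitors for B.
z, p = two_proportion_z_test(clicks_a=100, n_a=5000, clicks_b=200, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so B is a significant winner
```

Most A/B testing tools run a test like this (or a Bayesian equivalent) behind the scenes before declaring a winner.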
Step 5: Learn and Iterate
Now that you have the result of the A/B test, use the data to validate your original hypothesis.
- If the results are conclusive, like the example in the previous point, you can set up another test to see if you can improve upon the “See How” button variation.
- If the results are not statistically significant, you can analyze why the test failed and make necessary changes to run a successive test, like increasing the sample size.
- If you are happy with the results, you can move on to the next element or another page to build an A/B testing hypothesis and run a new test.
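When a test comes back inconclusive, a rough power calculation shows how many visitors each variation needs before rerunning it. A sketch using the standard normal approximation for comparing two proportions; the baseline rate and minimum lift below are made-up numbers, and `sample_size_per_variation` is a hypothetical helper name:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(p_base, min_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a relative lift.

    p_base:   baseline conversion rate of the control
    min_lift: smallest relative improvement worth detecting (e.g. 0.20 = +20%)
    """
    norm = NormalDist()
    z_alpha = norm.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = norm.inv_cdf(power)           # desired statistical power
    p_var = p_base * (1 + min_lift)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2
    return ceil(n)

# E.g. a 2% baseline CTA click rate and a hoped-for 20% relative lift:
print(sample_size_per_variation(0.02, 0.20))
```

Note how the required sample grows quickly as the lift you want to detect shrinks, which is why small-effect tests on low-traffic pages often fail to reach significance.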
This is the essence of Conversion Rate Optimization and the role of A/B testing in the entire optimization process.