
Top 20 A/B Testing Mistakes People Make but You Won’t in 2024

A website visitor leaving mid-journey is like a customer walking into your store, shuffling a few things around, making eye contact with you, and then walking out just as you’re about to greet them.

It’s a little sad, isn’t it?

But it gets even more heartbreaking if you have a website and see a big chunk of your website visitors bounce off to your competitors.

A/B testing is a remarkable marketing strategy for reducing bounce rates, and one you won’t have to take with a grain of salt. It offers phenomenal results, but only if implemented correctly.

So, if you don’t want your website visitors to leave without completing a goal, check out these A/B testing mistakes you should avoid while you are at it.

Let’s discuss these common A/B testing mistakes, which send your resources down the drain. And if you have already made a few, it won’t hurt to look at the solutions you can employ to avoid repeating them in the future.

Instead of simply presenting these split-testing mistakes as a flat list, we went a step further and categorized them based on when they occur:

Before, during, and after A/B testing.

Let’s get this show on the road!

  1. Basing Your Test on an Invalid A/B Testing Hypothesis
  2. Conducting Tests On A Development Site Instead of a Live One
  3. Testing the Wrong Page
  4. Copying Split Testing Case Studies Left and Right
  5. Testing Way Too Early
  6. Making It All About Conversions
  7. Split Testing Too Many Elements at Once
  8. Creating Several Variations
  9. Showing Different Variations to Different Audiences
  10. You Got the Timing Wrong
  11. Testing with Unsuitable Traffic
  12. Altering Parameters in the Middle of Split Testing
  13. Calling It a Day Too Early on Your Tests
  14. Ignoring Periodic Radical Testing for Incremental Changes
  15. Running Multiple Tests Simultaneously
  16. Using an Incompetent A/B Testing Tool
  17. Not Measuring Results Carefully
  18. Not Understanding Type I and Type II Errors
  19. Not Considering Small Wins
  20. Testing Insignificant Elements

Mistakes Made Before A/B Testing

1. Basing Your Test on an Invalid A/B Testing Hypothesis

Hypothesis building plays a pivotal role in determining how well your A/B testing will perform and whether the results will be viable. 

In simple words, an A/B test hypothesis is an informed assumption about why you are getting specific results on your website, for example, a high bounce rate on your product page, high traffic with low conversions, etc.

If your hypothesis is invalid, so will be the results of your A/B tests, because the changes you test won’t address the real problem.

How to avoid this A/B testing mistake

Create a valid hypothesis. 

But how? By following these steps, illustrated with A/B testing hypothesis examples:

Step 1: Understand the problem – why are customers not converting? Is there anything missing on your website that is crucial for customers? You can get answers to these questions in three ways: 

  • Ask the customers directly: For this, you can use survey feedback tools like Qualaroo that help you create pop-up surveys which collect in-context feedback without disturbing the visitors’ experience.
  • Use analytics: Explore Google Analytics to see how many people bounce from your pages, whether they click your CTAs, how long they stay before leaving, and more.
  • Analyze user behavior: Choose tools that offer heatmaps and session recordings. Qualaroo offers a SessionCam integration that covers both.

Step 2: Once you know what’s affecting the customer behavior, for example, inaccurate CTA, low conversion, etc., you can come up with corrective changes you think will work. These changes are the elements you’ll tweak on your website or mobile app while A/B testing.

Step 3: The last step is analyzing which changes will bring the results you need. For example, will you have more conversions if you change the CTA? 

In this step, you also need to identify the measures to evaluate the success of your testing process. 

For example, you hypothesized that adding more information on your product page and updating the CTA text will keep visitors on the page longer and reduce its bounce rate.

So, you A/B tested this hypothesis by adding relevant product information and updating the CTA button.

If the variation reduces the bounce rate on that page by, say, 15% within the next month, your hypothesis worked.
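To make that success criterion concrete, here is a minimal sketch, with purely hypothetical numbers, of how you could check whether the observed bounce-rate change clears the 15% target:

```python
# A minimal sketch of turning the hypothesis above into a measurable check.
# All metric values here are hypothetical placeholders, not real data.

def relative_reduction(baseline: float, variant: float) -> float:
    """Relative reduction of a metric such as bounce rate."""
    return (baseline - variant) / baseline

baseline_bounce_rate = 0.62  # bounce rate before the change (hypothetical)
variant_bounce_rate = 0.51   # bounce rate on the updated page (hypothetical)
target_reduction = 0.15      # the 15% reduction set in the hypothesis

reduction = relative_reduction(baseline_bounce_rate, variant_bounce_rate)
print(f"Bounce rate reduced by {reduction:.1%}")
print("Target met" if reduction >= target_reduction else "Target not met")
```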

CASE STUDY – HOOTSUITE

Hootsuite, a social media management platform, enables its users to access and manage their social media accounts in one place with its tailored solutions.

The company’s landing page was not user optimized and had a high bounce rate. They launched surveys on their website to understand the reasons behind the visitor behavior. 

They found that the page did not have the information users were looking for. With a viable hypothesis in hand, the team could run website A/B split testing to find out which information visitors needed most.

They revamped the page to include better product information like screenshots, pricing structure, detailed features, and testimonials to help visitors make better decisions.

On running the A/B test, the new landing page saw an increase of 16% in the conversion rate compared to the original page.

2. Conducting Tests On A Development Site Instead of a Live One

Okay, this A/B testing mistake might come as a shock to you, but sometimes, people actually test the wrong website. 

Not an entirely different website, but one that’s still in development: the developers forget to switch to the live website and continue testing on the staging site.

Not that it’s a bad idea to test marketing campaigns on a development site. The only thing is that you won’t get viable results since the only people visiting the website are developers and not your target audience.

3. Testing the Wrong Page

It may sound obvious, but this A/B testing mistake occurs more often than you’d think. Which page you should A/B test depends on your objective, and that’s where things go south.

For example, say visitors land on your product page, browse for a few minutes and then opt for a demo. Once they land on the demo page, they leave the website. 

In this case, you will see a high bounce rate on your demo page and you will end up A/B testing it to understand the reason. But in reality, the product page is failing to convince the visitors to convert. 

So, the best course of action would be to understand the nuances of a buyer’s journey and A/B test your product page to boost lead conversion.

How to avoid this A/B testing mistake

Generally, businesses A/B test their best website pages, i.e., pages with the most traffic, to get the best results.

If you aim to improve conversions, you should A/B test the pages recommended by HubSpot, like the About page, Home page, blog, and Contact Us page.

4. Copying Split Testing Case Studies Left and Right

You shouldn’t copy A/B testing strategies from other case studies. This is one of the very common A/B testing mistakes people make. Your business is unique, so copying others won’t bring the best results from your efforts.

How to avoid this A/B testing mistake

Analyze case studies for inspiration and ideas rather than blueprints. Figure out which A/B testing strategies others employed and why, then adapt them into a testing strategy tailored to YOUR business.

5. Testing Way Too Early 

It’s a thing, really. There is a right time to perform all tests, which goes for A/B testing too. Don’t just A/B test for its sake; you need to have enough data to compare your testing results. 

For example, you created a landing page for your new marketing campaign. No matter how excited you are to make the landing page more interactive and attractive to boost CTA clicks, don’t start preparing for the testing just yet. 

How to avoid this A/B testing problem

You need to wait for the right time until you have reliable data to draw a valid hypothesis and compare the results.

6. Making It All About Conversions

You need to realize one thing: Conversion isn’t the main focus of A/B testing. So, if you are solely focusing on it, stop right now. 

Here is why: You might see an increase in conversions, but it may affect other areas of your business in the long term. 

Let’s understand this A/B testing mistake better with a couple of examples.

Example 1: If you add a new copy to your website, you may get a higher A/B test conversion rate, but if the converted leads are of low quality, the higher conversion rate will not benefit the business. 

So, although you see more conversions now, it may not be as beneficial for your business in the longer run.

Example 2: Say you changed your annual plans to monthly plans during A/B testing and instantly see a surge in conversions. 

But after a while, you’ll find you are losing money since the customers you are attracting are low-paying customers who tend to quit after a few uses. Whereas with annual plans, customers are more likely to stick around.


 💡 FUN FACT
A study conducted by Profitwell on 941 SaaS companies concluded that a higher % of annual contracts and plans led to a lower churn rate.


Mistakes Made During A/B Testing

7. Split Testing Too Many Elements at Once

Sometimes, people think it’s wise to test variations of more than one element while A/B testing so they can save time and resources. 

Let us just put it out there: IT’S NOT.

We’ll tell you why that is.

If you A/B test multiple elements at once, you’ll have to create several variations (which is an error we will discuss right after). But that’s not the worst part. 

You won’t be able to identify which element drove your results. Which variation of which element suddenly increased the conversion rate? It defeats the whole purpose of conducting A/B split testing, and you’ll have to start from scratch.

How to solve this A/B testing problem

The way to bypass this A/B testing mistake? Multivariate testing.

You can modify multiple variables during multivariate testing and also track the performance of each variable. This way, you’ll know which variables had the most impact during the testing.
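As a rough illustration, here is what a full-factorial multivariate setup might look like in code. The element names and values are hypothetical and not tied to any particular testing tool; the point is that every combination of tested elements becomes its own variant, tracked separately:

```python
# Full-factorial multivariate sketch: each combination of tested elements is a
# variant of its own, so the impact of each element can be isolated later.
from itertools import product

elements = {
    "headline": ["Save time on social media", "Manage every account in one place"],
    "cta_color": ["green", "orange"],
}

# One variant per combination, e.g. ("Save time on social media", "green").
variants = list(product(*elements.values()))

# Track visitors and conversions separately for every combination.
stats = {combo: {"visitors": 0, "conversions": 0} for combo in variants}

def record_visit(combo, converted: bool) -> None:
    stats[combo]["visitors"] += 1
    if converted:
        stats[combo]["conversions"] += 1
```

With per-combination tracking like this, you can later compare how each headline and each CTA color performed, instead of guessing which change moved the needle.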

8. Creating Several Variations

As we briefly discussed above, several variations for website split testing don’t guarantee more valuable insights. If anything, they add to the confusion, slow down results, and risk false positives.

And the domino effect continues since the more variations you have, the more traffic you’ll require to get viable results. And to do this, you’ll have to run the test for a longer duration.

Now, the longer you run the test, the greater your chances of running into cookie deletion.

"Within 2 weeks, you can get a 10% dropout of people deleting cookies, and that can really affect your sample quality." – Ton Wesseling, Founder, Online Dialogue

It’s highly probable that participants will delete their cookies after 3 to 4 weeks, which is usually the duration of long-running tests.

Such a situation will adversely impact your results, since participants who originally saw one variation may end up in a different one.

Another downside of multiple variations is the inflated chance of a false positive. For instance, with the accepted significance level of 0.05, a single variation has a 5% chance of looking significant purely by chance. Run ten variations, and the probability that at least one of them wins by chance climbs to roughly 40%. The more variations you run, the higher the odds of a false positive.

💡 FUN FACT

In 2009, Google famously tested several shades of blue to see which one generated more clicks on search results.

This A/B test is notoriously known as “Google’s 41 shades of blue” since they decided to test 41 shades at once (here’s your A/B testing mistake). At a 95% confidence level, the chance of at least one false positive was a whopping 88%.

If they had tested only ten shades, the chance of a false positive would have dropped to about 40%, and to roughly 14% with just three shades.
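If you want to sanity-check those numbers, the chance of at least one false positive across k independent variations at significance level α is 1 − (1 − α)^k. A quick sketch:

```python
# Probability that at least one of k variations "wins" purely by chance,
# assuming independent comparisons at significance level alpha.
alpha = 0.05
for k in (3, 10, 41):
    p_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} variations -> {p_false_positive:.0%} chance of a false positive")

# Prints roughly 14% for 3 variations, 40% for 10, and 88% for 41.
```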

9. Showing Different Variations to Different Audiences 

Just like comparing apples and oranges doesn’t make sense, comparing results from variations shown to different audiences won’t tell you anything tangible.

To avoid this A/B testing mistake, it would be best to always show the test variations to the same audience for comparable results. 

If you are showing a variation to a specific demographic, say only the traffic from the US, then the other variations should also be visible to the US audience only. 

How to avoid this A/B testing mistake

Have a common denominator for comparison.

10. You Got the Timing Wrong

There are multiple split testing mistakes related to timing, and we will get into each one to ensure you don’t end up making them. 

As you know, timing plays a pivotal role in deciding the quality of the A/B testing results. Here are the errors you should look out for:

  • Concluding your test too early

Running tests for too long isn’t ideal, but stopping them too early is just as harmful.

To reach the industry-standard 95% confidence level, you need enough data, which you can only get by running your A/B test for a considerable time, at least a week (see the sample-size sketch after this list for a rough way to estimate the duration).

  • Comparison of different periods

If you want reliable results, you need results from a comparable period. For example, you cannot compare results from a seasonal boom period with regular days. 

Another example: say you see the most traffic on weekends. Then you should only compare results from other weekends.

Also, if you started running your test on a Tuesday, run it for full weeks and end it on a Tuesday. This way, you get data for every day of the week, including weekends.

  • You are testing different time delays

This A/B testing mistake consists of trying different time delays at once on your test. 

For example, if you show your website visitor one variation after 5 seconds and the other after 15 seconds, the results are not comparable. 

That’s because far more visitors are still on the page after 5 seconds than after 15, so the two variations reach audiences of different sizes and behavior. The results in this scenario are neither accurate nor reliable.
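Coming back to test duration, here is the sample-size sketch mentioned above. It uses a standard two-proportion formula at 95% confidence and 80% power; the baseline conversion rate, expected lift, and daily traffic are hypothetical placeholders you would replace with your own numbers:

```python
# Rough estimate of how many visitors (and days) an A/B test needs before it
# can reliably detect a given lift. Standard-library only.
from statistics import NormalDist
import math

def sample_size_per_variant(p1: float, p2: float, alpha=0.05, power=0.80) -> int:
    """Visitors needed per variant to detect a change from rate p1 to rate p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

baseline = 0.04        # current conversion rate (hypothetical)
expected = 0.05        # rate the variation is expected to reach (hypothetical)
daily_visitors = 1000  # traffic split across both variants (hypothetical)

n = sample_size_per_variant(baseline, expected)
days = math.ceil(2 * n / daily_visitors)
print(f"~{n} visitors per variant, roughly {days} days at {daily_visitors} visitors/day")
```

With these placeholder numbers, the estimate lands at roughly two weeks, which is consistent with the one-to-four-week window discussed in this article.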

11. Testing with Unsuitable Traffic

Along with ensuring you get the timing right, you also need the right traffic for your A/B testing strategy to bear fruit. 

There’s the right traffic, qualified visitors who are interested in your offerings and likely to buy from you, and then there’s the wrong traffic that won’t convert.

How to avoid this A/B testing mistake

Identify the right traffic and focus your A/B tests on it. 

For example, you can segment the results into visitor types to check if the changes you made are actually helpful to your target audience.

12. Altering Parameters in the Middle of Split Testing

Changing parameters, such as traffic allocation, in the middle of a test is a recipe for unsuccessful A/B testing and a big NO.

For example, if a customer entered Variation A, they should see this variation for the entire test duration.

Changing parameters mid-test could cause this customer to see Variation B instead. Due to this A/B testing mistake, the integrity of your data is compromised, since the same customer now shows up in both variations.

Also, to give all your variations a fair chance, you need to distribute the traffic equally to get the most authentic results. Anything else besides this will only adversely affect your results.

For example, say you divide the traffic 70% to the control and 30% to the variation. Ideally, this ratio should stay the same for the entire test. If you change it to, say, 50-50 mid-test, later visitors will be assigned differently from earlier ones and your samples get mixed up.
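One common way to keep assignments consistent, shown here as a generic sketch rather than how any specific tool works, is deterministic bucketing: hash a stable visitor ID together with the experiment name so the same visitor always lands in the same variation, with the traffic split fixed up front:

```python
# Deterministic (sticky) variation assignment: the same visitor always gets the
# same variation for a given experiment, and the split never drifts mid-test.
import hashlib

def assign_variation(visitor_id: str, experiment: str, control_share: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # uniform value in [0, 1)
    return "A" if bucket < control_share else "B"

# Repeat visits by the same visitor always return the same variation.
print(assign_variation("visitor-123", "homepage-cta-test"))
print(assign_variation("visitor-123", "homepage-cta-test"))
```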

Another parameter you shouldn’t change is the variations themselves. Don’t make further changes to variations that are already running; otherwise it becomes impossible to attribute the results to any single change.

13. Calling It a Day Too Early on Your Tests

Going into something with the wrong expectations starts a series of mistakes. If you expect your first tests to knock it out of the park, we hate to burst your bubble, but it’s unlikely to happen.

No big company performing website split testing achieved the results it wanted in one go, and they certainly didn’t give up on it.

No test is a failed test since you always get data to learn something and do better with the next one.

How to avoid this A/B testing problem

If you don’t feel good about your first tests, you can always launch another right when the first test ends. 

So, don’t make this A/B testing mistake and keep testing your page to further improve it. The more you test, the more you’ll learn where you need to improve to boost conversions. 

For example, TruckersReport had to perform six rounds of A/B testing to improve the conversions by 79.3% on their landing page.

14. Ignoring Periodic Radical Testing for Incremental Changes

Incremental changes consist of changing small elements on a page to enhance the customer experience and improve conversions. For example, changing the text of a CTA, the color of the CTA button, tweaking design elements, etc. 

A/B testing is more than just focusing on incremental changes and one of the biggest A/B testing mistakes is forgetting this. 

Such changes may benefit large organizations with insanely high traffic, but changing a button’s color may not significantly contribute to most businesses.

That’s where radical testing comes into play, though only occasionally. You need periodic radical testing when incremental changes don’t produce your desired results.

For example, if you see a massive dip in the A/B test conversion rate and the small changes aren’t working, you may need a website overhaul to boost your conversions.

But it’s not as easy as it may seem. There are a lot of factors to take into account.

Firstly, periodic radical testing requires considerable resources, investment, effort, and commitment. It includes significant changes to your website, so you need to rethink and reconsider every decision to see significant results. 

Secondly, radical testing makes it challenging to pinpoint which change brought positive results since you change many things at once.

Here are a few tips on how to create a hypothesis for your radical testing to bypass this A/B testing mistake:

  • Challenge the current design. Instead of simply testing everything in your existing design, test elements based on a hypothesis and challenge the existing assumptions.
  • Don’t try to think on behalf of your customers. The best way to know what needs improvement is to simply ask them. You can use online surveys to collect real-time, contextual feedback.

15. Running Multiple Tests Simultaneously

Running multiple tests means simultaneously testing different pages (for instance, the home page and checkout page) with multiple versions (Home page variations A and B and Checkout page variations A and B). 

How to avoid this A/B testing problem

Avoid running multiple tests when the majority of their traffic overlaps or when the tested pages interact strongly with each other, for example, a home page test and a checkout page test in the same funnel.

Mistakes Made After A/B Testing

16. Using an Incompetent A/B Testing Tool

Did you know that even a one-second delay in page load time can decrease page views by 11% and conversions by 7%?

Unfortunately, you may become a victim of these A/B testing statistics if your A/B testing tool isn’t efficient. 

Let us explain: A faulty split testing tool can also slow down your website and significantly affect the testing results. So, don’t make this A/B testing mistake and find a reliable tool to do the job.

💡 FUN FACT

During one of his A/B testing experiments, entrepreneur and marketing influencer Neil Patel found that his A/B testing tool reported a significant difference between variations.

However, when he implemented the new page, conversions didn’t change. On closer inspection, the culprit turned out to be a faulty tool.


We understand how overwhelming it can be to dive into an ocean of A/B testing tools to find the right pick for your business. 

For this reason, we meticulously created a detailed list of the 25 best A/B testing tools you can refer to and choose the one that suits your requirements.

17. Not Measuring Results Carefully

It’s great if you managed to avoid all the A/B testing problems above and ran a successful test.

But don’t celebrate just yet. Many A/B testing mistakes are made at this stage while measuring and analyzing the A/B testing results.

Once you have viable data, you need to analyze it the right way to benefit from the whole A/B testing process. 

Tools such as Google Analytics help here. To understand if your A/B testing strategy worked, you can see the changes in conversions, bounce rate, CTA clicks, etc.

If your tool reports only averages, don’t take the data at face value: averages can hide large differences between segments, devices, and traffic sources.

Get a tool that lets you transfer your data to Google Analytics. You can use the Events and Custom Dimensions features to segregate the data and create custom reports for deeper analysis.
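To see why a single average can mislead, here is a minimal sketch with made-up numbers that breaks the same test results down by segment:

```python
# Overall averages vs. segment-level results. All numbers are made up.
results = [
    # (segment, variant, visitors, conversions)
    ("mobile",  "A", 4000, 120), ("mobile",  "B", 4000, 100),
    ("desktop", "A", 1000,  60), ("desktop", "B", 1000,  90),
]

for variant in ("A", "B"):
    rows = [r for r in results if r[1] == variant]
    visitors = sum(r[2] for r in rows)
    conversions = sum(r[3] for r in rows)
    print(f"Variant {variant} overall: {conversions / visitors:.1%}")

for segment, variant, visitors, conversions in results:
    print(f"{segment:>7} / {variant}: {conversions / visitors:.1%}")
```

With these made-up numbers, variant B wins overall (3.8% vs. 3.6%), yet variant A converts better for mobile visitors, a difference the overall average hides completely.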

18. Not Understanding Type I and Type II Errors

You may think you have done everything right, but these A/B testing mistakes silently creep into your testing: Type I and Type II errors. 

So, you need to be on the lookout to detect them early in the process before they skew your results.

Type I error: This error is also known as an Alpha (α) error or a false positive. The test seems to be working, and a variation shows a lift.

Unfortunately, such lifts are only temporary and won’t hold up once you launch the “winning” variation for everyone over a significant period.

For example, you may test some insignificant elements that may show positive results for the time being but won’t get you tangible results.

Type II error: Also known as Beta (β) errors or false negatives, it happens when a particular test seems inconclusive or unsuccessful, with the null hypothesis appearing true.

The null hypothesis is the assumption that there is no real difference between the variations; in a Type II error, your test fails to reject it even though a difference exists.

For example, you conclude that certain elements you tested did not fetch positive results, but in reality, they did.

In reality, the variation impacts the desired goal, but the results fail to show, and the evidence favors the null hypothesis. 

You, therefore, end up (incorrectly) accepting the null hypothesis and rejecting your hypothesis and variation.
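To make the two error types concrete, here is a hedged sketch, with hypothetical counts and only the standard library, of a two-proportion z-test. The significance level α is the Type I risk you accept up front, while running an underpowered test (too few visitors) is where Type II errors typically creep in:

```python
# Two-sided z-test for the difference between two conversion rates.
from statistics import NormalDist
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

alpha = 0.05  # accepted Type I (false positive) risk
z, p = two_proportion_z(conv_a=200, n_a=5000, conv_b=250, n_b=5000)  # hypothetical counts

print(f"z = {z:.2f}, p-value = {p:.3f}")
if p < alpha:
    print("Reject the null hypothesis: the lift looks statistically significant.")
else:
    # A real lift can still come out non-significant if the sample is too small;
    # that is exactly how a Type II error (false negative) happens.
    print("Fail to reject the null hypothesis: not enough evidence of a lift.")
```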

19. Not Considering Small Wins

If you think a 2% or 5% increase in conversions is insignificant, look at it this way: those numbers come from just one test.

Say you saw a 5% increase in conversion rate from a single test; compound several such wins over a year, and the cumulative annual lift is much larger.

These small gains are the reality of A/B testing, which can translate into millions in revenue, however insignificant they seem. So ignoring them is one of the biggest A/B testing mistakes you can make.
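As a back-of-the-envelope illustration (the number of winning tests per year is an assumption), small lifts compound multiplicatively rather than just adding up:

```python
# Compounding effect of small wins: eight 5% lifts in a year multiply together.
lift_per_test = 0.05
winning_tests_per_year = 8

cumulative_lift = (1 + lift_per_test) ** winning_tests_per_year - 1
print(f"Cumulative annual lift: {cumulative_lift:.0%}")  # about 48%, not 8 x 5% = 40%
```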

Major lifts such as 50% are only possible when you either perform periodic radical testing on a poorly designed website or overhaul it.

20. Testing Insignificant Elements

Changing a small element, such as the CTA button color, may not bring small companies or startups any significant results, which makes it a waste of time and resources.

So, you need to analyze what elements are crucial and will bring significant results. 

For instance, instead of changing fonts on your product page, you may benefit more by adding more high-quality content and media. 

In conclusion, choose your elements for testing wisely. Here are some recommendations:

  • The headline of your website; make it catchy and expressive of your brand.
  • Pricing; make it transparent and suitable for your target audience.
  • Call-to-action; make it stand out from other elements in the UI.
  • Product description; explain in detail the features of your product to make it easy for customers to make decisions.
  • Media such as images, videos, etc.

Perform A/B Testing the Right Way

We hope you now have a little more clarity on the dos, don’ts, and A/B testing mistakes you should avoid. Remember, it all starts with the customers and ends with them too. 

So, before basing your hypothesis on assumptions, make sure to listen to the voice of the customers and analyze what changes will enhance their experience and, ultimately, your conversions.

Frequently Asked Questions

What are some common A/B testing problems?

People face many A/B testing problems and make errors during testing, such as testing the wrong page or insignificant elements, running tests at the wrong time, creating too many variations, etc.

What are the most common mistakes in A/B testing?

Some of the most common mistakes in A/B testing are: testing on a website under development instead of the live one, giving up on tests too early, testing multiple elements at once, etc.

How long should you run an A/B test?

The minimum recommended time for running an A/B test is a week, and the maximum is three to four weeks.

What is one A/B testing mistake people make most often?

One A/B testing mistake people often make is testing too many elements at the same time. It makes it harder to analyze the results and understand which element made the most impact.

What is a Type 2 error in A/B testing?

In A/B testing, a Type 2 error (also known as a false negative) occurs when you fail to reject the null hypothesis even though the alternative hypothesis is true.

In other words, it happens when you mistakenly conclude that there is no significant difference between the two variations being tested (A and B) when, in fact, there is one.


About the author

Shivani Dubey is a seasoned writer and editor specializing in Customer Experience Management. She covers customer feedback management, emerging UX and CX trends, transformative strategies, and experience design dos and don'ts. Shivani is passionate about helping businesses unlock insights to improve products, services, and overall customer experience.