If you want to double or triple your conversion rates over the next 9 months, A/B testing is the most reliable way to do it.
Any funnel, any business model, any marketing channel.
You could easily double your customer count within the next year.
All without having to increase your marketing spend or get more traffic.
That’s the magic of A/B testing.
There is one catch though.
A/B testing is easy to screw up. It’s counter-intuitive and goes against many of our business instincts. Even worse, it only takes one bad decision to ruin all the progress from an entire testing program.
Over decades of A/B testing ourselves, we’ve put together a set of rules that our teams always follow. If you follow these rules too, you’ll avoid the bad calls. Then it’s only a matter of time before you double your business.
#1: Allow your test to run for at least 7 days
The first is to allow your test to run for at least seven days.
The reason is that A/B tests can change very quickly. One variation may jump out to an early 350% conversion boost by day two, and even be ruled statistically significant by your A/B testing software, only to cool down to a 15% boost by day five. To account for these swings, make sure to let your test run for at least seven days.
We’ve seen countless tests flip-flop over the years.
They start out as winners and then end up as losers after a few days.
The first week is especially volatile. Try not to even look at the results during that week.
Another reason to test for a longer period of time is that website traffic varies from day to day. Saturday traffic, for example, can be very different from Monday traffic. Based on that, you want to make sure to get results from every day of the week before calling a winner.
You should also keep in mind that even seven days is really a short time period for an A/B test, and you may be better off letting it run for several weeks. You’re looking for a winner that will get long-term results and don’t want to pick a winning variation too soon only to find out it doesn’t actually boost conversions or revenue.
It’s also a good idea to allow tests to run until you have at least 100 total conversions. More is even better, and fewer can sometimes work, but running until there are at least 100 conversions gives you more confidence that the outcome is accurate and will deliver the results you’re looking for.
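If you want a rough sense of how long that will take on your site, a quick back-of-the-envelope estimate is enough. Here's a minimal sketch in Python; the daily traffic and baseline conversion rate are made-up numbers, so swap in your own:

```python
# Back-of-the-envelope estimate of how long a test must run to collect
# at least 100 total conversions. Every number here is an assumption --
# swap in your own traffic and baseline conversion rate.

daily_visitors = 500          # visitors entering the test per day (assumption)
baseline_conversion = 0.02    # 2% baseline conversion rate (assumption)
min_total_conversions = 100   # the floor from rule #1

conversions_per_day = daily_visitors * baseline_conversion   # ~10 per day here
days_needed = min_total_conversions / conversions_per_day

print(f"Roughly {days_needed:.0f} days to reach {min_total_conversions} conversions")
# With these numbers: 100 / 10 = 10 days, comfortably past the 7-day minimum.
```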
#2: Run tests until you have a 95% confidence level
The next rule to follow is to run your test until there’s at least a 95% confidence level for the winning variation.
The reasons for this rule are the same as those for rule number one. First and foremost, you’re looking to pick a winning variation that will give you better results for the long term. This means you want to make sure the results are statistically significant and that you don’t pick a winner prematurely.
Another reason is that test results can change dramatically over the course of an A/B testing period. I’ve personally seen a variation jump out to a 105% boost in conversions after a day and a half only to lose when the test is called 10 days later. This makes it even more important to wait until your A/B testing software says the results are statistically significant.
To get a better idea of how long this will take for your test, use a simple A/B test calculator, such as the one from Neil Patel.
So keep your test running until you hit 95% statistical significance on the calculator.
You’ll also want to keep in mind that the smaller the conversion boost, the longer the test will need to run, and vice versa. As such, if the improvement is only 5%, then you’ll need to run the test much longer than if it’s a 50% improvement.
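If you'd rather sanity-check the math yourself, here's a minimal sketch of a two-proportion z-test in Python, one common way significance is computed for A/B tests. The visitor and conversion counts are invented for illustration:

```python
# Minimal sketch of a two-proportion z-test for an A/B result.
# The counts below are invented example data, not real test results.
from statistics import NormalDist

def confidence_level(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that variations A and B really differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return 1 - p_value

conf = confidence_level(conv_a=120, n_a=4000, conv_b=160, n_b=4000)
print(f"Confidence: {conf:.1%}")   # keep the test running until this clears 95%
```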
#3: Big changes lead to bigger results
Another rule of thumb to keep in mind is that bigger changes have a greater chance of leading to bigger results.
If you change the button copy on your homepage, for example, you might only improve conversions by 5%.
For most testing programs, chasing small wins like these isn’t worth the time.
Making drastic changes, on the other hand, greatly increases the odds that you’ll find a 50% win.
Say that you have a SaaS business but don’t currently offer a free trial. You’ve gotten support requests asking for a trial but have never made the jump.
So you go to work, set up a way for people to sign up for a free trial, and then run a test to measure the results. After one month of testing, you find that the free trial improves conversion to paid subscriptions by 74%.
We’ve had a number of test results like this over the years. Yes, they take a lot of work to build, but they also have a real chance to catapult your business.
Bigger changes like this have a greater likelihood of getting big conversion wins.
#4: Start by testing your headlines
The headline is the most important element on every page.
Get it right and new visitors will keep paying attention. They’ll be open to what you have to say.
Get it wrong? Everyone bounces instantly.
Simply by changing your headline, a single sentence, you can increase conversions by 30%.
When running A/B tests for the first time, I go straight to the headline on the homepage. I almost always see a 30% boost in my first test by trying 3-5 different headline variations. This is the one silver bullet in A/B testing.
#5: A/B testing doesn’t mean just making one change at a time
This is probably the biggest misunderstanding I see people have when it comes to A/B testing. They think you need to measure the difference every little change makes, which means testing one small change at a time. This couldn’t be further from the truth.
The reason is that you’ll never be able to get anywhere if you just make one small change at a time. Yes, you won’t know as well whether factor A, B, or C impacts the results, but you’ll never be able to test big changes that get big results if you don’t test more than one change at a time.
One way to fix this is to run an A/B/n test. Instead of just running variation A against variation B, you can also add in variations C and D to see how they impact results. You might test just a headline change in variation B, a headline and sub-title change in variation C, and a different headline and sub-title combination in variation D. You can have as many variations as you’d like; just keep in mind that each new variation requires your test to run longer before you reach statistically significant results, as the sketch below shows.
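Here's a rough sketch of why each extra variation stretches the test. It assumes every variation needs about the same number of visitors; the sample-size and traffic figures are placeholders:

```python
# Why each extra variation stretches the test: every arm needs roughly the
# same number of visitors, so total traffic (and duration at a fixed traffic
# level) grows with the variation count. Both figures below are assumptions.

visitors_needed_per_variation = 5000   # assumed sample size per variation
daily_visitors = 2000                  # assumed traffic entering the test per day

for variations in (2, 3, 4):
    total_needed = visitors_needed_per_variation * variations
    days = total_needed / daily_visitors
    print(f"{variations} variations -> {total_needed} visitors, ~{days:.1f} days")
# 2 variations -> 10000 visitors, ~5.0 days
# 3 variations -> 15000 visitors, ~7.5 days
# 4 variations -> 20000 visitors, ~10.0 days
```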
Multivariate tests are another way to test more than one change at once, but you’ll want to make sure you have enough experience with A/B testing before tackling a full-fledged multivariate test. You’ll also need plenty of traffic, because multivariate tests require a lot of it to select a winner.
#6: Macro conversions are more important than micro conversions
In the end, you always want to be measuring the results that are the most significant for your business, i.e., macro conversions.
Let’s say, for example, that you’re attempting to further improve conversions at the SaaS company mentioned above. The sign-up involves three critical steps: 1) Clicking “Start Free Trial” on the homepage, 2) Entering information on the sign-up page, 3) Eventually signing up for a paid account.
Which of these do you think is the most important? Obviously, it’s getting customers to sign up for a paid account. This means you don’t want to just test whether the headline and homepage copy convince people to click the Free Trial button. You also want to know whether they get more people to sign up for a free trial and, ultimately, for a paid account.
Based on this, you want to measure the impact on both free trial and paid account signups whenever possible. This may seem counter-intuitive because you might think, “If more people click through to the second step, doesn’t that mean more people will sign up for a free trial, and if more people sign up for a free trial, doesn’t that mean more people will sign up for a paid account?”
The answer is no. I’ve seen multiple tests where one variation increased conversions from step one to step two, but a different variation did better at getting people to step three, the final step in the conversion funnel.
This seems counter-intuitive, but you want to make sure to measure macro-conversions for test results because the winning variation from step one to step two won’t always be the winning variation for the final leg of your funnel.
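As a toy illustration, here's a small Python sketch with invented funnel numbers where one variation wins the micro conversion (clicks) while the other wins the macro conversion (paid signups):

```python
# Toy funnel numbers (all invented) where variation B wins the micro
# conversion (trial clicks) but variation A wins the macro conversion
# (paid signups).

funnel = {
    "A": {"visitors": 10_000, "trial_clicks": 800, "paid": 60},
    "B": {"visitors": 10_000, "trial_clicks": 1_100, "paid": 45},
}

for name, f in funnel.items():
    click_rate = f["trial_clicks"] / f["visitors"]
    paid_rate = f["paid"] / f["visitors"]
    print(f"{name}: click-through {click_rate:.1%}, paid signups {paid_rate:.2%}")

# A: click-through 8.0%, paid signups 0.60%
# B: click-through 11.0%, paid signups 0.45%
# B looks like the winner at step one, but A is the one that grows revenue.
```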
#7: Testing eliminates assumptions (and disagreements)
One of the best things about A/B testing is that it eliminates assumptions and disagreements. You may assume that headline A will improve conversions when, in fact, headline B gets better results. In the same way, a colleague may hate headline B and ask why it would even be tested, only to find out later that it gets better results.
The lesson here is to always be testing. By doing so, you’ll be forced to test your assumptions and to make sure each change improves conversions.
You might be certain that a new pricing page will boost conversions, only to find out it doesn’t, or you might argue for three weeks about the best headline variation with your co-workers. A/B testing is the best way to solve all of these problems and to make sure you consistently make your site better.
The Value of A/B Testing
In the end, A/B testing is one of the most valuable places to spend your time.
Acquiring traffic is so, so hard. And expensive.
Why not get more customers from the traffic you already have?
As long as you follow the A/B testing rules above, you can absolutely double your conversions in the coming months.
Even better, Crazy Egg has an entire A/B testing tool. It’s so easy to use that you can get your first test running today. It’s designed for small teams and folks who are new to testing.
Start by testing the headline on your homepage. You could have a 30% boost to your new customers by the end of the week.