If you’re reading this post, you already know how CRO (“conversion rate optimization”) can help you increase revenues and create better customer experiences.
The problem now is: how do you decide what to test?
Successful testing is almost always strategic in nature. You can’t just fire up your testing tool, plug in a couple of page variations and expect to meet your business goals – at least not consistently.
Instead, you need to plan out a long-term strategy, prioritize tests based on business goals, and develop an execution timeline.
This, in essence, is the testing roadmap. And when done right, it will help you save time and plan your resources better.
In this post, I’m going to show you why you need a testing roadmap and how to create one for your business.
Why You Need a CRO Testing Roadmap
As I like to tell clients, “hope is not a marketing strategy”.
Yet, I’m amazed at the number of businesses that run a few haphazard tests and hope to see results.
Here’s the bald truth: successful testing requires long-term planning.
Consider an example: suppose you want to increase your email opt-ins.
As far as conversion goals go, this one is fairly straightforward. Yet, to reach this goal, you’ll have to test a number of variables:
- Opt-in form placement
- Opt-in form design
- CTA (“call to action”) color and copy
- Opt-in form copy
- Opt-in bribe (or lead magnet)
Without a testing roadmap, you’d have to test each and every one of these variables without any idea how they might affect the eventual outcome.
Testing this way is reactionary and entirely tactical. At best, it can meet short-term goals (say, increasing the click-through rate, or CTR, to a landing page), but it can’t meet long-term goals (like increasing monthly recurring revenue, or MRR). You have no insight into the next stage in the testing process, nor do you know which tests to prioritize.
Broadly speaking, there are 5 reasons why you should use a testing roadmap:
1. Prioritize tests
Consider a simple product page.
On this page, you can run several tests, each of which might (or might not) impact conversion rates:
- Adding UGC (“user generated content”) such as customer product photos
- Adding a “hover to zoom” feature
- Adding a “Customer Q&A” section, like on Amazon
- Adding a size/fit rating, like on Amazon
- Making the CTA even more prominent by increasing its size and adding an icon
Without a testing roadmap, you have no real way to prioritize these tests. You’ll have to go by “feel”, which is no way to approach something as analytical as testing.
2. Do better resource planning
When should you hire a copywriter to jazz up your landing page copy?
Should you get a full-time designer or find a freelancer to re-do your product page?
Can you borrow Steve from IT to create a new landing page, or should you just buy a template?
These are questions you’ll have to answer throughout the testing process.
A roadmap helps you know exactly when to hire resources for maximum impact.
For instance, if your roadmap shows that you won’t be testing copy for the next 3 months, you can hold off on hiring a copywriter.
This can save you from over-hiring and under-hiring, improving both your results and your bottom line.
3. Focus on business goals, not tactical goals
“Increasing CTR by 21%” might sound nice on paper, but in real-world terms, it doesn’t translate into anything tangible. You might double CTR and still end up making less money than before.
This is because “increasing CTR” is a tactical goal, not a strategic business goal. It improves a specific, isolated metric (CTR) without taking long-term goals into consideration.
A testing roadmap puts these tactical goals in perspective. You know exactly how short-term, tactical goals tie into long-term strategic goals.
4. Run more complex tests
Simple tests – changing a button color or tweaking a headline – are fairly easy to pull off. You can likely make the changes yourself with a few lines of code.
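For example, a simple client-side variation might look like the sketch below. The selector, color, copy, and the coin-flip bucketing are all hypothetical stand-ins; in practice, your testing tool handles variant assignment for you.

```typescript
// A minimal sketch of a "simple test" style change (hypothetical selector, color, and copy).
// The coin flip stands in for your testing tool's bucketing logic.
function getAssignedVariant(): "A" | "B" {
  return Math.random() < 0.5 ? "A" : "B";
}

const variant = getAssignedVariant();

if (variant === "B") {
  const cta = document.querySelector<HTMLButtonElement>("#primary-cta");
  if (cta) {
    cta.style.backgroundColor = "#e8590c"; // variation: higher-contrast button color
    cta.textContent = "Start My Free Trial"; // variation: alternative CTA copy
  }
}
```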
However, more complex tests – changing checkout page design to focus on free shipping or tweaking pricing table price points – require a number of resources working together to run the test.
In larger businesses, you also need multiple people to sign off on a complex test before running it.
With a testing roadmap, you’ll know exactly who you’ll have to bring on board to greenlight a test.
For instance, if you need the sales team to sign off on a price increase test, you can bring them on board at the appropriate time.
5. Align the rest of the business with your goals
CRO often requires large scale changes to your business, website, and sometimes, even your business model (such as testing a one-off payment plan vs. a subscription model).
A testing roadmap makes sure that every part of the business has visibility into these changes.
At the same time, it also helps other departments plan their resources around your changes.
If, for instance, you plan to promote your customer helpline more aggressively in a test, your customer success department will have ample time to ramp up its resources when needed.
This is particularly useful in larger organizations.
How to Create a CRO Testing Roadmap
The first thing you should know is that there is no fixed “recipe” for CRO success; the actual roadmap will vary from business to business and industry to industry.
The second thing you need to know is that CRO is a still-evolving field. There are multiple frameworks for developing a testing roadmap and consultants often use their own custom approaches.
That said, there are a few steps you can follow to develop a CRO testing roadmap of your own:
1. Clarify your business goal
Start by answering a rather simple question: what long-term business goal do you hope to reach with testing?
More often than not, this will be related to a metric that directly impacts your business. For an e-commerce company, this might be “reduce shopping cart abandonment”, while for a SaaS startup, it might be “reduce churn rate”.
It’s important that your business goal is a) measurable and b) measured in a way everyone agrees on.
For instance, if your goal is something vague – such as “increase brand engagement” – you must have some agreement on what constitutes brand engagement (such as the number of social media shares).
This business goal would be the guiding principle behind every test you run. Don’t run any test that doesn’t help meet your goal in some way.
2. List strategic goals to meet your business goal
Drill down further: what are some strategies you can adopt to reach your business goal?
For instance, if your e-commerce business goal is to “reduce shopping cart abandonment rate”, you might get there by:
- Improving cart recovery email conversion rate
- Increasing checkout completion rate
- Increasing product price (while maintaining existing conversion rates)
Think of these as different paths that lead to the same destination. They essentially answer a simple question: “how can you meet your business goals?”
Make a list of all such strategic goals. You can use Excel or a mind-mapping tool to do this.
3. List tactical goals to reach strategic goals
Your tactical goals are the actions you can take to reach your strategic goals.
For example, suppose you run an e-commerce store. One of your strategic goals is to “remove distractions from the checkout page”.
To get to this goal, you might take several different actions, such as:
- Reducing outgoing links
- Showing checkout progress
- Limiting form size
Thus, you might map out a path like: business goal → strategic goal → tactical actions.
Each action in this path represents a tactical goal.
When you’re in the planning stage, it’s a good idea to follow a hierarchy of complexity when listing these tactical goals. Start by listing simple goals first before moving onto more complex, broader goals.
These tactical goals will be the foundation of your testing idea brainstorming session.
4. Record existing performance and establish a baseline
Before you can start brainstorming testing ideas, you need to establish a baseline for your performance.
To do this, start by running a CRO audit. Sherice Jacob wrote a great article on conversion audits earlier.
This data should tell you the key metrics for each page. At a minimum, you should have hard numbers for:
- Bounce rate
- Primary CTA click-through rate (CTR)
- Exit rate
You should have this data for different devices (desktop, mobile, tablet, etc.).
Make a note of it in a separate spreadsheet, with one row per page and device.
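If it helps to picture the structure, here’s a minimal sketch of what such a baseline record could look like in code. The page path and every number below are made-up placeholders, not benchmarks.

```typescript
// A sketch of baseline records, one per page and device (hypothetical values).
interface Baseline {
  page: string;
  device: "desktop" | "mobile" | "tablet";
  bounceRate: number;    // %
  primaryCtaCtr: number; // %
  exitRate: number;      // %
}

const baselines: Baseline[] = [
  { page: "/products/example-widget", device: "desktop", bounceRate: 42, primaryCtaCtr: 3.1, exitRate: 28 },
  { page: "/products/example-widget", device: "mobile", bounceRate: 57, primaryCtaCtr: 1.8, exitRate: 35 },
];

console.table(baselines);
```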
5. Brainstorm testing ideas
Your next step is to brainstorm ideas to meet each tactical goal.
These ideas should be specific and focused on a single element. Think “change CTA copy on product page”, not “change product page design”.
List them against each tactical goal. A mind-mapping tool is particularly useful here.
For example, if your tactical goal is to “use social proof on product page”, you might brainstorm ideas like:
- Add testimonials from customers on product page
- Show total number of social media fans
- Show number of purchases in real-time
You can list out ideas like these against each tactical goal in your mind map or spreadsheet.
6. Prioritize tests
The next step is to figure out which tests to run first.
There are two approaches:
- Resource-first approach: With this approach, you take stock of your existing resources and figure out which tests you can run with them. For instance, if you have an in-house copywriter, you can prioritize testing headlines and sales copy before testing other elements.
- Results-first approach: With this approach, you prioritize tests based on their expected results (based on similar case studies and historical data) and go about acquiring resources accordingly.
Regardless of which approach you choose, you will need two scores for each test:
- Ease of implementation
This is simply a measure of how easy the test would be to implement, based on:
- Your existing resources
- Your ability to acquire future resources
- The technical complexity of the test
Give each test a numerical value on a scale of 1-10 with 10 meaning “very easy” and 1 being “very hard”.
For instance, if you have an in-house copywriter, testing a headline would be a “10”. If you had to hire a freelance copywriter, the score might drop to a “6”.
More complex tests, such as adding a highly customized pop-up, would naturally get lower scores.
- Impact
This is a measure of the expected impact of the test on your target metric based on:
- Historical data: Whether similar changes to your site in the past have yielded a net positive impact on the target metric
- Case studies from similar businesses in the same industry.
- Test target: If you’re testing to improve CTR for a CTA, changing CTA copy would have a more immediate impact than adding a disclaimer to the page footer.
Score the impact on a scale of 1-10, with 10 being “very high impact” and 1 being “low impact”.
For instance, if historical data and case studies show that adding social proof to an e-commerce product page improves results considerably, you’d classify it as an “8”.
Ideally, you want to target tests that are easy to implement and have a high net impact on your success.
Once you have this data, you can use a color-coded scale (Red = high priority, Green = low priority) to denote priority.
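If you’d rather keep this in code than a spreadsheet, here’s a rough sketch of one way to combine the two scores. The “multiply and bucket” rule, the threshold, and the example ideas are my own illustrative assumptions, not a formula this framework prescribes.

```typescript
// Rough prioritization sketch: ease and impact are the 1-10 scores described above.
interface TestIdea {
  name: string;
  ease: number;   // 10 = very easy, 1 = very hard
  impact: number; // 10 = very high impact, 1 = low impact
}

function priorityLabel(idea: TestIdea): "Red (high priority)" | "Green (low priority)" {
  const score = idea.ease * idea.impact; // 1-100; assumed convention, not a standard
  return score >= 50 ? "Red (high priority)" : "Green (low priority)";
}

const ideas: TestIdea[] = [
  { name: "Product page CTA copy", ease: 10, impact: 7 },
  { name: "Custom exit-intent pop-up", ease: 3, impact: 6 },
];

ideas
  .slice()
  .sort((a, b) => b.ease * b.impact - a.ease * a.impact)
  .forEach((idea) => console.log(idea.name, "->", priorityLabel(idea)));
```

Whatever convention you pick, keep it consistent so scores stay comparable across tests.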
7. Develop your testing hypothesis
A hypothesis, as the scientific-minded among you would know, is an assumption made from evidence that serves as the starting point for any experiment or examination.
Every successful A/B test has a hypothesis behind it. This hypothesis has three parts:
- Your assumption: What you assume to be true based on past experience and data. For instance, you might assume that CTR for your button is low because it doesn’t stand out on the page.
Your assumption must be rooted in data. If you’re assuming that you can’t get people to read a page because your target audience is busy, you must have some insight into your target audience or customer persona (say, busy executives with no time to read).
- Your experiment: The changes you’re making to test your assumption. This experiment should be focused on a single variable (such as button size or button color). It should also be logically consistent with your assumption.
You’ll often have several experiments for testing a single assumption. For instance, if your assumption says that you can’t get higher CTRs because your button doesn’t stand out, you can test it by:
- Changing button size
- Changing button color to a more contrasting color
- Changing button style
For each hypothesis, choose only one test.
- Your expected result: The impact you expect the change to have, based on past experience and existing data.
This need not be exact but it should indicate the expected trend (positive or negative). It should also state the exact metric the experiment will affect (which you can use to measure the impact of the test).
Think something like “positive impact on button CTR”, not “2.5% lift in conversion rates”.
I like to follow a fill-in-the-blanks format for developing my hypothesis: “____ (experiment) will have ____ (result) because ____ (assumption)”.
For instance, you might have a hypothesis like this:
“Changing CTA color from black to orange (experiment) will have a positive impact on CTR (result) because we believe elements that draw attention get more clicks and orange stands out more on this page (assumption)”
Each one of your testing ideas should have a hypothesis associated with it. This will help you greatly in developing your tests and drawing conclusions from them.
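If you want to keep hypotheses consistent across a team, you could even encode the fill-in-the-blanks format in a small helper. The interface and field names below are hypothetical, just one way to capture the three parts.

```typescript
// A hypothetical helper that renders the three-part hypothesis format above.
interface Hypothesis {
  experiment: string;     // the change you're making
  expectedResult: string; // the directional impact and the metric it affects
  assumption: string;     // the data-backed belief behind the test
}

function formatHypothesis(h: Hypothesis): string {
  return `${h.experiment} (experiment) will have ${h.expectedResult} (result) because ${h.assumption} (assumption).`;
}

console.log(
  formatHypothesis({
    experiment: "Changing CTA color from black to orange",
    expectedResult: "a positive impact on CTR",
    assumption: "elements that draw attention get more clicks and orange stands out more on this page",
  })
);
```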
8. Bring it all together
If you’ve followed the steps so far, you should now have:
- A clear business goal
- A list of strategic goals to meet it
- Tactical goals for each strategic goal
- Prioritized testing ideas for each tactical goal
It’s now time to bring them all together into a testing roadmap.
At the very least, you should note the following:
- Experiment name: Use an exact but descriptive name. Ideally, this should include a) the page you’re testing on, and b) what you’re testing. Say, something like “Product Page CTA Color”.
- Experiment description: Describe the test in detail. Note what you’re changing on the page and how versions A and B differ.
- Hypothesis: Describe your test hypothesis. If you’re testing a new CTA color because the existing one doesn’t stand out, say so in this section. Look below for an example.
- Collateral: List any marketing collateral here, such as version A/B designs or copy.
- Target page: The page you’re running the test on.
- Target device: The device(s) you’re targeting with the test.
- Ease-of-implementation: Use a numerical value to note how easy or hard the test is to implement. Use the scores from the priority exercise in step #6.
- Impact: Use a numerical value to note the expected net impact of the test.
- Current stage: Where you are in the testing process (not tested, in progress, or tested).
Besides these, you can also include your baseline and target for the metric you’re testing (such as CTR or bounce rate).
Note all of this in a separate spreadsheet, with one row per experiment.
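As a rough illustration, here’s what one roadmap row could look like as a typed record. The field names mirror the list above, and every example value is hypothetical.

```typescript
// A sketch of one roadmap row as a typed record (all example values are hypothetical).
interface RoadmapEntry {
  experimentName: string;       // e.g. "Product Page CTA Color"
  description: string;          // how versions A and B differ
  hypothesis: string;
  collateral: string[];         // designs, copy docs, etc.
  targetPage: string;
  targetDevice: "desktop" | "mobile" | "tablet";
  easeOfImplementation: number; // 1-10, 10 = very easy
  impact: number;               // 1-10, 10 = very high impact
  currentStage: "not tested" | "in progress" | "tested";
  baselineMetric?: number;      // e.g. current CTR (%)
  targetMetric?: number;        // e.g. target CTR (%)
}

const exampleRow: RoadmapEntry = {
  experimentName: "Product Page CTA Color",
  description: "Version A keeps the current black CTA; version B uses orange.",
  hypothesis:
    "Changing CTA color from black to orange will have a positive impact on CTR because orange stands out more on this page.",
  collateral: ["variant-b-mockup.png"],
  targetPage: "/products/example-widget",
  targetDevice: "desktop",
  easeOfImplementation: 9,
  impact: 6,
  currentStage: "not tested",
  baselineMetric: 2.1,
  targetMetric: 2.6,
};

console.log(exampleRow);
```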
It’s important to keep this document fairly open-ended and flexible. You want to be able to use incoming data to modify future tests. You also want to be able to use insight from other departments, freelancers or consultants to modify the hypothesis of existing and future tests.
9. Plan your tests
Once you have a spreadsheet filled with testing ideas, their ease of implementation and expected impact, it’s time to start running some tests.
Pick tests that are easy to implement but also have a high impact on your target metrics.
Make sure that you update the testing roadmap with the results as they come in.
What to do Next
I know this process looks intimidatingly complex, but it is also crucial for long-term, strategic gains from your CRO campaign.
Your next steps are to:
- Run a CRO audit, if you haven’t done so already.
- List your strategic goals and tactical goals to reach them.
- Prioritize tests based on their ease of implementation and net impact.
- Test!
About the Author:
John Stevens is a specialist CRO and marketing strategy consultant. He runs HostingFacts, where he helps businesses make better marketing decisions. You can find his tweets @hostingfactsj.