Data-driven design to maximize conversion rates

What percentage of the design and product decisions in your company are made intuitively or based on experience, without representative data behind them?

Why do website redesigns rarely achieve their goals, leaving you with an expensive new website that generates less revenue and lower conversion rates?

And how can even small data-driven design changes lead to a total conversion rate uplift of up to 86%?

While decisions made on pure subjective opinion can work for an MVP or an early-stage product, that’s not the case for website and product teams that have worked on conversion rate optimization before and know that 90-99% of all UX changes don’t increase bottom-line revenue, while carrying a significant risk of dropping conversion rates.

So it’s crucial to understand why the intuitive redesign approach is flawed, what data-driven design is, and how it can become your primary growth strategy.

What is data-driven design?

Data-driven design is the process of making design decisions based on a combination of quantitative and qualitative analytics, in contrast to the more intuitive approach most designers follow. A data-driven design process also leverages A/B testing to measure the impact of design decisions on the user behavior metrics that matter, such as conversion rates and retention.

Why (even data-driven) redesigns fail

1. Design solutions aren’t validated

In the redesign method, data is used to identify problems in the user experience. Once identified, a design solution is chosen and implemented without real-world validation to determine whether it has the desired result. This is a problem because expert opinion can only take you so far, and best practices only work in certain circumstances.

Even with the experience of more than a thousand A/B tests, we at Conversionrate.store achieve a conversion improvement in only 29-38% of experiments that are based on representative data about the reasons behind the conversion barrier. And if the design decisions were made without solid data, the success rate drops to 10-23%. In other words, 77-90% of the time, design solutions that are not data-driven have no impact on conversion rates, or a negative one.

In a redesign scenario, you are making hundreds of changes simultaneously. If you are lucky, 20% of them will increase conversion rates, but you’ve also implemented the 80% of ideas that have no impact or decrease conversions. According to our data, implementing designs without A/B testing can lead to a drop in revenue of up to 42%. The main reason is that it’s hard to identify the 80% of changes with a neutral or negative impact, because conversion rates fluctuate and are affected by many other factors.

Using data to inform and validate design changes is therefore crucial: it tells you which ones to implement and which to avoid.

While some redesigns use small-scale user testing on prototypes before launch, it’s unlikely all changes will be validated. Nothing beats a statistically valid A/B test in an unbiased real-life scenario.

2. The slow redesign approach leaves revenue on the table

Let’s say your team can turn around a website redesign within six months (and I’m being very generous here); all the opportunities for conversion uplifts will have been identified at the beginning of the redesign. Still, the potential solutions aren’t implemented until your “go live” date. That’s six months where you could have increased revenue but didn’t.

Instead, what’s needed is a continuous optimization approach where you realize incremental revenue as soon as it’s validated.

3. Consumers and competitors change faster than a redesign

While I suggested a redesign could be completed in six months, the reality for most large organizations is years. Your customers’ wants, needs, and expectations will have changed during this time. New challenger competitors may have entered the market offering a radically new approach that makes your redesign outdated before it’s even live.

Consider the impact Covid-19 had on the information shoppers wanted about packaging hygiene or changes in the audience demographics using your website. While these are prominent examples, consumers’ expectations change frequently based on all digital experiences they encounter (from your competitors or otherwise).

The above illustrates why the redesign approach is inadequate for driving conversion improvements, revenue, and staying competitive. But the cycle of redesign misery doesn’t have to continue. After all, when was the last time you saw global leaders like Amazon redesign their site? Hint: the answer is never. Instead, these digital-first giants employ a data-driven design approach called conversion rate optimization.

A truly data-driven design approach

Conversion rate optimization (CRO) consists of an ongoing cycle of research, testing, and implementation. The main distinctions from a traditional redesign approach are that designs are informed and validated using data, and that implementation is fast (weeks, not months or years).

Below is a brief overview of the steps involved in a data-driven CRO approach and how each stage helps to deliver better results.

1. Conversion research

Without research, design hypotheses increase conversions 12-18% of the time—this increases to 30-37% of the time when data is used to identify the problem areas in your experience*.

It’s necessary to conduct qualitative (e.g., heuristic analysis, user testing) and quantitative (e.g., analytics analysis, heatmaps) research to identify the ‘what,’ ‘where,’ and ‘why’ behind your conversion blockers. For real-life examples of conversion research, check our 14 conversion rate optimization case studies that resulted in 9-86% conversion rate uplifts.

2. Hypothesis development

Identifying conversion blockers is only half the task. Once you know the problem, you need to develop an alternative design that solves it and improves conversion rates. Understanding UX design principles and user psychology helps here, as does user testing to narrow down the design options for A/B testing.

3. A/B testing

With the CRO methodology, design changes are split-tested against your existing design, so you can say with a degree of certainty whether the change improves conversion rates.

Not only this, but you can also monitor the impact specific design changes have on other key metrics. For example, a test promoting free shipping might do great things for your conversion rate, but you need to observe other key metrics, such as return rates; otherwise, you might implement something with a negative ROI.
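
To make that “degree of certainty” concrete, here is a minimal sketch of the kind of significance check involved, using a two-proportion z-test on made-up visitor and conversion counts. The numbers, the statsmodels helper, and the 0.05 threshold are illustrative assumptions rather than a description of our production tooling.

```python
# Minimal sketch: compare the conversion rates of control (A) and variation (B)
# with a two-proportion z-test. All counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

visitors = {"A": 48_250, "B": 48_410}    # users bucketed into each version
conversions = {"A": 1_930, "B": 2_135}   # primary metric: completed purchases

z_stat, p_value = proportions_ztest(
    count=[conversions["B"], conversions["A"]],
    nobs=[visitors["B"], visitors["A"]],
)

cr_a = conversions["A"] / visitors["A"]
cr_b = conversions["B"] / visitors["B"]
print(f"CR A: {cr_a:.2%}  CR B: {cr_b:.2%}  relative uplift: {cr_b / cr_a - 1:.1%}")
print(f"p-value: {p_value:.4f}")  # e.g. only call a winner below 0.05

# Repeat the same check for guardrail metrics (return rate, refunds, etc.)
# so a conversion-rate 'winner' doesn't hide a negative ROI.
```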

4. Iterate & implement

Once the design variation is validated as a winner, it can either be iterated on to eke out further improvements or implemented straight away so you can reap the rewards without waiting.

5. Repeat

Optimizing your CRO process to increase the speed and volume of tests helps you deliver greater ROI. You’ll quickly see that CRO is a growth lever for the business rather than a cost center.

How to get started with CRO?

To properly implement a data-driven CRO process, you need a multidisciplinary team that covers data engineering, data analysis, statistics, UX research, design, copy, development, and QA. And if your goal is to increase conversion rates or another specific metric, this team should be dedicated full-time to optimizing that metric; otherwise, the process is slower and less effective.

Such an in-house full-time team can cost $74,532 per month in the US, according to average salaries from Glassdoor data for 2022. And it can take at least 6-9 months to hire and train this team before it starts to bring results. That’s why it’s often more efficient to hire a CRO agency, which can be both more experienced and far more cost-effective, with no ramp-up time.

Our performance-based CRO agency can provide a dedicated team that makes data-driven design decisions and develops A/B tests to measure their impact on conversion rates. And you pay only for actual conversion rate uplift results.

If you’d like to understand what Conversionrate.store can achieve for you, schedule a conversion rate optimization consultation where we can share our data-driven design process and prepare an actionable plan on how to achieve your goals.

Glib Hodorovskiy, co-founder, Conversionrate.store

Conversionrate.store is a performance-based funnel conversion rate optimization agency that has worked with 3 NASDAQ-listed clients (Microsoft, GAIA, CarID).

Schedule a consultation.

26 typical A/B testing mistakes that can lead to up to 42% annual revenue loss

Booking.com’s loss from unsuccessful experiments is 2% of annual revenue. Let’s consider this as a benchmark. What revenue loss might a less experienced team have?

When you think about very costly experimentation mistakes, the first things that come to mind are things like “Buy” button bugs. But those aren’t the most dangerous, because they are obvious, easily recognized, and short-lived.

The worst story I’ve heard was a 42% annual revenue drop caused by deploying a feature based on false-positive experiment data.

We at Conversionrate.store have developed ~7,200 A/B tests for 231 clients, including Microsoft…, and 72% of our first 100 experiments contained mistakes we only recognized 8 months after we started A/B testing.

Here are 4 common problems related to experimentation that can dramatically decrease revenue or slow down growth:

  1. Implementation of false-positive results
  2. No A/B testing at all for critical changes
  3. Direct revenue loss from underperforming variations
  4. Not maximizing the volume and velocity of experiments

All those issues are interconnected, so let’s go through 26 typical A/B testing mistakes that we see time and time again:

  1. Hypothesis is not focused on the main bottleneck
  2. Guessing reasons behind the main bottleneck
  3. Guessing how to fix the cause of the drop-off
  4. Holding the wrong metric like conversion-to-purchase as a goal
  5. Data tracking not at least 90-97% accurate
  6. No event mapping for all elements on A and B
  7. Testing more than one hypothesis per experiment
  8. Stopping the experiment only based on statistical significance
  9. No MDE and pre-test sample size planning
  10. No QA of alternative versions after experiment is launched and no monitoring of experiment session recordings
  11. No regression QA of the control version during an experiment
  12. No QA of experiment data tracking
  13. Not eliminating the “novelty effect”
  14. Implementation of false positive results
  15. No anomaly detection
  16. Outliers not cleaned up
  17. No preliminary A/A or A/A/B tests
  18. No analytics or tracking of long-term impact of implemented winning versions
  19. No in-depth post-test research and documentation of results
  20. Targeting irrelevant traffic segments together in one experiment
  21. Not checking for sample-ratio mismatch (SRM) for 100% of experiment traffic or all meaningful segments you want to compare (a sketch of this check follows the list)
  22. Experiment data set not visualized
  23. Deploying winning versions to a different audience than in the experiment.
  24. Low experimentation velocity due to lack of in-house resources or absence of 100% dedicated experimentation teams
  25. Not leveraging parallel experiments when there is enough traffic
  26. Not speeding up experiments with CUPED or similar techniques that leverage historical data to improve metric sensitivity.
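
Several of these mistakes are cheap to automate away. Mistake 21, for instance, boils down to a chi-square goodness-of-fit test on the observed bucket counts; below is a minimal sketch assuming a 50/50 split, hypothetical user counts, and a common 0.001 alarm threshold.

```python
# Minimal sketch of a sample-ratio-mismatch (SRM) check for an intended 50/50 split.
# User counts are hypothetical.
from scipy.stats import chisquare

observed = [50_912, 49_107]            # users actually assigned to A and B
total = sum(observed)
expected = [total * 0.5, total * 0.5]  # what the configured split should produce

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.001:  # common SRM alarm threshold
    print(f"SRM detected (p = {p_value:.2e}): distrust the results, check bucketing/tracking")
else:
    print(f"No SRM detected (p = {p_value:.3f})")
```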

Any alarm bells ringing? Even one mistake from the list can spoil experimentation results or slow down your growth rate.

Schedule a free A/B testing consultation where we can go through your experimentation process, discover its bottlenecks, and suggest ways to maximize its volume, velocity, and uplift.

Glib Hodorovskiy, co-founder, Conversionrate.store

Conversionrate.store is a performance-based funnel conversion rate optimization agency that has worked with 3 NASDAQ-listed clients (Microsoft, GAIA, CarID).

Schedule a consultation.

UX research plan and A/B/n testing framework that define an effective CRO program

A/B testing process

How do you create an effective CRO program that consistently delivers more than 5% growth in total revenue per user from a winning experiment?

If you asked us to choose the single most critical factor of a conversion rate optimization program, it would be the depth of the analytics and UX research plan, so that you can back up your hypotheses with strong, statistically valid data on the actual reasons behind the main conversion barriers.

Michal Parizek, Growth Product Manager at Smartlook, put it much more clearly: the goal is “arriving at hypotheses, scientifically.”

Every CRO agency claims there is deep data behind its hypotheses and CRO plan. And we are no exception! But what does “scientifically arriving at a hypothesis” actually look like?

Well, turn Slack and email notifications off for a couple of minutes and read our checklist of the UX research plan and A/B/n testing process needed to build an effective CRO program.

UX research plan

  1. Data tracking setup audit.
  2. Event mapping.
  3. Marketing analytics. Top-performing traffic sources and their scalability bottlenecks.
  4. Keywords and user-intent analysis.
  5. Top performing creatives and insights for UX.
  6. Top landing pages.
  7. Segment analysis.
  8. ABC analysis.
  9. Funnel analysis.
  10. Cohort analysis.
  11. LTV, usage and transaction frequency.
  12. Personalization opportunities.
  13. User journey map.
  14. User flows.
  15. Main drop-offs and bottlenecks.
  16. Behavioral patterns.
  17. Event correlation and feature usage. Regression analysis.
  18. CTR analysis.
  19. Competitor UX analysis.
  20. Competitor UVP and features analysis.
  21. Competitor A/B tests, product and website changes.
  22. User personas. Persona testing.
  23. “Jobs to be done” research. User tasks research.
  24. Audit of UVP and its perception.
  25. NPS analysis. Customer satisfaction survey questions.
  26. First time user experience (FTUX).
  27. Bounce-rate analysis.
  28. Relevancy of user intent, keywords and ad messages to landing pages.
  29. Loading speed analysis and correlation with conversions (see the sketch after this list).
  30. Screen sizes, cross-browser, cross-device and conversion correlation.
  31. Onboarding audit.
  32. AHA moment and conversion to activation analysis.
  33. Conversion barrier research.
  34. UX content audit.
  35. Unanswered user questions.
  36. Conversion barriers.
  37. User rejections, fears and concerns.
  38. Core purchase motivation and triggers research.
  39. UX heuristic analysis.
  40. Usability audit.
  41. UI audit.
  42. Form analytics.
  43. UX tests. User testing questions.
  44. Video session recordings.
  45. Online polls. Open-ended and closed-ended questions. Poll targeting and triggers.
  46. User interviews. Respondents recruiting based on data, user poll answers and visitor session recordings.
  47. Heatmap analysis. Scroll depth, correlation of scroll and funnel progression.
  48. User feedback analysis.
  49. Customer support feedback.
  50. Sales team interview and questionnaires.
  51. Audit of business model and monetization tactics.
  52. Potential pricing experiments. Price elasticity.
  53. Upsell, cross-sell and down-sell opportunities.
  54. Post-conversion behavior research.
  55. Thank You page marketing audit. Referral tactics optimization.
  56. Technical audit.
  57. QA and bug detection.
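
Most of these steps are qualitative, but some can be scripted. As one illustration of step 29 (loading speed vs. conversions), here is a minimal sketch of a logistic regression on a hypothetical per-session export; the file name and the load_time_s / converted columns are assumptions, not a prescribed data schema.

```python
# Sketch for step 29: is page load time associated with lower conversion?
# Assumes a per-session CSV export with 'load_time_s' and 'converted' (0/1) columns.
import pandas as pd
import statsmodels.api as sm

sessions = pd.read_csv("sessions.csv")  # hypothetical export from your analytics stack

X = sm.add_constant(sessions[["load_time_s"]])
model = sm.Logit(sessions["converted"], X).fit(disp=0)
print(model.summary())  # a significantly negative load_time_s coefficient = slower pages convert worse

# Bucketing by load time shows where the drop-off starts.
buckets = pd.cut(sessions["load_time_s"], bins=[0, 1, 2, 3, 5, 10])
print(sessions.groupby(buckets, observed=True)["converted"].mean())
```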

Sounds like plenty of homework, right?

Based on our experience of running A/B tests on 127 million of our clients’ users per month, hypotheses that lack a direct, data-backed link between cause and effect tend to have 2-10x lower growth and win rates than hypotheses built on the CRO program best practices listed above.

But how do we actually come up with designs and content for alternative versions?

Based on our experience, the depth of UX research has an inverse correlation with the uncertainty about how to design the alternative versions. In other words, if the 57 steps for creating a successful CRO program are done right, it becomes obvious what to test and how.

UX research process

A/B/n testing framework

OK, so let’s assume we have already done the research, prioritized a backlog of hypotheses, and finally have a CRO action plan. Obviously, the most important thing in implementing a CRO program is to A/B/n test the hypotheses in the most efficient way.

Let’s go through the process as if we were launching our very first experiment, continuing the numbering from the 57 research steps above.

  58. Define a macro conversion metric that best describes the impact on your revenue growth. We typically define it based on the frequency of usage or purchases. For transactional companies like Airbnb or e-commerce stores, where users typically make a transaction less than once every couple of months, the best metric is average revenue per user (ARPU). For subscriptions or products with long-term usage, we define a leading indicator that forecasts LTV, like the 2nd-month subscription payment. If you already have a North Star metric, just choose that.
  59. Define secondary metrics that should not drop, like bounce rate, refunds, additional operational costs, or a specific retention or usage metric. Such metrics may not be reflected in short-term revenue but may carry long-term risks.
  60. Estimate the needed sample sizes and the minimal detectable effect of the winning experiments, and decide whether there is enough traffic for A/B/n testing or whether it’s better to go with A/B tests (a rough sketch of this calculation appears after step 64).
  61. Launch an initial A/A test to check, validate, and calibrate the A/B testing tool (or in-house traffic-split solution) and the data tracking setup. You can also run a few A/A/B tests if you have sufficient traffic and want additional confidence in statistical significance (for example, to establish trust with a CRO agency).
  62. Estimate opportunities for parallel testing, where users take part in several experiments at the same time. Popular CRO blogs may tell you this is forbidden, but companies like Microsoft, Booking, Google, Netflix, and LinkedIn do it to run 10,000-50,000 experiments simultaneously.
  63. Estimate opportunities to cut the time needed to reach statistical significance, such as the CUPED method or targeting the test only at users who actually see a different UX (for example, if the change is on the 3rd screen of the landing page, only run the test on users who scrolled to the 3rd screen).
  64. Create an A/B/n testing calendar with approximate estimated times to stop experiments and develop new ones. Avoid pauses without any live experiments: if we think of growth as a number of experiments, one week without tests means 25% slower monthly growth (and even slower once the decline of each month compounds).
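
As promised in step 60, here is a minimal sketch of the sample-size estimate; the baseline conversion rate, the +10% relative MDE, and the 80% power target are made-up inputs you would replace with your own.

```python
# Sketch for step 60: required sample size per variation for a given baseline
# conversion rate and minimal detectable effect (MDE). All inputs are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_cr = 0.040        # current conversion rate
mde_relative = 0.10        # smallest uplift worth detecting: +10% relative
target_cr = baseline_cr * (1 + mde_relative)

effect_size = proportion_effectsize(target_cr, baseline_cr)
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0
)
print(f"~{n_per_variation:,.0f} users per variation")

# An A/B test needs ~2x this number of eligible users; an A/B/n test with
# k variations needs ~k times it. Compare that with your monthly traffic to
# decide between A/B and A/B/n, and how long the test will run.
```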

  65. We assume the 57-step UX research plan has been done and the hypotheses are maniacally prioritized, right?
  66. Choose a statistical formula that works best for your specific metric, type of dataset, and its distribution. Lots of teams blindly use statistical calculators after reading a few blog posts on A/B test statistics. Take time to understand the underlying statistical concepts; we recommend the book “Statistical Methods in Online A/B Testing” by Georgi Z. Georgiev as a good foundation. (A sketch of one option for a skewed revenue metric follows step 74.)
  67. Prepare an automated dashboard that monitors all the needed statistical metrics, sends notifications about significant drops or tracking and splitting issues, and recommends when to stop the test.
  68. Allocate a dedicated A/B/n test development, QA, and analytics team that works on nothing but the experiments. If you don’t feel like doing that or don’t have the resources, read step 64 again: if the whole team is not 100% focused on growth, it will inevitably be slower. If resources are still tight, or it’s hard to hire and build more growth teams, you can outsource the A/B test development to a CRO agency. It’s safe and secure, since client-side A/B testing tools like Optimizely and Google Optimize require no changes to, or access to, your actual source code.
  69. Develop the test and conduct manual QA.
  70. Set up additional data tracking if any new elements are planned for the alternative versions.
  71. Launch the experiment on a small portion of traffic that is still large enough to check the correctness of tracking and experiment targeting and to surface bugs and technical problems.
  72. Ask the QA team to watch visitor session recordings of the alternative versions to detect bugs that were not found during manual QA or via quantitative metrics. This also helps uncover use cases and flows that should be tweaked to polish the hypotheses before the final launch.
  73. Steps 61-70 should be done every time… and in fewer than 7-14 days to avoid days with no testing.
  74. It’s time to launch!
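
On step 66: a revenue-per-user metric is typically zero-inflated and heavily skewed, so a plain proportions test doesn’t fit it. One of several reasonable options is a bootstrap confidence interval for the ARPU difference; the sketch below uses randomly generated per-user revenue arrays as stand-ins for real experiment exports.

```python
# Sketch for step 66: bootstrap 95% CI for the ARPU difference (B minus A).
# The revenue arrays are randomly generated stand-ins for real per-user exports.
import numpy as np

rng = np.random.default_rng(42)
revenue_a = rng.choice([0, 0, 0, 0, 29, 49, 99], size=40_000)        # skewed, mostly zeros
revenue_b = rng.choice([0, 0, 0, 0, 29, 49, 99, 149], size=40_000)

def bootstrap_diff(a, b, n_boot=5_000):
    """Bootstrap distribution of mean(b) - mean(a)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(b, size=b.size).mean()
                    - rng.choice(a, size=a.size).mean())
    return diffs

diffs = bootstrap_diff(revenue_a, revenue_b)
low, high = np.percentile(diffs, [2.5, 97.5])
observed = revenue_b.mean() - revenue_a.mean()
print(f"ARPU difference: {observed:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
# If the interval excludes 0, the ARPU difference is unlikely to be pure noise.
```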

  75. Check the experiment metrics in the dashboard and sit tight until the needed sample size is collected, or until it’s evident that there is a significant issue or drop, or that the experiment is unlikely to ever reach significance.
  76. When it looks like it’s time to stop the test, check for outliers and define a method to clean them up, if any. Visualize the transactions on a plot to understand the nature of the dataset. This will help you choose the best way of dealing with outliers, such as filtering beyond 3 standard deviations, defining a threshold, or capping transaction values at average numbers (see the sketch after step 81).
  77. Time to stop the test!
  78. Conduct post-test analysis to understand specifically why the test won, lost, or made no impact, by looking at micro-conversions and the segments that were affected by the alternative version. This step is critical to CRO research: it feeds the next hypotheses (or a tweaked version of the current one) and confirms that the experiment had no mistakes, since you now have more data than at the initial pre-launch stage.
  79. Check personalization opportunities by looking at separate segments that show statistically valid growth.
  80. Choose a way to estimate the actual long-term impact after implementation. You can compare cohorts of A and B at 1, 2, and 3 months after stopping the experiment, or implement the change on 90% of traffic instead of 100%; define the amount of traffic and the frequency of rolling out new versions based on the sample size needed for significance. Another way is to repeat the winning experiment before implementation, or to run a B/A test some time after implementation. Repeatability of experimental results is a core feature of true scientific knowledge!
  81. It’s time to implement the winning version and repeat the process time and time again!
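
And on step 76, here is a minimal sketch of two of the outlier treatments mentioned above, filtering beyond three standard deviations and capping at a percentile threshold, applied to a hypothetical per-user revenue series; which treatment is appropriate depends on what the visualized distribution actually looks like.

```python
# Sketch for step 76: two common outlier treatments on per-user revenue,
# applied before the final readout. The data below is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
revenue = pd.Series(np.round(rng.exponential(scale=40, size=50_000), 2))
revenue.iloc[:5] = [4_999, 3_750, 2_980, 2_500, 2_200]  # a few whale orders

# Option 1: drop values beyond 3 standard deviations from the mean.
mu, sigma = revenue.mean(), revenue.std()
filtered = revenue[(revenue - mu).abs() <= 3 * sigma]

# Option 2: cap (winsorize) at the 99.5th percentile instead of dropping.
cap = revenue.quantile(0.995)
capped = revenue.clip(upper=cap)

print(f"raw mean: {revenue.mean():.2f}")
print(f"3-sigma filtered mean: {filtered.mean():.2f} ({len(revenue) - len(filtered)} users dropped)")
print(f"capped mean: {capped.mean():.2f} (cap at {cap:.2f})")
# Whichever rule you pick, apply exactly the same rule to both A and B.
```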

To sum up, these 81 CRO program steps could dramatically increase the growth and success rate of your experiments if done right. If you’re launching a CRO program for the first time or for a new product, then it’s critical to go through all of the steps.

Product managers often don’t have the resources, time, patience, or expertise to execute this to its full extent, which leads to hypotheses with a lot of unknowns: no exact data on cause and effect, no exact quantitative prioritization, and so on.

When you guess the biggest problem, then guess the reasons behind it, and then assume a solution, the probability of winning is much lower than when you have exact data at each of those steps.

So “Arriving at hypotheses, scientifically” as Michal said is the core thing that defines an effective CRO program.

Glib Hodorovskiy, co-founder, Conversionrate.store

Glib Hodorovskiy is a CRO strategist who has conducted thousands of experiments on hundreds of millions of users.

He has meditated 3,000+ hours, teaches mindfulness, and is passionate about the neuroscience of attention and decision-making.

Schedule a free CRO consultation.