A/B Tests

The hidden reason why your A/B tests aren’t as effective as they look

Anyone who knows me will probably tell you I am extremely opposed to the idea of A/B testing. Someone will come to me to discuss their A/B testing plans, or in quite a few cases their optimisation strategies, and my reaction is always the same.

This has happened often enough for it to have become a trend, so I can understand why this perception of me has started to form.

However, it is not entirely true.

A/B testing works. Optimising the execution of your marketing campaigns and initiatives is a great idea. But. (And I am sure you could smell that But coming a mile away.) There is a time and place for it.

If you are an early-stage SaaS startup getting hardly a couple of dozen visitors to your website every day or week, no amount of A/B testing or optimisation is going to help you fix things. It just doesn’t work that way. You are more likely to impede your growth than accelerate it, because you tried optimising something that wasn’t yet ready to be optimised.

And then, there is the fact that even if you did everything right, and that is a huge if in itself, the results are not going to be spectacular.

To truly optimise something, you need a few things, the most basic of which are volume and velocity. We will talk about those some day. For today, let us assume you have it all: a lot of transactions happening on your system every hour of every day. That takes care of the volume and velocity problem, and frees us up to talk about the next problem on our list. Today, we are going to talk about “purchase intent”, and how A/B testing that doesn’t factor in the transfer of purchase intent fails to yield the results you expected from it.

Does a 10% improvement in your conversion rate result in 10% improvement in revenue?

It doesn’t.

So naturally it gets confusing, and the question arises - why? Why is it that your experiment was able to improve the performance of your conversion funnel by 10%, but you did not witness the same jump in your revenue, paying customers, or any other metric that matters to you?

The answer lies in purchase intent.

Out of all the traffic coming to your website, only a small percentage of visitors have high purchase intent. Most of them have low or no purchase intent (which essentially means they are, at best, window shopping for now). And then there are some who are on the fence. They intend to take up a solution for the problem you are solving, but they either (a) haven’t yet zeroed in on which product they are leaning towards, or (b) haven’t yet set a timeframe by which they will finalise the solution.

Bottom line: only a handful of visitors are likely to become transacting customers. How small a handful? Well, if you are an early-stage startup, that number could be as low as 0.5%-1% of your overall traffic.

When you improve the performance of your conversion funnel, you are measuring that improvement over the entire traffic hitting your website or landing page. So the traffic that passes through this stage of the funnel is, once again, a mix skewed heavily towards low-intent users. And that is why you see your funnel improving while the revenue doesn’t show the same jump.
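
To see this in numbers, here is a toy sketch in Python. Every figure in it is an assumption picked for illustration (the traffic split, the step names, the pass-through rates), not data from a real funnel. The test is modelled as lifting only the low-intent pass-through at step one, since high-intent users were getting through the landing page anyway.

    # Toy funnel: landing -> signup -> activation -> purchase.
    # All numbers below are illustrative assumptions.
    visitors = 10_000
    high_intent = 100                       # ~1% of traffic, the likely buyers
    low_intent = visitors - high_intent     # everyone else

    # Hypothetical per-step pass-through rates for each intent segment.
    rates = {
        "high": [0.90, 0.80, 0.80, 0.70],   # high intent pushes through friction
        "low":  [0.50, 0.10, 0.05, 0.01],   # low intent burns off at every step
    }

    def buyers(low_step1_lift=1.0):
        """End-of-funnel buyers, with an optional lift applied to the
        low-intent segment's step-1 pass-through (our 'A/B test win')."""
        total = 0.0
        for segment, n in (("high", high_intent), ("low", low_intent)):
            step_rates = list(rates[segment])
            if segment == "low":
                step_rates[0] *= low_step1_lift
            for r in step_rates:
                n *= r
            total += n
        return total

    step1_before = high_intent * 0.90 + low_intent * 0.50        # 5,040
    step1_after = high_intent * 0.90 + low_intent * 0.50 * 1.2   # 6,030
    print(f"Step-1 conversions: {step1_before:.0f} -> {step1_after:.0f}")
    print(f"Buyers: {buyers():.1f} -> {buyers(1.2):.1f}")        # ~40.6 -> ~40.6

Step-one conversions jump by almost 20%, and the buyer count barely twitches. The extra users let through were low-intent, and the later steps burn them off.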

Why is it important to understand this difference and this behaviour?

Disappointment and confusion.

That is what you feel when your experiment seems to have worked, but it hasn’t improved the thing you primarily wanted fixed.

Disappointment often leads you to lose faith in the process, and eventually to scrap it altogether.

Confusion, on the other hand, affects you in a worse way. You believed it would work, you can see it didn’t, but you just can’t put your finger on why.

And that is why understanding this whole process becomes extremely crucial. All of it. The purchase intent, the mix of buyers categorised based on purchase intent, and the translation of purchase intent from one step to the next. It sets your expectations straight, and gives you direction.

When most businesses talk about optimisations and A/B tests, it revolves around a few areas:

  • Making the process simpler
  • Reducing the number of steps involved in the process
  • Aesthetic changes
  • Optimising text copy
  • Playing around with pricing

Most of these areas lack the ability to bring about a tectonic shift in your buyer’s purchase intent. So, while you may be able to get more of your low-intent users to pass through, you won’t see it affect your bottom line by much, if at all.

Do not look at results from A/B tests as your single source of truth

There is a reason I am considered the anti-A/B-testing guy. Just look at this image from one of the many SaaS products that help you perform A/B tests.

[Image: buyer intent 2]

At first look, the only thing portrayed as having changed is that horizontal strip, which has gone from red to green. Things like this are what make an impact, or so you will be told by many A/B testing products. Along with slight changes to your copy, your headline, the placement of your buttons, the colour of the buttons, what the buttons say, and so on. Do these things matter? Sure! Can they improve the conversion rates of your landing page? Absolutely!

But do they impact the bottom line of your business? Are they able to infuse additional buying intent in your target audience? Are they able to convince them of why they should buy your product, and why they should buy it now? That is the million-dollar question.

My biggest problem with the whole A/B testing process, though, is not how businesses generally focus on its aesthetics. It is the fact that unless you have enough volume and velocity in the transactions you process, your A/B tests will just put blinders on you. You will miss the big picture while staring at inconclusive data that you now treat as data-driven insight.
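
To put a rough number on the volume problem, here is a back-of-the-envelope sketch using the standard two-proportion, normal-approximation sample-size formula (two-sided alpha of 0.05, 80% power). The base rate and lift are assumptions for illustration.

    # Visitors needed per variant to detect a relative lift in a conversion
    # rate (normal approximation, two-sided alpha = 0.05, power = 0.80).
    from math import ceil, sqrt

    def visitors_per_variant(p_base, relative_lift, z_alpha=1.96, z_power=0.84):
        p_var = p_base * (1 + relative_lift)
        p_bar = (p_base + p_var) / 2
        numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                     + z_power * sqrt(p_base * (1 - p_base)
                                      + p_var * (1 - p_var))) ** 2
        return ceil(numerator / (p_var - p_base) ** 2)

    # Detecting a 10% relative lift on a 1% base conversion rate:
    print(visitors_per_variant(0.01, 0.10))   # ~163,000 visitors per variant

At a couple of dozen visitors a day, a test like that would have to run for years before it told you anything conclusive.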

Businesses typically run A/B tests on their landing pages, their emails, their lead forms - all of which, at best, are items related to the top of your funnel. At the absolute top. So whatever performance jump you are witnessing is most likely your lowest-intent buyers just sticking around a bit longer. Why? Because you are able to pique their curiosity slightly better than before.

So, when you are looking at the results from A/B tests, do not stop at the performance of the item you ran tests on. Look at the overall performance. Of the whole funnel.

Unless you are able to increase the percentage of users with high (or at least medium-high) purchase intent, what you will witness is fewer conversions down the funnel. So, while the top of the funnel has more users flowing through it because of the A/B test you ran, more and more users will drop off at every subsequent step. Every single step with friction will burn off users who are low on buying intent.

The result? While you would be cheering for the great results your A/B tests delivered, the business overall would have nothing to celebrate.

So what do you do to fix it?

First things first, look at the results objectively

When running any test, be very clear on the actual metric you are trying to impact. Whether that metric is revenue, engaged users, or usage of a particular product feature, map out how your test should ultimately translate to it, and measure the performance of every step involved along the way.
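
Here is a minimal sketch of what that step-by-step readout can look like. The step names and counts are hypothetical; the point is that the report ends at the business metric, not at the step you tested.

    # Hypothetical experiment readout, carried all the way to the end metric.
    steps = ["visited", "signed_up", "activated", "paid"]
    control = {"visited": 5_000, "signed_up": 1_000, "activated": 300, "paid": 25}
    variant = {"visited": 5_000, "signed_up": 1_200, "activated": 310, "paid": 26}

    for arm_name, arm in (("control", control), ("variant", variant)):
        print(arm_name)
        for prev, curr in zip(steps, steps[1:]):
            print(f"  {prev} -> {curr}: {arm[curr] / arm[prev]:.1%}")
        print(f"  end-to-end: {arm['paid'] / arm['visited']:.2%}")

In this made-up run, the tested step (signups) jumps 20%, while the end-to-end rate crawls from 0.50% to 0.52%. The second number is the one to judge the test by.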

This becomes even more important when you are working with freelancers, service providers or agencies. Vendors tend to present the metric that was impacted the most, in an effort to reinforce their expertise and the value they add to your business. Don’t let them pull the wool over your eyes. When you know which business metric is mission-critical, you can take a peek under the hood and see what’s really working, what’s not, and to what extent.

Always be optimising for increasing intent

If you follow the traditional marketing methodologies, processes and flows, you will find that there is only a finite amount of purchase intent in your ecosystem at any point in time. Ideally, you would want to change that.

You need to identify the reasons high-intent users are not moving all the way through the funnel. What is blocking their movement? How can you facilitate a smooth transition for them? I mentioned the example of a new hosting service I became a customer of, and have since churned from. I had every intention of being a long-term customer of theirs, but there was a lot of friction at each step, and I had to figure things out on my own, with absolutely zero documentation from the company. That blocked my journey.

Movement of your high intent customers is not that different. They have every intention of moving all the way through your sales funnel, so if they aren’t doing so, there is something in their way. You need to figure out what it is, and get rid of that obstacle for them.

This helps you prevent purchase intent from leaking out as your users move through the funnel. But, can you increase the amount of intent available to you? This is where empathetic marketing comes into play.

The lower a user sits on the purchase-intent scale, the harder it is to get them to transact. So start where the lift is most achievable: with a little bit of work, you should be able to convert users in the medium and medium-high purchase-intent brackets. And how do you do that? By helping them relate to pain points from their day-to-day lives and workflows, and by helping them visualise your product solving those pain points for them.

Keep your low-intent users engaged in the right areas to increase their intent

You are trying to sell your product to someone who is not looking to buy in the first place. So, naturally, this will get tricky. You will need to be quite creative about it, and at the same time act fast and give these users a sense of achievement and gratification early on.

Getting them to engage with a live, working demo, or having them try out the product without committing to it, are some quick ways of activating a user, and they build up purchase intent. Look at how CrazyEgg does it. It presents the promise of showing you a heatmap for your website instantly. We all love low-friction processes, so by doing this, CrazyEgg is able to get even low-intent users to engage with its product a little bit.

[Image: buyer intent 3]

Wrappin’ up

Purchase intent is a tricky business. Measuring it, treating users with different levels of it differently, generating and building it. All of it is tricky. But it is also the purest and surest reflection of how well your product marketing is going, and of the impact it can potentially have on your bottom line.

To do it the right way, you have to work both analytically and psychologically. You need to follow a data-driven approach while acting on your understanding of the human psyche, specifically that of your audience. And as we saw early on, you cannot and must not assume that an improvement in one place will cascade through the rest of the funnel accordingly.

How do you decide when to start running A/B tests for your SaaS business? What do you optimise for? What are the parameters for your experiments? How have the results been? I would love to know more about it; drop me a line.

That’s it for today, see you tomorrow.

Cheers,

Abhishek
