The use of qualitative and quantitative research

by David Mannheim

posted on: December 1st 2015

As conversion optimisers we use two facets of data: qualitative and quantitative.

Qualitative research answers the ‘why’ questions. It comes in many different formats, from heatmaps to user testing, and we can use up to 21 different qualitative research techniques in our discovery period. What are users thinking? Why are they doing what they are doing?

This is in stark contrast to quantitative research, which answers the ‘what’ questions. Tools like Google Analytics give you the ability to understand what users are doing. For example, we might find that users have a high exit rate on product pages. Fantastic, but what does that mean? Why do they have a high exit rate on product pages?

We hear throughout arbitrary blog posts that we should be more ‘data’ driven as an industry, which is true. But this term breeds a belief that if you look into your Google Analytics account, pluck some data out of the UI and make a decision on that, you are being ‘data driven’. No. One point I want to make within this blog post is that of validation: how can you validate your data? If we know that users have a high exit rate on product pages, how can we prove this? Is it because of PPC traffic? Is it just on mobile? Is it because there is a load of spam coming from a specific range of IPs? How can we prove that, indeed, our initial assumption is true? Because that’s all it is until it’s proven: an assumption.
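To make that concrete, here’s a minimal sketch of how you might chase down those segments, assuming a hypothetical session-level export rather than a real Google Analytics schema (the file and column names are illustrative):

```python
import pandas as pd

# A rough sketch, not a real Google Analytics schema: assume a
# hypothetical session-level export with one row per product-page
# session and illustrative columns: device, channel, ip, exited.
sessions = pd.read_csv("product_page_sessions.csv")

overall_exit_rate = sessions["exited"].mean()
print(f"Overall exit rate: {overall_exit_rate:.1%}")

# Break the headline number down by the segments we suspect:
# is the exit rate driven by PPC traffic, or by mobile?
for segment in ["channel", "device"]:
    print(sessions.groupby(segment)["exited"].agg(["mean", "size"]))

# A crude spam check: flag IP ranges (first three octets) that send
# meaningful traffic yet exit far more often than the overall average.
sessions["ip_range"] = sessions["ip"].str.rsplit(".", n=1).str[0]
suspects = (sessions.groupby("ip_range")["exited"]
            .agg(["mean", "size"])
            .query("size > 50 and mean > @overall_exit_rate * 1.5"))
print(suspects)
```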

In the example above we spoke about looking at Google Analytics and validating this with qualitative data. Here comes the crux of this blog post: which is better? Knowing we have to validate, do we first utilise qualitative data and validate via quantitative data, or is it best the other way around?

Tip: the answer is both, as you’ll get different insights from using both methods than from just one.

Qualitative-first approach

Using qualitative data first is an approach preferred by Peep Laja, if his Conversion XL 2015 presentation is to be believed. It’s a great approach. You’re understanding what users are doing and highlighting issues which you later prove or disprove with data. If heat maps indicate that users don’t scroll to the call to action on a subscription page, we can validate that by seeing how many users click it. If session recordings show that users revise their search term more than once, we can hop over to Google Analytics to validate this too.
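As a sketch of that validation step, against hypothetical pageview and internal-search exports (file and column names are illustrative, not a real analytics schema):

```python
import pandas as pd

# One row per subscription-page view, with an illustrative
# cta_clicked flag: if heat maps say users never scroll to the call
# to action, the click-through rate should corroborate it in numbers.
views = pd.read_csv("subscription_page_views.csv")
print(f"CTA click-through rate: {views['cta_clicked'].mean():.1%} "
      f"of {len(views)} views")

# And one row per internal search: sessions with more than two
# searches have revised their term more than once.
searches = pd.read_csv("internal_searches.csv")
revise_rate = searches.groupby("session_id").size().gt(2).mean()
print(f"Searching sessions revising more than once: {revise_rate:.1%}")
```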

Perhaps more importantly, we can see not just whether it is indeed an issue, but whether it’s an important one. How many users does it affect? And therefore, how much of a priority is fixing or improving this issue (X) over another (Y)? If we notice a user on IE8 comes unstuck because of a layout issue on the basket page, we can then ask how many users are using IE8 and what their associated ecommerce conversion rate is, specifically for those users who visit the basket.
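A minimal sketch of that prioritisation question might look like this, again against a hypothetical session export (the browser, visited_basket and transacted columns are illustrative):

```python
import pandas as pd

# Hypothetical session-level export: one row per session with the
# user's browser, whether they reached the basket, and whether they
# went on to transact.
sessions = pd.read_csv("sessions.csv")

ie8 = sessions[sessions["browser"] == "IE8"]
print(f"IE8 share of traffic: {len(ie8) / len(sessions):.2%}")

# Conversion rate specifically of users who reached the basket,
# IE8 versus everyone else -- is the layout bug actually costing us?
basket = sessions[sessions["visited_basket"]]
by_browser = basket.assign(is_ie8=basket["browser"].eq("IE8"))
print(by_browser.groupby("is_ie8")["transacted"].agg(["mean", "size"]))
```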

I’ll let you in on a little secret I’ve found, too. A novel way to spot an amateur conversion optimiser, dare I even say someone who doesn’t know what they are doing, is when they start their process with a very strong focus on user testing. I’ve personally interviewed tens of CRO folk, and when asked the question “how do you start your conversion optimisation process with a client” or “what is your process for hypothesis generation”, I can’t believe the number of times I hear “user testing”. It’s a great method, don’t get me wrong. We believe in it so much here at User Conversion that we have our own user testing tool, UserTest.io (BETA). But when it’s the sole focus and impetus for hypothesis generation, it’s quite amateurish. It’s small-sample, low-level research without validation.

Quantitative-first approach

Qualitative first seems to be the way to go, right? Well, not necessarily. It certainly helps provide focus to the optimiser, as a quantitative-first approach can often be quite blind in its structure. If we just look at data we could be there for days, with such a web of data to review; ‘analysis paralysis’, I believe it’s called. However, applying a structure to it can ease the process and actually create some interesting quantitative-first insights. A simple technique is looking down the left-hand panel of Google Analytics. Start with Audience, then Acquisition, then Behaviour, and end with Conversions. Look for disproportionate numbers, or elements that experience tells you stand out slightly more than others. Just asking the question ‘why’ can get you quite far and will take you further down the rabbit hole, funnelling down to the core issue.
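As an illustration, here’s a sketch of that top-level scan, flagging pages that stand out against a traffic-weighted site average, assuming a hypothetical page-level export (names are illustrative):

```python
import pandas as pd

# Hypothetical page-level export with illustrative columns:
# page, pageviews, bounce_rate.
pages = pd.read_csv("page_report.csv")

# Weight the site average by pageviews so tiny pages don't skew it.
site_avg = ((pages["bounce_rate"] * pages["pageviews"]).sum()
            / pages["pageviews"].sum())

# Surface pages that both matter (enough traffic) and stand out
# (bounce rate well above average) -- each one is a 'why?' to chase.
standouts = pages[(pages["pageviews"] > 1000) &
                  (pages["bounce_rate"] > site_avg * 1.25)]
print(standouts.sort_values("pageviews", ascending=False))
```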

An example: you find a category page with a higher-than-average bounce rate. You analyse this, and your first thought might be a mixture of acquisition and device. Woah. How come the mobile version of this page has a 25% increase in bounces? Ah, it’s because it’s acting as a landing page for PPC traffic and the call to action to view the products is pushed further down the page. I wonder what would happen if we moved this call to action further up the page. Let me check the sequencing of this page: where users go when they land on it (of those that don’t bounce) and how long they are spending on the page… and so on and so forth.
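That drill-down might look something like this sketch, assuming a hypothetical session-level export for the category page; in practice a segment in your analytics tool does the same job:

```python
import pandas as pd

# Hypothetical session-level export for the category page, with
# illustrative device, channel and bounced columns.
sessions = pd.read_csv("category_page_sessions.csv")

# Cross-tab bounce rate by device and acquisition channel: this is
# where a 'mobile landing page for PPC' pattern would show up.
pivot = sessions.pivot_table(index="device", columns="channel",
                             values="bounced", aggfunc="mean")
print(pivot.round(3))
```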

Tactically it’s unstructured, but at a top level it is structured. In the example above, would you have noticed that users were bouncing on this page? No. You might have noticed, heuristically, that the call to action was further down the page than anticipated, but you’d be assuming at this stage. A data-first approach can breed insight if you know how.

Data can also surface problems we didn’t even know to look for. In the example of an IE8 user coming unstuck at the basket page above, how would we know this without seeing a user on an IE8 machine? It’s a very tactical approach, but these are instances where something is broken that we need to fix. We don’t need to test whether or not the site would work better with this element fixed; we just need to fix it and measure it.

So which approach is better to take first when optimising?

At User Conversion, we take both approaches, separately, within the team. Two team members will handle the qualitative breakdown of the site, while one team member will look at the site from a quantitative-first approach. We’ll then break out and spend at least a day discussing and validating each other’s findings.

In summary, yes, moving from qualitative research to quantitative research is easier and, arguably, preferred. However, because of its inherent flaws, it shouldn’t be the only process of discovery. It is recommended, from experience, that both approaches are undertaken to obtain different viewpoints.

David Mannheim

David is an experienced conversion optimiser and has worked across a series of core optimisation disciplines including web analytics, user experience, and A/B & MVT testing.
