
A guide to agile and effective ad testing research

Published 01 Feb 2022 · 11 minute read

Market Research
Concept Testing

Defining ad testing

Advertising is everywhere, in case you haven’t noticed. Everywhere. The average person is exposed to up to 10,000 advertisements every day.

As a consequence, people have gotten good at ignoring ads that don’t resonate with them. It’s just the way our brains work—we can only process so much information at a time.

Ad testing research is a process that attempts to increase an ad’s resonance. You want your ad to be noticed and acted upon, and that won’t happen if everyone ignores it. So before you spend a full budget putting an ad out into the world, you test it with a small sample of your target market, and use their feedback to improve it.

Agile ad testing solutions are an accelerated version of this, typically mediated by digital technology. An ad or ad set going through agile testing can receive many rounds of revision in a short time, using data from real-life interactions. Some of the improvements might even be automated. More detail on this below!

The tangible benefits of testing ads

1. Bottom-line impact.

Tested ads are almost always more effective than ones that go out into the world raw. This means more people notice them, more people click, and more people take action, which is usually tied to a purchase. Good ads drive sales.

Often, exact ROI can be calculated. If your digital ads have tracking that is tied to an eCommerce site, you can make accurate statements around ROI. Such as: “Spending $15,000 in ad testing resulted in a 1.75% lift in conversion, which translates to almost $45,000 in revenue—a 3x return.”
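The arithmetic behind a claim like that is easy to reproduce. Here is a minimal sketch in Python; the traffic and order-value figures are hypothetical, chosen only to roughly match the numbers above.

    # Hypothetical figures, chosen to roughly match the example above.
    test_cost = 15_000        # spend on ad testing ($)
    visitors = 34_300         # visitors exposed to the improved ad
    conversion_lift = 0.0175  # 1.75 percentage-point lift in conversion
    avg_order_value = 75      # average revenue per conversion ($)

    extra_revenue = visitors * conversion_lift * avg_order_value
    roi_multiple = extra_revenue / test_cost

    print(f"Extra revenue: ${extra_revenue:,.0f}")  # Extra revenue: $45,019
    print(f"Return: {roi_multiple:.1f}x")           # Return: 3.0x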

2. Scalability.

You can adjust the scope and cost of testing and still get results, whether your target audience is 10,000 people locally or 10,000,000 people nationally.

3. Data-driven decisions.

Proper ad testing uses disciplined methodologies and a mix of qualitative and quantitative metrics. You get numbers and data as results, which are much more powerful than subjective arguments. Too many creative decisions are made by executives going with their gut. Data can curtail this often counterproductive practice.

4. Audience insights.

While trying to fine-tune an ad through ad testing, you will often pick up incidental information about your target audience as well. You may discover something highly relevant by accident - such as a new subgroup or a previously unknown consumer preference.

Getting started with ad testing: Market Research basics

Ad testing is a form of market research, which is a huge topic in and of itself. But the basic elements are pretty straightforward. They are summarized here, and we will dive deeper into some of them later in this article.

1. Understand the goal

The bottom-line impact is almost always the ultimate goal, but ads can do a lot of things besides lead to a sale. Based on previous analysis, you may be looking to simply improve click-through rate or even basic brand awareness. Or the goal may be more exploratory - you may be seeking to understand your audience better.

2. Determine your metrics

What are you testing for? Is it the number of clicks? A subjective “likeability” of the ad’s words, or colours?

The more metrics you pile into a test, the more complex the interpretation and methodology. It’s often advisable to limit the focus to one key metric.

3. Identify the target audience

The more specific your target audience, the more relevant the test. You’re going to be taking a representative sample of this audience, and it’s easier to do this if there’s a specific definition.

4. Determine your methodology

This is the heart of the ad testing exercise. How are you going to run the test? There are many choices here, from agile A/B testing to intense neuromarketing techniques. We will explore several examples below.

5. Recruit or prepare a sample

The audience and methodology will guide your sample selection. For in-person or highly qualitative testing, a group of people needs to be found, contacted, recruited, enrolled and guided through the test. For more information on how to recruit for your qualitative research, read our Definitive guide to recruitment for online qual.

For agile or digitally-driven methodologies, the sampling may be handled by software, and you’ll need to provide it with the proper information.

6. Run the test

Your ad or ads will be exposed to the sample, and feedback will be collected.

7. Interpret the results

Depending on your emphasis on qual or quant, this may be the job of a creative resource, an analytics resource, or a blend of both.

8. Update the ad 

It’s rare that the original ad will need no changes. (If this was common, ad testing wouldn’t be nearly as valuable.) 

9. Launch, or test again

Either you’re ready to go, or you go back to Step 6 (or sometimes Step 5) and run it all again with the updated ad(s). These rounds of revision are much faster and less expensive with agile ad testing methodologies.

Many of the steps above can be templated, automated, and/or built into a process. If you’re brand new to ad testing, there is certainly some upfront ‘heavy lifting.’ But once things are set up, these nine steps can start to run smoothly and efficiently. This is especially true for agile ad testing, which leans heavily on automating many of the steps.

With a proper ad testing system in place, the most effort will be in:

  • Creating or updating ads 
  • Recruiting representative samples (if you focus on qual)
  • Interpreting results

Getting started with ad testing: Choosing your metrics

There is no “best metric” because different metrics serve to answer different questions. And the questions you’re asking depend on either your goal or a problem you’re trying to solve.

That said, an ad’s ability to convert has special status, as it is a metric closely associated with the ever-relevant goal of revenue. The catch: conversion is complex, and complexity in testing is expensive. More on this in a moment - for now, we’ll review some common questions and their associated metrics.

1. Ad testing metrics for awareness

Awareness is a ‘top of the funnel’ measure that assesses whether a person knows your brand, organisation, offer, etc. Awareness is usually best tested when you’re not actively showing someone the ad itself (otherwise it sort of misses the point). For this reason, awareness testing often uses a two-part methodology.

Market researchers will use metrics like recall or recognition to test awareness. In a two-part testing methodology, research participants will be shown your ad, likely alongside other irrelevant ads or pieces of information. At a later date, the participants will be asked if they remember seeing the ad or remember what it was about (recall) or if they can associate the brand with the ad, or elements of the ad (recognition).

Another way to go about this is to never show the ad to the participants, and skip right to the questions.

Because ads need time in-market to generate (or fail to generate) awareness, it is a challenge to run these kinds of studies quickly. Online surveys or communities can give you a speed boost though!

Awareness can also be measured indirectly through impressions. An ad impression is counted each time the ad is “served,” where served generally means being out there in the market for people to see. An ad serve or impression does not guarantee that anyone saw or noticed the ad. A billboard may generate ten thousand impressions as cars go by at rush hour, and a website could serve a million ad impressions a minute.

You can reasonably predict that some small percentage of people do notice, so as impressions increase, so should the awareness.
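To make that intuition concrete, here is a small sketch. It assumes, purely for illustration, that every impression has a fixed, independent chance of being noticed; the probability that a person notices the ad at least once then climbs with the number of impressions they receive.

    # Illustrative assumption: each impression has a 2% chance of being
    # noticed, independently of every other impression.
    p_notice = 0.02

    def prob_noticed_at_least_once(impressions_per_person: int) -> float:
        """P(noticed at least once) = 1 - P(never noticed)."""
        return 1 - (1 - p_notice) ** impressions_per_person

    for k in (1, 5, 10, 50):
        print(k, round(prob_noticed_at_least_once(k), 3))
    # 1 0.02, 5 0.096, 10 0.183, 50 0.636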

2. Ad testing metrics for resonance

Resonance is a broad term that implies that the ad stood out, or was compelling in some way. Because resonance is subjective, you may need to break the metrics down into several qualitative factors and ask your sample to rate them on an agree-disagree scale (e.g. a Likert scale).

  • Credibility or trustworthiness. The ad seemed legitimate.
  • Uniqueness or novelty. The ad is different from what they’re used to seeing.
  • Relevancy. The ad felt like it spoke to them or offered something they want/need.

On top of the ratings, you can also ask participants to provide a short explanation for some or all of their answers. This will require more investment into analysis and interpretation, but you often glean excellent insights from your customers’ free-form answers.
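As a sketch of how such ratings might be summarised, assume each participant rated each factor on a 1-5 agree-disagree scale. The mean score and the ‘top-2-box’ share (the proportion rating 4 or 5) are two common summaries; all the ratings below are hypothetical.

    from statistics import mean

    # Hypothetical 1-5 Likert ratings from eight participants.
    ratings = {
        "credibility": [4, 5, 3, 4, 4, 2, 5, 4],
        "uniqueness":  [2, 3, 3, 2, 4, 3, 2, 3],
        "relevancy":   [5, 4, 4, 5, 3, 4, 5, 4],
    }

    for factor, scores in ratings.items():
        top2 = sum(s >= 4 for s in scores) / len(scores)  # share rating 4 or 5
        print(f"{factor:12s} mean={mean(scores):.2f} top-2-box={top2:.0%}")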

There are also quantitative metrics you can use to test an ad’s ‘stopping power.’ Digital ads are the easiest to measure this way. Depending on the platform, you may be able to get data that tells you whether a person stopped scrolling to look at your ad. Online video platforms will usually show you how long people watched, and whether they skipped your ad.

These kinds of quant metrics are tougher to gather for non-digital ads, but not impossible. For example, print ads could be placed in a magazine, and you could measure which ones participants look at the most. New technology like eye-tracking software can also come into play in this realm.


3. Ad testing metrics for purchase intent

As we get closer to the bottom of the funnel, the metrics will get more complex. Purchase intent can be measured qualitatively (e.g. ask participants if the ad makes them want to purchase the thing). But the best measure of this kind of intent is in real-life action.

A click on an ad may signal interest. Therefore, clickthrough rate (CTR) is a very popular and important conversion metric. If someone clicks, and then puts an item in an online cart, then that’s an even stronger signal. Finally, if they actually buy it - case closed!

The problem is: most people don’t have a straightforward customer journey from seeing an ad to making a purchase. They might see several ads first. They might do some searches, look at reviews, and then text their mum. They might forget about it for a while, then remember once a sale for a related item appears.

An ad conversion rate is another key metric, but there are a myriad of ways to define a conversion rate:

  • Divide purchases by impressions
  • Divide total purchases by unique clicks
  • Divide clicks by impressions (this is CTR)
  • Divide clicks by views

Each ratio takes a slightly different view of the total funnel. But even the most comprehensive ratios can’t account for that text to mum for her recommendation, or the time spent thinking about a purchase.
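Computed from raw funnel counts, those ratios look like this; all the counts below are hypothetical.

    # Hypothetical funnel counts for one ad.
    impressions = 120_000    # times the ad was served
    views = 80_000           # times the ad was actually viewed
    unique_clicks = 2_400    # people who clicked at least once
    purchases = 180          # purchases attributed to the ad

    ctr = unique_clicks / impressions                  # 2.00%
    view_ctr = unique_clicks / views                   # 3.00%
    conv_per_impression = purchases / impressions      # 0.15%
    conv_per_click = purchases / unique_clicks         # 7.50%

    print(f"CTR: {ctr:.2%}, conversions per click: {conv_per_click:.2%}")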

Because this can get so complex so fast, it is often advisable to focus on a few key conversion rates. It helps to understand what role your ad plays in the grand scheme of generating revenue. Often, the ad’s job is not to “make the sale” but simply to get a person onto a landing page. In this scenario, CTR is a fantastic metric to focus on.

In cases where complexity of analysis can’t be avoided, the practice of attribution modeling comes into play. This topic won’t be covered here, but in a nutshell, it is when you use maths and probability to make good guesses at how much credit to give the ad (or landing page, or eCommerce site, or SEO) for making a sale.

Ad testing metrics, summarised

There are dozens of other metrics that can be considered for ad testing. But it is good practice to narrow it down to just a few key metrics. These metrics should answer important questions and align with your methodology. This keeps ad testing from getting complicated and expensive.

Getting started with ad testing: Common methodologies

Metrics are the “what” of ad testing.

Methodology is the “how.”

Like metrics, there is no single best methodology. The choice will depend on what you’re trying to do and what resources you have at your disposal. Rather than go through individual methodological setups, let’s look at components of ad testing methodology to compare pros and cons.

1. Qualitative ad testing methodologies

Qualitative research is all about the subjective: you’re not using direct, numerical measurements but rather collecting opinions, feelings and thoughts. These things can later be codified for numerical analysis, but the underlying data is not objective.

Qual research tends to be slower and more expensive, but it makes up for that by offering significant opportunities for gathering strong insights (especially creative insight). Qual testing might include any of the following:

  • Focus groups
  • Feedback surveys
  • Research communities

The costs crop up in the time taken to recruit participants, run the tests, and code the responses. However, an online qualitative research platform like Further’s 'Together' platform can mitigate a lot of these costs by leveraging digital tech. You can learn more about that here.

2. Quantitative ad testing methodologies

Numbers and data are the main inputs in quantitative research. Something is measured with precision, with no subjective opinions allowed to muddy the waters. It might be eye-tracking, clicks, time spent looking at an ad, etc.

Because there’s less human input involved, and because digital ads can deliver huge amounts of data in real time, quant ad testing can come with some significant cost efficiencies. The tradeoff is in the richness and quality of insight: without the human element, it can be easy to miss important details.

Quant testing might include any of the following:

  • Setting up experiments in an online ad dashboard
  • Comparing metrics between vendors
  • Controlled experiments that measure behaviour

3. Agile ad testing methodologies

The idea of agile ad testing applies chiefly to the digital realm. As the name implies, the key is in speed. The sampling, the testing, the interpretation and the ad changes can all be accelerated with the right technology and approach.

  • Speeding up sampling. Large organizations like Facebook (now called Meta) have access to a huge amount of user data. In under ten minutes, you can define a very specific sample (demographics and psychographics) and be set up to run your testing.
  • Speeding up testing. Because so many people are online, it’s simple to record a huge amount of ad interactions in a short amount of time.
  • Speeding up interpretation. This is where quant shines. The inputs are all numerical. So long as the calculations are built correctly, the power of modern computers makes “running the numbers” a trivial exercise. Pair this with a good dashboard or give an analyst access to a spreadsheet, and insights can be surfaced fast.
  • Speeding up changes. This might be the trickiest part. Making creative changes to ads is where humans completely outperform computers… for the time being. But there’s a workaround: give the computers a large volume of options to choose from in advance, and then it’s easy for them to try out changes.

If you can speed up even some of these areas, you gain cost efficiencies. And if you can get all four sped up, then the loop is complete. A computer can work through the iterative steps (sample, test, interpret, tweak, repeat) over and over. This is true agile ad testing.

An agile ad testing example

Your advertising campaign target market is 100,000 people: women over the age of 35 who live on the coast and who own a vehicle that’s less than 5 years old.

You create a master ad concept, as well as 25 variations that the creative team thinks might be important: different images, calls to action, including a button or not, etc.

Your goal is to find the best CTR—these ads’ job is to send the target audience to a campaign page.

The variants and sample data are plugged into the computer (likely some kind of software or platform as a service).

The platform starts the agile ad testing experiment.

It starts by grabbing only 2,500 of the 100,000 people, with the following rules (aka algorithm):

  • It randomly serves one of the 25 ads to this sample, until each person has had at least 10 impressions OR an ad’s average CTR falls out of a set range.
  • If an ad has a high CTR, it’s promoted to the next round.
  • If an ad has a low CTR, it’s marked for elimination.
  • The platform runs this kind of ‘round robin’ for three cycles, moving through 7,500 people.
  • Before it does a final round, you’re given the results so you can make any manual changes.
  • A final run is done with 2,500 people.

After this process, you have a ranked list of ads that did well, ads that did okay, and ads that the platform stopped using because their CTR was too low. You also have data on which segments of your sample gravitated towards which ads.
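To make the mechanics concrete, here is a heavily simplified simulation of the elimination loop described above. The ‘true’ CTRs are invented so the script can fake clicks; a real platform would be measuring live behaviour, and its promotion rules would be far more sophisticated.

    import random

    # Invented 'true' CTRs for the 25 variants; in reality these are unknown.
    TRUE_CTRS = [random.uniform(0.005, 0.05) for _ in range(25)]

    def run_cycle(ad_ids, people, impressions_per_person=10):
        """Serve random ads to a batch of people; return observed CTR per ad."""
        shown = {ad: 0 for ad in ad_ids}
        clicks = {ad: 0 for ad in ad_ids}
        for _ in range(people * impressions_per_person):
            ad = random.choice(ad_ids)
            shown[ad] += 1
            if random.random() < TRUE_CTRS[ad]:
                clicks[ad] += 1
        return {ad: clicks[ad] / shown[ad] for ad in ad_ids if shown[ad]}

    ads = list(range(25))
    for _ in range(3):                                  # three elimination cycles
        ctrs = run_cycle(ads, people=2_500)
        cutoff = sorted(ctrs.values())[len(ctrs) // 2]  # median CTR this cycle
        ads = [ad for ad in ads if ctrs[ad] >= cutoff]  # drop the weaker half

    final = run_cycle(ads, people=2_500)                # confirmation round
    ranked = sorted(final, key=final.get, reverse=True)
    print("Surviving ads, best observed CTR first:", ranked)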

The key to all this: you’ve only gone through 10% of the total target. The overall winner(s) can be served to the remaining 90%... or maybe you’d like to refine even further.

These were all live tests: while this valuable experiment ran, real customers were going to real landing pages, and you generated some sales along the way.

This example demonstrates the power of agile testing, but it also makes the case for ad testing research in general. Even if this doesn’t happen as quickly, and even if it doesn’t happen in a live setting—the ability to refine an ad so that the best version of it can go out there to the masses is almost always worth the investment.

Other ad testing methodology considerations: Multivariate vs univariate

Regardless of whether you use quant or qual, agile or not—the number of ads and how you present them has an impact on complexity, costs, and depth of potential insight.

Multivariate means “multiple variables.” Univariate means “one variable.” Guess which one is easier to deal with?

A univariate test would show a single ad to folks. Data or responses would only come from that ad. Nice and easy.

In practice, most ad testing is multivariate: you test multiple ads. While it is more complex to manage and analyze, it makes sense to get as much as you can out of a sample, especially if gathering that sample was costly.
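When you do compare several ads on a single quantitative metric, it’s worth checking that the differences aren’t just noise. One common approach, sketched below with hypothetical counts (and assuming scipy is available), is a chi-square test across the variants.

    from scipy.stats import chi2_contingency

    # Hypothetical results for three ad variants served to similar audiences.
    clicks = [120, 95, 160]
    impressions = [5_000, 5_000, 5_000]
    no_clicks = [i - c for i, c in zip(impressions, clicks)]

    chi2, p_value, dof, _ = chi2_contingency([clicks, no_clicks])
    print(f"p-value: {p_value:.4f}")
    # A small p-value suggests the CTR differences are unlikely to be chance.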

A multivariate ad test might involve showing or serving:

  • Different ad concepts, one at a time
  • Different ads, side-by-side
  • Different ads to different subgroups of the sample
  • Very similar ad concepts, in all the above configurations
  • Ads in sequential order, or random order
  • A mix of ads and other content, such as product ideas related to the ads

The reasons for these configurations vary - again, it depends on your goals and the metrics you’re after. The questions you pair with the ads in qual testing can also vary from step to step.

Clearly, it’s easy to get into a mess of complexity when building out a multivariate ad test. This is why folks go to university to study research design! :)

Getting started with ad testing: putting it all together

When it comes to ad testing, there is a lot to consider and take in. But it doesn’t have to be complicated or a massive undertaking. Remember to:

  • Ensure your goal is clear
  • Pay attention to how you get your sample
  • Understand tradeoffs between quant and qual methodologies
  • Focus on a few metrics vs. trying to capture everything
  • Leverage technology, and consider an agile ad testing approach

With these core ideas, you’re ready to start planning your first ad tests.

If you need help designing a qual research study in particular, Further offers professional market research services and project support tailored to your needs, delivered by seasoned experts.

We also run online qualitative projects and research communities on our Together™ platform, a one-stop shop for creating and running comprehensive online qualitative research to capture deep contextual insights from your target audience. You can learn more about our ad and creative testing approach here.

 
