Read this before your next marketing experiment

A/B testing costs more than you think.

At first glance, marketing experiments seem free. They’re a data-driven way to validate your assumptions and understand what customers want, before going all in.

But when we talk about the benefits of marketing experiments, we rarely talk about the costs. When you factor in the complexity and overhead involved, it's very possible an experiment isn't worth your time.

This is a wake-up call for folks who think testing everything is a good idea.

The next time someone asks you to “do a quick test,” think about whether the experiment will generate insights worth the investment.

Run experiments that will generate actionable insights

Let's say you’re testing subject lines. You realize one subject line performed 13% better.

That’s good, right?

Except you can't really describe the difference between the two subject lines...

And you have no insights about what made one outperform the other.

So your takeaway is, "Hmm, that was interesting."

Yikes. If there’s no actionable insight, that was a waste of effort.
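
There's also a more basic problem hiding in that 13%: you don't know whether the difference is real or just noise. Here's a minimal sketch of the standard two-proportion check, using entirely made-up send and open counts (your numbers and tooling will differ):

    # Did subject line B really beat subject line A, or is the gap within
    # the noise? Two-proportion z-test, standard library only.
    import math

    def two_proportion_p_value(opens_a, sends_a, opens_b, sends_b):
        p_a = opens_a / sends_a
        p_b = opens_b / sends_b
        pooled = (opens_a + opens_b) / (sends_a + sends_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
        z = (p_b - p_a) / se
        # two-sided p-value from the normal approximation
        return math.erfc(abs(z) / math.sqrt(2))

    # Made-up numbers: 10,000 sends per arm, 20% vs 22.6% open rates
    # (a ~13% relative lift)
    p = two_proportion_p_value(opens_a=2_000, sends_a=10_000,
                               opens_b=2_260, sends_b=10_000)
    print(f"p-value: {p:.6f}")  # a small value suggests the lift isn't just noise

Even when the lift passes that check, you're still stuck at "interesting" unless you can explain why one subject line won.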

You can avoid this wasted effort. Before you start, write down what you think you'll gain from the experiment. Then decide whether it's worth running at all.

Ask yourself:

"What am I going to do with this information?"

Avoid vague, nice-to-know, FYI-style answers. An experiment might help you “gain more insight about your audience,” but that’s a low bar. You don’t need more FYIs. You need actionable insights that will inform what you do next.

Aim for the right level of fidelity

There are two general buckets here: hard science, or just heading in the right general direction.

If you’re a product manager at Amazon or Facebook, you’re likely more in the hard science bucket. It makes sense for you to test everything to the last detail. 

  • You have large sample sizes

  • You have defined levers

  • You have direct access to primary data and user behavior

  • A three percentage point difference could mean incremental gains of millions of dollars (see the quick arithmetic after this list)

  • Even a small optimization that isn't repeatable is worth it
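
As a quick illustration of that three-percentage-point bullet, here's the back-of-the-envelope arithmetic with invented inputs (swap in your own traffic and order value; these are not real Amazon or Facebook figures):

    # Illustrative, made-up inputs only
    monthly_visitors = 5_000_000
    baseline_conversion = 0.10        # 10% of visitors convert today
    improved_conversion = 0.13        # a three-percentage-point lift
    average_order_value = 60          # dollars per order

    extra_orders_per_month = monthly_visitors * (improved_conversion - baseline_conversion)
    extra_revenue_per_year = extra_orders_per_month * average_order_value * 12
    print(f"${extra_revenue_per_year:,.0f} incremental revenue per year")  # ~$108,000,000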

But what if you’re launching a new product or creating a new category?

When you’re testing out a new idea, your experiments are not a hard science. You’re aiming for insights that point you in the right general direction.

Why A/B testing looks different when you’re building a new product

When you’re building a new product, you get to define all the variables. This is both a blessing and a curse.

Here are your variables:

  • Product: What is this thing?

  • Positioning/Messaging: How do we want to talk about it?

  • Audience: Who are we marketing to?

  • Channels: Where are we showing up?

  • Timing: When are we showing up, and what else is going on in the world at this time?

Changing any one of these factors while holding the rest constant can change your results.

There are simply too many variables to control for; you can't test every combination and permutation.

You can’t test your way into certainty about what people want. If you try, you risk spreading yourself too thin. You risk being too attached to noise in the data, when you should be honing your own point of view. You risk missing the activation energy required to make a project work.

Instead, when you’re working with a new product or idea, think of testing as a way to get closer to what’s working. At this point, you just want to learn where there might be product-market fit.

You want to go in the right general direction. It’s like a game of hot-and-cold where you want to hear “warmer!”

If you’re testing your positioning and story:

Are you making people’s eyes light up? Or are they falling asleep?

You’re turning a little to the right, a little to the left, and heading toward a generally positive reaction from your customers.

How to set up a marketing experiment

There are many types of experiments, but let’s say you want to test an offer for your audience.

Here are the high level steps:

1. Define what you want to test (in this case, form your point of view on what offer you think will convert well)

2. Clean the data: remove duplicate entries, make sure things are tagged correctly, spot check

3. Set up your audience: 10,000 people get email A, 10,000 people get email B, and the rest get email C (see the sketch after this list)

4. Launch the actual experiment

5. Keep track of results (throughout the day, once a day, or once a week) so you can jump in if something isn't working properly

6. Troubleshoot links that inevitably break or weren't set up correctly in the first place

7. Analyze the test results and decide how they will inform your future actions
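
To make the overhead concrete, here's what steps 2 and 3 alone might look like. This is a minimal sketch with placeholder contacts and an assumed "email" field; a real list would come from your ESP or CRM:

    # Minimal sketch of steps 2-3: dedupe a contact list, then split it
    # into the three audiences (10,000 / 10,000 / the rest).
    import random

    # Placeholder data -- in reality this comes from your ESP or CRM export.
    contacts = [{"email": f"user{i}@example.com"} for i in range(25_000)]

    # Step 2: clean the data -- drop duplicate email addresses
    seen = set()
    cleaned = []
    for contact in contacts:
        key = contact["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            cleaned.append(contact)

    # Step 3: set up the audience -- 10,000 get email A, 10,000 get email B,
    # everyone else gets email C
    random.seed(42)               # fixed seed so the split is reproducible
    random.shuffle(cleaned)
    group_a = cleaned[:10_000]
    group_b = cleaned[10_000:20_000]
    group_c = cleaned[20_000:]

    print(len(group_a), len(group_b), len(group_c))

Random assignment (rather than, say, splitting the list alphabetically) is what keeps the three groups comparable.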

Each of these steps takes time, effort, and mental bandwidth to set up. If the upside isn’t worth it, you’ll have spent a lot of time setting up an experiment for very little payoff.

Should you A/B test or not? Here’s how to decide:

To recap, before you set up an A/B test, ask yourself these three questions.

  1. Will the results of this A/B test change my actions?

  2. Do I have a strong point of view and hypothesis going into the experiment?

  3. Is this directional, or is it hard science?