Stop A/B testing everything: Only run experiments that inform your future actions

There's a lot of hype about data-driven marketing. But almost everyone forgets to mention an important downside: overhead.

I've run hundreds of growth and marketing experiments over the years, both formally and informally. In hindsight, some experiments were literally not worth my time.

Why? Setting up an experiment requires you to define what you want to test, clean the data, track results, troubleshoot... Each of these steps takes effort. If you leap in headfirst, you might spend hours setting up an experiment with little payoff. By prioritizing which experiments to run, you can skip useless experiments altogether, conserve your energy, and focus on the experiments that will lead to meaningful leaps forward.

Before you A/B test, ask yourself:

“What will I do with this information?”

Think about whether the trade-off in your time, mental energy, and maintenance will give you a big enough reward.

It's not enough to have a takeaway that dies with one subject line you tested. You need a takeaway that informs your FUTURE copy, FUTURE messaging, and FUTURE strategy.

Run experiments when the results will inform your future actions and behavior. Start at the end. Assert what you think you'll gain from the experiment, then decide whether you should do the experiment at all.

“But Wes,” you say, “Don’t companies like Amazon and Facebook optimize everything?”

It makes sense for Amazon to A/B test because moving a button seven pixels to the left can make millions of dollars.

But for folks (like you) building something new, micro-optimization--the act of eking out a few percentage points of improvement in your funnel--is probably not worth your time.

Let's say, at the end of your experiment, you realize one headline performed 7% better. Great! Except you can't really describe the difference between the headlines. And you're not really sure what could have made one perform better than the other.
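And that 7% lift is expensive to even detect. A quick back-of-the-envelope power calculation shows how much traffic you'd need before a lift that small is distinguishable from noise. This is a rough sketch using the standard two-proportion z-test sample-size formula; the 3% baseline conversion rate and the usual 5% significance / 80% power thresholds are illustrative assumptions, not figures from this essay.

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, rel_lift, alpha=0.05, power=0.80):
    """Minimum visitors per variant needed to detect a relative lift
    in conversion rate, using the standard two-proportion z-test
    power formula (two-sided test)."""
    p1 = p_base                       # control conversion rate
    p2 = p_base * (1 + rel_lift)      # variant conversion rate
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = NormalDist().inv_cdf(power)          # power threshold
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a 7% relative lift on an assumed 3% baseline conversion rate:
print(sample_size_per_variant(0.03, 0.07))
# on the order of 100,000 visitors PER VARIANT
```

If you're a new product without that kind of traffic, the test may never reach significance at all, which is one more reason micro-optimization is a poor use of your time early on.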

So your takeaway is, "Hmm, that was interesting..."

Yikes. No actionable insight. No change to your approach. No nugget you can apply to your next experiment. If that was all you got from setting up an experiment, that’s low ROI for all the effort.

You could make dozens of incremental improvements to your copy, imagery, or phrasing. But none of them will lead you to a fundamental change in strategy, if that's what you needed all along.

The danger of being too obsessed with testing is that you're always working five feet above the ground. You might miss the 10,000-foot view: whether you should be doing something else entirely.

Takeaway: Before you set up an A/B test, ask yourself: "What will I do with this information? How will the results influence what I do next? Will I be more highly leveraged working on something else?"
