HOW VALID IS YOUR A/B TEST **REALLY**?

I ask because some A/B tests *aren't* valid.

And even a lot of the valid tests aren't the "scientific proof" we so often think they are.

I know, because I spent almost fifteen years leading high-speed, data-driven direct-response marketing teams.

My first post-MBA jobs in Corporate America were senior analyst roles on large companies' direct-response teams: coding in SQL and SAS, developing multivariate experimental designs, running and interpreting analyses, managing modeling projects, and training junior analysts.

It was smart, fun work. The error Younger Me made, though, was believing that our “data-driven” methods were fundamentally superior to the more qualitative approaches taken by creative professionals.

I thought “well, we get to the truth of what works; those creative people can only guess.”

I was more than a little annoying in my superiority complex.

But I was so sure that our “scientific” approach was much better at determining truth.

And I was wrong.

I thought of this while reading C.S. Lewis’s essay “De Futilitate” in which he writes:

“If popular thought feels ‘science’ to be different from all other kinds of knowledge because science is experimentally verifiable, popular thought is mistaken. Experimental verification is not a new kind of assurance coming in to supply the deficiencies of mere logic. We should therefore abandon the distinction between scientific and non-scientific thought.”

It got me thinking about the ways that we marketers routinely and sometimes wrongly feel “scientifically assured” by our A/B test results.

Here is a partial list of the ways in which a lot of direct-response tests don’t prove what folks think they prove:

  1. "It’s Not Statistically Significant But It’s Directional"

  2. Temporary Bumps

  3. Non-Replicable Results

  4. Invisible Invalidity

  5. Predicting Beyond the Data

I'll dig into each of these further in my next posts.
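To make point #1 concrete before those posts arrive: one common way to check whether a "directional" lift is anything more than noise is a two-proportion z-test. Here's a minimal sketch with made-up numbers (the function name and figures are hypothetical, purely for illustration), not a prescription for how to run your testing program:

```python
# A minimal, hypothetical sketch of the significance check behind point #1.
# Two-proportion z-test: did variant B really beat variant A, or is the "lift" just noise?
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (lift, two-sided p-value) for the difference in conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pool the rates under the null hypothesis that A and B convert identically.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / standard_error
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal tail
    return rate_b - rate_a, p_value

# Hypothetical test: B "looks" better (2.2% vs 2.0%), but at this sample size
# a gap that big is quite plausible under pure chance.
lift, p = two_proportion_z_test(conversions_a=200, visitors_a=10_000,
                                conversions_b=220, visitors_b=10_000)
print(f"lift: {lift:.3%}, p-value: {p:.2f}")  # ~0.2% lift, p around 0.3
```

In that made-up example, a 2.2% vs. 2.0% conversion "win" comes back with a p-value around 0.3, meaning a gap that size shows up by chance roughly a third of the time. That's exactly the kind of result that gets labeled "directional" and then quietly treated as proof.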

What's the point of this?

It's certainly not to invalidate direct response.

Hardly. After all, SUBLIMITY launches nearly every audience experience we co-create with our awesome clients through digital, direct-response promotion.

But I do want to help correct this idea that A/B tests "scientifically prove" much.

They really don't.

Remember, the results of a valid A/B test (and many aren't valid) simply show that "on these specific days, these specific people responded to these specific marketing treatments differently."

That's it.

Now... that's not nothing! It can be very helpful indeed. That's why we do so much testing.

But we so often make soooo much more out of it than that.

And we shouldn't.

A/B Testing Part II

A/B Testing Part III



Allen Thornburgh

Allen Thornburgh is a creative innovator who loves developing new audiences and new experiences for bold organizations determined to dramatically grow for maximum impact. To this end, Allen has an eclectic background of insights-driven Human Centered Design work, in-depth marketing analytics, nonprofit strategic leadership, expert co-creation facilitation and segment-driven direct-response marketing. As Sublimity's Principal and Executive Producer, Allen believes that we are in the early days of a revolution in nonprofit growth strategies. This revolution is focusing on new audiences and experiences as intensely as our sector has long focused on platforms and channels.

https://www.sublimity.co/team