A/B TESTING & EPISTEMOLOGY
The final three A/B testing errors to avoid:
1. Non-Replicability
If you repeat the same test multiple times, sometimes the results will flip.
But many folks take that first result and prematurely run with it as “proven”.
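To see how easily this happens, here’s a toy simulation (the rates and sample size are hypothetical, purely for illustration): the exact same test, with the exact same small true lift for variant B, run twenty times. Watch the “winner” flip.

```python
# Hypothetical illustration: re-running one identical A/B test many times.
# Variant B has a genuine (small) lift, yet the observed winner still flips.
import random

random.seed(1)

RATE_A, RATE_B = 0.020, 0.022   # assumed true conversion rates
N = 5_000                        # visitors per arm, per test

def run_test():
    conv_a = sum(random.random() < RATE_A for _ in range(N))
    conv_b = sum(random.random() < RATE_B for _ in range(N))
    return "B" if conv_b > conv_a else "A"

winners = [run_test() for _ in range(20)]
print(winners)
print("B won", winners.count("B"), "of 20 identical tests")
```

Anyone who stops after test number one has a real chance of “proving” the wrong variant.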
2. Invisible Invalidity
This is when a test checks out as valid and statistically significant. But…it still might not be “true”.
This is probably the biggest problem.
And if you read Jesse Singal’s great book The Quick Fix, you’ll discover that this is also why your favorite TED Talk social science experiment is probably junk (sorry).
Nearly all the tests that true data-driven marketing pros run are at quantities and confidence levels that check all the boxes for statistical validity…and yet.
When is the one time we amp up the quantities “just to be extra sure”? Mail acquisition.
We run that first mail acquisition test at a sensitivity level comparable to our normal tests.
But since making an error in such massive mailings would be truly financially destructive, we run one more, extra-high-quantity test just to be super-de-dooper sure about those initial test results.
Commonly, the previously observed winning result blurs away into nothing.
So what does that mean about all the other times we don’t do that extra-high-quantity testing? Valid? Invalid?
We don’t *really* know.
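If that feels abstract, here’s a hypothetical “A/A” sketch (made-up rates and quantities, not real campaign data): both arms are truly identical, yet roughly 5% of tests will still come back “statistically significant” at p < 0.05. Box-checking validity, zero truth.

```python
# Hypothetical illustration: "A/A" tests where there is no true difference.
# A standard two-proportion z-test still flags ~5% of them as significant.
import math
import random

random.seed(2)

RATE = 0.02      # one shared true conversion rate for both arms
N = 5_000        # visitors per arm
TESTS = 500

def looks_significant():
    a = sum(random.random() < RATE for _ in range(N))
    b = sum(random.random() < RATE for _ in range(N))
    p_pool = (a + b) / (2 * N)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / N)
    if se == 0:
        return False
    z = abs(a - b) / N / se      # pooled two-proportion z-statistic
    return z > 1.96              # two-sided p < 0.05

hits = sum(looks_significant() for _ in range(TESTS))
print(f"{hits} of {TESTS} no-difference tests looked 'significant'")
```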
3. Predicting Beyond the Data
You might remember how insanely inaccurate the overhyped models in the early days of the 2020 pandemic proved to be.
That’s because models can never be validly used to predict outcomes that don’t exist in the source data upon which they are built.
This is regression modeling 101.
The direct response industry is fairly safe from this error (not so the environmental industry), but it’s part of why response models are so much more reliable than lookalike models.
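Here’s a toy sketch of that failure mode (hypothetical response curve and volumes, not anyone’s real model): fit a straight line to the mail volumes you’ve actually tested, then extrapolate far past them. The true curve saturates; the linear fit doesn’t know that.

```python
# Hypothetical illustration: a linear model fit on observed mail volumes
# (1k-10k) extrapolated to 100k, where the true response curve saturates.
import math

def true_responses(volume):
    # Assumed "true" world: diminishing returns as volume grows.
    return 2_000 * math.log1p(volume / 1_000)

xs = [1_000 * i for i in range(1, 11)]        # observed volumes: 1k-10k
ys = [true_responses(x) for x in xs]

# Ordinary least squares, by hand.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

for volume in (10_000, 100_000):              # inside vs. far outside the data
    predicted = slope * volume + intercept
    print(volume, "predicted:", round(predicted),
          "actual:", round(true_responses(volume)))
```

Inside the observed range the fit looks great; at 100k it overshoots the true number several times over.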
Now, what’s the point of picking on A/B testing?
I use A/B testing, after all.
I’m really picking on our overreliance on data and testing.
Simply put, A/B test results often aren’t the rock-solid, “scientific” “proof” that we think they are.
Are they useful guidance? Yes.
But they don’t exist in some special category that is “definitely more true” than every other way of knowing.
We need to correct our imbalanced marketing epistemology (how we know what’s true) by also weighing indirect attribution, informed intuition, experiential senses, creative discernment, and much else.
These are all important ways to discover what’s true. We need to stop letting A/B testing dominate them all.
Use A/B testing? YES! Let it dominate? No.