We’re in the business of turning big data into big ideas. Specifically, marketing ideas.
But it’s all too easy to be seduced by the first idea that comes along. After all, time is short, and the board are waiting to hear about your new initiative, not to mention the projected returns it will deliver.
You feel pressure to deliver data-driven innovation at the same pace as every other department and team. Weekly sales reports, monthly targets, and daily trading updates are all part of the drumbeat that runs through every retail operation. Your desk is no different.
So who can blame you for wanting to build a narrative, take a run at the testing, and hope it all comes out in the wash? After all, big data is the hot new thing and you are at the controls, with your own budget line.
When Steve the FD pops his head around the marketing office door and asks “how’s that data stuff going?” you want to be able to surprise and delight, right?
Several months down the line you want to be able to show Steve and the rest of the board that you have a confident set of predictions on future behaviour, based on past testing.
So, here are a few key concepts, our ‘three amigos’, to familiarise Steve (and his ilk) with:
1. Run it again

Your new social targeting based on enriched audience data worked a treat and you’ve got a 22% uplift in sales. But before you decide which fancy PowerPoint transition to use to announce it at the team meeting… run it again. And one more time for luck. Still getting around 20%? Now you have a result worth bringing along with the doughnuts and craft beers at 4pm.
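To see why one 22% reading isn’t a result yet, here’s a minimal sketch, using entirely made-up numbers (a 5% base conversion rate, a true 20% uplift, 2,000 visitors per arm), of how much the measured uplift can swing between identical runs:

```python
import random

random.seed(42)

def measure_uplift(base_rate, true_uplift, n):
    """Simulate one A/B test with n visitors per arm; return the observed uplift."""
    control = sum(random.random() < base_rate for _ in range(n))
    treated = sum(random.random() < base_rate * (1 + true_uplift) for _ in range(n))
    return (treated - control) / control

# Three runs of the *same* test: the true uplift never changes, the reading does
for run in range(1, 4):
    print(f"run {run}: {measure_uplift(0.05, 0.20, 2000):+.0%}")
```

If repeated runs still cluster around 20%, you have something real; if they bounce between 2% and 40%, you were looking at noise.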
2. Confounding variables

Multiple changes at once can muddy the waters. Did that new 10% off voucher go live at the same time as the web team reduced the number of questions on the cart page? If four things happened together (not to mention the seasonal ebb and flow) you’ll need to re-run the test to work out which was the main driver of sales uplift.
You’re not alone here. Confounding is one of the key issues in statistics, and one of the main sources of error, especially in complex fields like medicine and marketing. It’s an area where machine learning, given enough data, can help to isolate patterns and pin down the root cause of a change in performance.
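Rather than re-running tests one change at a time after the fact, you can also design the test to separate the drivers up front. Here’s a sketch of a 2×2 factorial test using the voucher and cart-page example above; every conversion rate in it is invented for illustration:

```python
import random

random.seed(7)

# Invented conversion rates for each combination of the two changes
RATES = {
    (0, 0): 0.050,  # neither change
    (1, 0): 0.056,  # 10%-off voucher only
    (0, 1): 0.059,  # shorter cart page only
    (1, 1): 0.065,  # both changes together
}

def converted(voucher, short_cart):
    return random.random() < RATES[(voucher, short_cart)]

# Randomise visitors across all four cells instead of shipping both changes at once
n = 20_000
observed = {cell: sum(converted(*cell) for _ in range(n)) / n for cell in RATES}

# Average each change's effect over both levels of the other change
voucher_effect = (observed[(1, 0)] + observed[(1, 1)]
                  - observed[(0, 0)] - observed[(0, 1)]) / 2
cart_effect = (observed[(0, 1)] + observed[(1, 1)]
               - observed[(0, 0)] - observed[(1, 0)]) / 2
print(f"voucher effect:   {voucher_effect:+.4f}")
print(f"cart-page effect: {cart_effect:+.4f}")
```

Because every combination gets its own traffic, each change’s contribution can be read off separately instead of being tangled up with the other.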
3. Twyman’s law
Any figure that looks unusual or interesting is probably wrong. This is a favourite at Vuzo towers. We find a suspiciously high number of people born on 1st Jan 1980 who harbour resentment about filling in unnecessary DOB fields. There are plenty of glitches like Potwin’s Farm to mess up your geo-targeting. And that’s before you run into Simpson’s Paradox or a hole in your Google Analytics visitor stats from the clocks going forward an hour.
Identifying systematic errors is not something humans are terribly good at – hence tools like Bayesian statistics and ML can help decide what is really a glitch and what is simply something we don’t like the look of. (Incidentally, one of the main sources of distortion in stats is dropping data points because you think they are outliers.)
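A gentler habit than deleting suspicious points is to flag them for investigation first. Here’s a toy sketch, with hypothetical sign-up records, that surfaces a 1st Jan 1980 pile-up instead of silently dropping it:

```python
from collections import Counter
from datetime import date

# Hypothetical sign-up records: dates of birth, with a suspicious default value
dobs = [date(1985, 3, 14), date(1990, 7, 2), date(1980, 1, 1),
        date(1980, 1, 1), date(1980, 1, 1), date(1980, 1, 1),
        date(1978, 11, 23), date(1992, 5, 9), date(1980, 1, 1)]

counts = Counter(dobs)
n = len(dobs)

# More than a quarter of all records sharing one birthday is implausible —
# flag it as a likely form-filling artefact, don't silently drop it
suspicious = [d for d, c in counts.items() if c / n > 0.25]
print(suspicious)  # → [datetime.date(1980, 1, 1)]
```

The flagged dates then get a human decision: artefact of the form, or genuine data you just don’t like the look of.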
Once you’ve helped poor Steve get his head around that lot, without making him look foolish (or he won’t sign off your expenses next month), he’ll understand why you’re taking the time to do things properly.