Amazon Ads feature a reliable attribution model, but that model has two major downsides.
We often lose the ability to measure the impact of top-funnel advertising on sales when lower-funnel advertising intercepts customers on the way to purchase. Thankfully, Amazon Marketing Cloud solves part of the issue with Path to Conversion by Campaign, which lets us visualize the chain of events leading up to a purchase. But another issue remains, namely our inability to prove the incrementality of ads from the ad-attributed sales number alone. In other words, Amazon Ads attribution doesn’t give us an answer to the following question: would these sales have happened without the ads?
Measuring the incrementality of Amazon Ads can be done through experiments, but these aren’t a viable option for smaller advertisers who can’t afford to run generic ads against a control group.
What’s the alternative?
What if we could prove a correlation between the amount of advertising spend against a product and that product’s Amazon sales? Evidence that increases in spend lead to growth, and decreases lead to a decline in Amazon sales, could serve as proof of correlation, and therefore of incrementality. One could object that spending on ads featuring Product 1 can very well lead to sales of Products 2, 3, and 4. However, where incrementality is concerned, we can assume that increasing exposure of Product 1 via ads should mainly drive growth of that product, and its effect on the other products can be downplayed for this test.
To test the above, one would need to obtain daily advertising spend numbers by featured product and line them up against daily Amazon sales by product, which is a feat in itself. The two sets of data come from different sources and require harmonisation and formatting, which is not something you’d want to do by hand every day. In addition, you’d need to follow a rigorous naming convention across all of your Amazon Ads (Sponsored Ads, DSP) and be able to split product-level spend by your advertising hierarchies, such as campaign type, strategy, keyword, audience, etc.
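As a minimal sketch of that alignment step (the column names and figures below are hypothetical, not the actual export formats), the two daily feeds can be joined on date and product with pandas:

```python
import pandas as pd

# Hypothetical daily export from the ads side: spend by featured product.
spend = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-02", "2024-01-03"],
    "asin": ["B000TEST01", "B000TEST01", "B000TEST01"],
    "ad_spend": [120.0, 150.0, 90.0],
})

# Hypothetical daily export from the retail side: sales by product.
sales = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-02", "2024-01-03"],
    "asin": ["B000TEST01", "B000TEST01", "B000TEST01"],
    "ordered_revenue": [980.0, 1210.0, 760.0],
})

# Harmonise the date type, then line the two sources up on (date, product).
for df in (spend, sales):
    df["date"] = pd.to_datetime(df["date"])

daily = spend.merge(sales, on=["date", "asin"], how="outer").fillna(0.0)
```

An outer join keeps days where a product had spend but no sales (or vice versa), which is exactly the signal the correlation analysis needs.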
Now that we have the necessary data broken down by date, we can use the Pearson correlation coefficient (r) to analyse it. It measures “the strength of association between two continuous variables” as a value from -1 to 1, with -1 representing a perfect negative correlation and 1 a perfect positive correlation. A value of 0 indicates no linear correlation at all.
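As an illustration, the coefficient can be computed from its definition on a pair of aligned daily series (the figures are made up for the example):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Hypothetical aligned daily series for one product.
daily_spend = [120, 150, 90, 200, 170]
daily_sales = [980, 1210, 760, 1590, 1400]
r = pearson_r(daily_spend, daily_sales)  # close to 1: sales track spend
```

In practice, `scipy.stats.pearsonr` or pandas’ `Series.corr` does the same job, and scipy’s version also returns a p-value.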
Doing this at scale lets us identify not only whether our ads affect product sales, but also which elements of advertising have the highest impact. In our tests we’ve seen the coefficient go as high as 0.9 on ads featuring recently launched products with no organic sales. We’ve also seen it go as low as -0.12, indicating that the ads had little to no impact on Amazon sales of the featured product, even leaning towards a negative effect.
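Running the analysis at scale can be sketched as a group-by over the advertising hierarchy; the rows and column names below are assumptions for illustration, not the actual report schema:

```python
import pandas as pd

# Hypothetical daily rows: spend split by advertising element (here,
# campaign type), already joined to the sales of the featured product.
daily = pd.DataFrame({
    "campaign_type": ["SP", "SP", "SP", "SP", "DSP", "DSP", "DSP", "DSP"],
    "ad_spend":      [100, 140, 80, 160, 50, 70, 40, 90],
    "ordered_units": [30, 41, 25, 47, 12, 11, 13, 12],
})

# Pearson r per element: high values flag practices worth scaling,
# values near or below zero flag candidates to optimize or remove.
r_by_type = (
    daily.groupby("campaign_type")[["ad_spend", "ordered_units"]]
         .apply(lambda g: g["ad_spend"].corr(g["ordered_units"]))
)
```

The same split works for any level of the hierarchy (strategy, keyword, audience) as long as spend can be attributed to it.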
What does one do with this data?
At Amerge, we have automated the above process and continuously rerun the analysis to identify which advertising practices are incremental and which aren’t. When something shows a strong correlation, we try to scale it; when something doesn’t, we either optimize or remove that piece of advertising. We’ve found that advertising to customers who show no signs of organic interest, or haven’t displayed such signs recently, leads to the highest correlation numbers (despite a low advertising ROAS). This method has also demonstrated a strong positive correlation (0.25 to 0.6) for top-funnel advertising, such as Amazon Streaming TV and Fire Tablet campaigns run via Amazon DSP, something we struggled to show using standard attribution.
What do you think about this form of analysis, and what would you do with such data?