Tactics in A Vacuum: Reviewing Carryover Effect with Machine Learning – Part 1
At Sentier, we spend much of our time analyzing and working with the marketing data that pharmaceutical companies collect. These data assets stem from the many tactics and transactions that brand teams use to promote their brand. Some of this data is digital (clicks, impressions, sessions), some of it is related to personal promotions, and then there is a litany of other tactics to drive day-to-day consumer brand awareness. The ads that end with “Talk to your doctor if you think [your favorite drug here] is right for you” account for a lot of the marketing spend, but they also generate a very usable trail of data.
While our approach is to “let the data tell us,” it is not uncommon to have to overcome objections and findings simply because of long-held, but never really proven, assumptions and conclusions. Carryover Effect — the amount of time between exposure and response to a marketing tactic — is one of those assumptions. We also refer to Carryover Effect as Lag or Advertising Adstock, but the concept is basically the same.
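To make the Adstock idea concrete, here is a minimal sketch of the classic geometric adstock transformation that underlies most Carryover Effect assumptions. The decay rate used here is purely illustrative, not a value from our models:

```python
# Minimal sketch of geometric adstock: each period's effect carries over
# a fixed fraction of the previous period's accumulated effect.
# The decay rate of 0.5 is a hypothetical value for illustration only.

def adstock(spend, decay=0.5):
    """Return adstocked values for a spend series under geometric decay."""
    carried = 0.0
    out = []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

weekly_spend = [100, 0, 0, 0]      # one burst of spend, then nothing
print(adstock(weekly_spend))       # effect persists and decays: [100.0, 50.0, 25.0, 12.5]
```

The key assumption baked into this formula is that the decay rate is a fixed constant for the tactic, which is exactly the assumption we question below.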
A single television ad is presupposed to have a lasting effect on viewers; the assumed Carryover Effect for an ad (depending on the industry) can range from three weeks to three months. This means that while someone is watching TV, he or she sees an ad for a brand and remembers that ad for as long as three months.
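The three-week-to-three-month range corresponds, under the geometric decay assumption, to a choice of decay rate. A rough sketch of that mapping, using hypothetical weekly decay rates chosen only to span the range above:

```python
import math

# Illustrative only: how long a geometrically decaying ad effect persists
# for a given weekly decay rate. The decay rates below are hypothetical,
# picked to span the three-week-to-three-month range assumed in practice.

def weeks_until(decay, threshold=0.05):
    """Weeks until the carried-over effect falls below `threshold`
    (5% by default) of the original exposure, under geometric decay."""
    return math.log(threshold) / math.log(decay)

for decay in (0.3, 0.6, 0.8):
    print(f"weekly decay {decay}: effect < 5% after {weeks_until(decay):.1f} weeks")
```

A weekly decay near 0.3 exhausts the effect in roughly three weeks, while a decay near 0.8 stretches it past three months of measurable influence, so the entire assumed range hinges on one parameter that is rarely validated.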
But is this really the case?
Making the Case for Carryover Effect
Our conversations with brand teams and leaders often center on these long-held assumptions about Carryover Effect, and on how those assumptions contrast with the results we pull from our marketing mix models and analytics. The reality is more complicated than the assumptions allow.
The traditional methods for assuming the lasting effect of a marketing tactic have contributed to how companies fail as they perform their marketing mix analysis. Tactics in a vacuum often do not have a substantial Carryover Effect at all. Yet, when modeled correctly, we can show that Adstock values are dynamic and vary significantly when combined with other tactics. More than that, the frequency and timing of correlated tactics matter.
What is significant here is that even in the new world order of basing decisions on rigorously modeled data, concepts like Carryover Effect still hold a lot of currency. However, nothing is sacrosanct in data science, and we have proven Carryover Effect to be elastic with our machine learning-driven models.
Which raises a lot of questions.
In Part 2, we will look at how Carryover should be determined, and how marketing decisions are changing in light of reduced and dynamic lag times.