Precisely how to quantify and place a value on viewer engagement is still, at best, an inexact science. Advertisers, networks, investors and content producers would all very much like to know what an engaged viewer is worth to them. It's a question I've personally pondered in some of my own research, at school and at work.
My usual Monday morning haze cleared when I found "What's the Value of an Engaged Viewer?" in my daily scan of Advertising Age online. I read it with eager anticipation, but walked away with more questions than answers.
The article describes the results of new research by Omnicom Group's OMD that was presented to an Advertising Research Foundation forum in June. According to the article, the research concluded "[o]ne engaged viewer is worth eight regular viewers", as "engagement with media and advertising drive sales, but it could also drive sales more than media spending levels". The study also found that factoring engagement into advertising analysis for the three financial services companies involved in the study increased ROI by 15-20% over models that rely on ratings alone. To reach these conclusions, the researchers used a "proprietary engagement measure" to assess engagement with media, and copy-test results to measure engagement with advertising.
The article also includes a pie chart showing three factors the researchers say influence brand preference: engagement with the ad (49%), how much the brand is liked to begin with (20%), and engagement with the media where the ad was seen (31%).
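As a rough illustration of how such a breakdown might be used (this is my own hypothetical sketch, not OMD's actual model), the three pie-chart percentages could be read as weights in a simple brand-preference score:

```python
# Hypothetical sketch, not OMD's model: treat the pie-chart
# percentages as weights on three factor scores, each on a 0-1 scale.
WEIGHTS = {
    "ad_engagement": 0.49,       # engagement with the ad
    "media_engagement": 0.31,    # engagement with the media where the ad ran
    "prior_brand_liking": 0.20,  # how much the brand is liked to begin with
}

def brand_preference(scores: dict) -> float:
    """Weighted sum of the three factors; inputs assumed on a 0-1 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A viewer highly engaged with the ad but indifferent to the brand:
print(round(brand_preference(
    {"ad_engagement": 0.9, "media_engagement": 0.5, "prior_brand_liking": 0.2}
), 3))  # 0.636
```

The factor names and the 0-1 scale are my assumptions; the article gives only the percentages, not how they were measured or combined.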
The article paraphrases a disclaimer of sorts by OMD execs that broader research is needed to prove a causal link between engagement and sales or determine the relative importance of media weight, media engagement and advertising engagement.
Sounds pretty good, right? It did to me, until I started to think about it. I haven't seen the actual research, but I am still left with nagging questions about the interplay of media and advertising engagement in generating sales, and about the measurements and methodology used in the study.
In essence, what does the 1:8 ratio really mean, how valid is it, and does engagement in programming plus engagement in advertising really generate more sales?
My first question: how exactly do media and advertising engagement play off one another to generate more sales? The article indicates that engagement with the media has three times the impact on sales of media weight alone, and engagement with the advertising eight times. However, two important points are left unexplained.
The first is the cumulative impact of engagement with the media and with the advertising. Demonstrating that impact would be important to their conclusion that a small media spend in media with engaged viewers could "work wonders", because engagement with the media would, in effect, rub off on the advertising. Although the pie chart breaks the origins of brand preference into percentages, there's no indication of how those percentages relate to actual sales. In other words, some of the findings suggest that the collective impact would be greater than engagement with the programming or advertising alone, but no figures are presented to back that up. Perhaps I could get the 3x result from engaged viewers by placing any ad in any commercial break, but that isn't as compelling as collective impact, for networks or for advertisers. "Engaging" advertising with 8x impact is great, but if someone has TiVo-ed the program, and engagement in the show itself isn't driving engagement in the ad, none of the networks' problems will be solved.
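To make the ambiguity concrete, here is a hypothetical sketch (my own, not from the study) of two ways the reported 3x and 8x multipliers could combine. They lead to very different answers about collective impact:

```python
# Hypothetical illustration, not OMD's actual model: the article gives
# separate multipliers for media engagement (3x) and ad engagement (8x)
# relative to media weight alone, but never says how they combine.
MEDIA_ENGAGEMENT_MULT = 3.0  # assumed: viewer engaged with the program
AD_ENGAGEMENT_MULT = 8.0     # assumed: viewer engaged with the ad itself

def sales_impact_compounding(base_impact: float) -> float:
    """If the effects compound, an engaged viewer of an engaging ad
    in an engaging program is worth 24x media weight alone."""
    return base_impact * MEDIA_ENGAGEMENT_MULT * AD_ENGAGEMENT_MULT

def sales_impact_independent(base_impact: float) -> float:
    """If the effects are independent lifts over the same baseline
    (counting the baseline once), the combined impact is only 10x."""
    return base_impact * (MEDIA_ENGAGEMENT_MULT + AD_ENGAGEMENT_MULT - 1)

print(sales_impact_compounding(1.0))  # 24.0
print(sales_impact_independent(1.0))  # 10.0
```

Nothing in the article rules either combination in or out, which is exactly why the missing cumulative figures matter.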
The other point that is somewhat ambiguous is how engagement was actually measured and then linked to sales. According to the article, OMD used a "proprietary engagement measure" - which included how often people said they watched, but no word on what else - to gauge engagement with the program, and copy-test results to measure engagement with the advertising. The data was then plugged into a model to analyze "how much engagement with programming and with ads themselves drive sales."
The engagement measure is of critical importance to the findings, yet there is not much information available about what is in it. The factors that determine "engagement" - and, even more importantly, how those factors are measured and weighted - may be proprietary, but it is impossible to really evaluate the findings without some sense of what went into obtaining them or how the factors were selected for inclusion in the index. It also makes it impossible for other parties to repeat the experiment and validate the results. Until that is done, claiming validation of a model may be a little premature. More on that in my concluding post on the subject, later today.