See my first post on this subject here.
Another problem is how advertising engagement was measured. Copy test results don't measure engagement; they measure how much people liked an ad, which is not the same thing. I've liked plenty of ads for products I don't need, don't really like, and don't buy. I have also seen a number of studies suggesting that people may like an ad - and the copy test results may therefore be good - yet still not remember what product was being advertised, let alone buy it because of their engagement with the ad.
This brings us to the sales data, which is vitally important in demonstrating that causal link. Ideally, one would track the buying patterns of the people in the study to establish a causal link between those patterns and engagement with the ad. Some qualitative data from the group would also be helpful. However, I'm not sure what sales data was used. General sales data would be extremely problematic in a model, because it would not reflect the specific group from whom the original engagement data was collected, and therefore could not demonstrate a clear link between engagement and behavior.
Particularly with general sales data, but really in any case, one would also need to control for a host of other variables that drive people to buy: price, convenience, availability, complements and substitutes, family preference, etc. One of the researchers rightly commented that how much advertising people had seen before (media weight) was an important factor that needed to be investigated in further research.
It seems that this would be particularly difficult to do with financial services because there are so many variables. Many of the products' features (and possibly a customer's preferences) are driven by personal circumstances: different customers may get very different lines of credit, for example. Choosing a product that is basically homogeneous, with no switching costs, which everyone needs to buy with some regularity, and where brand can be a key driver (Dish soap? Toilet paper?) would make for cleaner and, I would argue, more accurate results. Ideally, you would also look at this information over a period of time with a defined group of at least a few dozen people, to check for consistency in responses to the advertising and programming, but it is unclear if this was done here.
So, going back to my original questions, what does this 1:8 ratio really mean? Without more information, I'd have to say not much at this point. There's little doubt that someone who likes a brand will be more likely to buy its products, but the linkages between engagement and purchasing behavior are not "proven" here. More research is definitely needed to observe the relationship between the two over time.
Moreover, networks and advertisers should not, in my opinion, rejoice just yet. The ratings debate we saw at the upfronts is far from over. Engagement with a program still doesn't mean people will necessarily sit through an ad, find it engaging, or buy the product as a result. Similarly, I don't think this study, from what I have seen, shows that liking an ad is a big factor in purchase decisions. These are all old questions that are extremely tough to answer.
OMD also claimed that the results validated their proprietary measure. For all of the reasons outlined here, I am a bit skeptical about that, but very interested in what future research reveals.