August 20, 2007
MTVN, Second-by-Second Measurement, and Accountability

A few interesting stories have come out in the past couple of weeks relating to shifting advertising structures. First, for those who love the idea of quantifying things down to the nth degree, you might be interested in the story that circulated earlier this month about MTVN's decision to break viewership down to "second-by-second" measurement.

As you all know, MTV Networks is a partner in the Convergence Culture Consortium, so we like to think that might be evidence that they are interested in reconceptualizing the way the industry works, since much of the work we do is about understanding new ways of organizing the system, new ways to tell stories, and new ways to understand, interact with, and respect viewers. Second-by-second measurements are intended to create really deep ways of understanding viewership patterns, particularly during advertising breaks.

In Brian Steinberg's Advertising Age piece about MTVN's deal with TNS for second-by-second data, Colleen Fahey-Rush makes the point that even minute-by-minute ratings don't give advertisers specific metrics on their particular ad, lumping them in at best with another 30-second spot. Providing the second-by-second measures is one way in which TNS is proving itself.

Just as important, though, are all the caveats provided in the article: that the data comes only from digital cable subscribers in the Los Angeles area, for instance, and that those 300,000 digital cable subscribers--who are with Charter Communications--cannot be measured during DVR playback; the system can only register that the program in question has been recorded.

The whole situation actually makes me think back to an experience I had back at Western Kentucky University, where a big controversy broke out about grading. I agree with the sentiments of Dr. Ted Hovet, a consulting researcher with the Convergence Culture Consortium, when he said that grades were somewhat arbitrary markers that only give a minimal amount of useful data, and that what would be truly better would be a letter from each professor a student has along the way explaining the strengths and weaknesses of their performance in a class. However, Dr. Brian Strow--an economics professor whose class I took and whom I greatly respected--was making a strong push for the university to adopt a plus/minus grading system.

How does that anecdote relate? I feel much the same about second-by-second measuring. I still think it's important to realize that these measurement numbers, taken on their own, are somewhat arbitrary and only provide a limited amount of insight, but it is generalized and quantifiable insight that makes shows and commercials comparable, which has definite value. In a perfect world, there would be a way to avoid the generalizations that come along with quantification, but we don't live in a perfect world, and there has to be some way to make comparisons.

So, if numerical ratings or measurements are going to be at the center of the business model for television advertising, we should seek to make them as in-depth and detailed as possible, just as plus/minus grading is more valuable than A, B, C, D, and F.

We haven't talked with MTVN about these second-by-second ratings, and a lot of our research about ratings focuses on ways in which ratings can be supplemented or conceptualized beyond the core data, but I have consistently followed efforts to make ratings more accurate on the blog. In short, I think that anything that makes the data our industry runs on more accountable should be praised, since we have to fix the system as we go. As I pointed out in the comments here, tomorrow's always another working day, and the media industry doesn't stop for a tune-up.

For more on the subject, look here and here.


On August 21, 2007 at 10:29 AM, Ted Hovet said:


I think your analogy is quite apt. While this is a bit too simplistic, what we have in both grading and measurement of audience response is a division between the "hard" data of quantifiable measurement and the "soft" data of storytelling. To evaluate a student, or to evaluate the impact a particular piece of media is having on an audience, we can tell a story or we can try to crunch it down into numbers.

The problem is that numbers have no meaning outside of the story that we then tell about them. While a B+ might be more precise than a B, it still has no meaning in and of itself. Knowing exactly how many viewers stay with a commercial and how many leave it gives us more precise numbers about viewership, but those numbers again mean nothing without a narrative constructed around them.

In my view, a narrative might be more persuasive or compelling with some data to accompany it, but the power of the story we are telling about that data rests in the power of the narrative itself.


Perhaps it boils down to this: numbers without context have little meaning. Context without scale has limited validity. I understand the need for generalizing, but to do so sometimes at the detriment of what is being analyzed is a shame, especially when it leads to artificial knowledge that an industry continues to run on, whether in academia or in the media.