As many regular readers of our blog know, one thing that interests several of us here at C3 is audience measurement. There are a variety of debates about audience measurement; a couple of us are invested in individual projects examining how measuring only the quantity of views (impressions) falls far short of capturing the qualitative relationships people have with that content. But we also often cover a problem that Louise Story examines in today's New York Times: discrepancies in counting.
Story looks particularly at Web views. She points out that various services report drastically different page view numbers for the same sites, which makes standardizing Web advertising rates problematic. And it's hard to tell at this point whether the industry wants to find the numbers that best reflect reality or simply bring all the systems closer to one another, so that there is a metric to do business on, whether or not that measurement is accurate. That, after all, is the consensus many have reached about the Nielsen television ratings: we need a number to do business on, and consensus around that number matters more than its accuracy.
The article points out the groups whose viewing may produce these discrepancies, including minorities, college students, and those using the Internet in the workplace (which, experience tells me, is not an insignificant number). Story notes that this is especially the case for news sites and other destinations that people may visit frequently if they work at a computer.
See several of our recent posts about audience measurement here. The problems we've written about so many times before, such as the difficulty of measuring Web viewing with any single metric, become even thornier when you take into account the fact that we can't even agree on how to count page views.
I have experienced this myself with the C3 site, in playing around with tools like Google Analytics. There are often people who I know have visited our site, who respond to me about things they have found there or write comments in the comments section, yet who do not appear when I look at a geographic breakdown of where viewers are coming from. There are major questions about how international audiences are and should be counted, and whether raw numbers or panel numbers are more valuable from the perspective of monetization.
That doesn't even begin to take into account questions of how to measure engagement online.
I often say that the danger in numbers is that quantitative data is often even more subjective than contextualized qualitative data, yet Western thought automatically prioritizes numbers as somehow more scientific and objective, in ways that can be detrimental to truly understanding a phenomenon.
The single most important question moving forward is how much we prioritize an accurate ratings system versus a consensus ratings system. After all, one is much easier to reach than the other. One hundred percent accuracy is impossible, since we can argue endlessly about the weight given to page views, versus time spent on a page, versus more active engagement with content and ads, and so on. But how close can we get to the myth of complete transparency in page views online? As always, we'll continue to follow and analyze these issues here at the Consortium.