August 8, 2007
The Problems with Measuring Reputation in the PR Industry

I've recently had the pleasure of connecting with some intelligent folks over at Peppercom, a public relations company that serves a variety of interesting clients, from The Columbia School of Journalism to Netflix to Panasonic to Tyco.

Ed Moed, one of Peppercom's co-founders, recently wrote a piece about the dangers in how quantitative metrics for measuring corporate reputation are understood in the public relations industry.

Considering all that we've been writing lately about metrics in relation to the Nielsens, engagement, and both the television industry and the success of Web advertising (look here), I found his perspective on the dangerous assumptions that always stand behind what get accepted as "hard numbers" to be illuminating.

In short, he looks at a recent study which measures company reputations on the basis of the amount of positive press each company has received. Ed's point, however, is that articles touting the release of some new product or service don't necessarily mean that readers, or the media itself, view those companies positively, just that those companies received some positive coverage.

His post emphasizes that the metrics measure something, but not what they purport to measure (similar to how measuring how much someone watches a certain show tells you only that they watch that much, not whether they are engaged).

Ed writes:

Can one make the indirect assumption that generating positive media coverage will ultimately lead to a better reputation (or a better reputation with the media in this case)? Sure. But, what Delahaye has generated (and our industry continually does) are outputs, not end business or reputation outcomes that most CMOs, and certainly all CEOs, actually care about. Thus, the name of this particular Index is misleading.

We don't have a public relations partner here at C3, but this type of work greatly interests us, because the question about all quantitative research is what it actually tells us; statistics can always be misleading if you misunderstand what was being asked and what the results actually say. In particular, when thinking about the importance of branding, this challenges the idea that positive publicity automatically leads to positive feelings from readers, and it shows how such a measure is of course biased by a major product rollout (in this case, the iPhone launching exclusively with AT&T/Cingular).

Reader kdpaine, who says they were working at Delahaye when the index launched, writes, "I find it particularly amusing that while the trades trumpet 'AT&T's top reputation score,' Cingular comes in second in the 'sucks' category in the blogs. When people's opinions are counted, it's amazing how different the results can be, eh?"

Amazing indeed.

5 Comments

 

Sam,

Thanks for your interest in this topic.

As you could probably tell, I'm a tad passionate about metrics as they pertain to public relations agencies, because studies like these only help perpetuate the perception of the discipline as one that can't be measured.

Your post reinforces the need to educate many about how quantitative metrics can and should be utilized to produce real end outcomes.

Ed

On August 9, 2007 at 9:10 AM, KDPaine said:
 

Sam, so glad to see your interest in this topic. It's definitely heating up. Check out a comment today from Mike Daniels on my blog http://kdpaine.blogs.com. There's more on our newsletter blog as well: http://kdpaine.blogs.com/themeasurementstandard

 

Passion about metrics may sound bizarre to some, Ed, but I think what distinguishes you from others is your willingness to question the validity of the metrics, rather than the blind passion of simply wanting numbers to trust.

The problem with all metrics is that they will always try to standardize something that can't be perfectly compared, since numbers, by their very nature, eliminate nuance. I think the way you make these numbers meaningful is to acknowledge that pitfall up front and use it to improve the metrics. You don't really get anywhere by claiming, on the one hand, that metrics are useless, or, on the other, by never rocking the boat at all.

In your case, the question isn't even about the data collected but about how people are using the results and what they claim those results mean. The study does tell us something interesting, something worth measuring, but people are taking the results to mean something other than what they do.

And thanks for the links, Katie!

 

Sam - I agree with the last sentence in Ed's piece, which says, "Until our industry starts to call a spade a spade by actually treating outputs as positive indicators that can lead to real outcomes, but not actually the end factors themselves, we won't be taken seriously."

As I commented on Ed's blog, we do have new research (hundreds of competitive analysis studies utilizing millions of clips) that indicates a company's "share" of positive unpaid media does correlate extremely well with hard business results like sales, survey preference scores, etc. So there is obviously a strong link between outputs and outcomes. However, it's critical to utilize outputs as "linkage" data only, not as outcomes in and of themselves. I do believe the industry is making progress towards 'real' measurement! You might be interested in a new white paper we just co-wrote with Dr. Don Stacks and Dr. David Michaelson, "Exploring the Link between Share of Media Coverage and Business Outcomes," linked on the homepage at http://www.instituteforpr.org.

 

Angela, thanks for the comments. I appreciate your linking me and the C3 community to further research as well. I think you're right that common sense indicates positive press is likely to help a company's standing tremendously, which doesn't make tracking the amount of positive unpaid media useless in any regard. It's the conflation, the assumption that it necessarily equals reputation, that can be dangerous, especially as those statistics leave the hands of the company that released them and get quoted elsewhere, with the context of how the numbers were gathered stripped away. But that's the nature of quant in the media anyway: much of what's reported gets taken out of context along the way, and losing that context can distort the numbers dramatically. It all goes to show that even quantitative data relies on the qualifiers around it to have meaning, so there are very few cases in which quant data isn't reliant on some degree of qualitative positioning to be relevant.
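To make the outputs-versus-outcomes distinction a bit more concrete, here is a minimal, purely illustrative sketch in Python. The quarterly figures are hypothetical and have nothing to do with the Delahaye index or the Institute for PR study; the point is only that a correlation between a company's share of positive coverage and its sales shows linkage, which is a far weaker claim than saying the coverage share is the reputation or the business result.

# Illustrative only: hypothetical quarterly data for one company.
# "share" = the company's share of positive unpaid media coverage (percent).
# "sales" = sales for the same quarter (millions of dollars).
share = [12.0, 15.5, 14.2, 18.9, 21.3, 19.8, 24.1, 26.5]
sales = [310, 325, 330, 355, 372, 368, 401, 415]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(share, sales)
print(f"Correlation between coverage share and sales: r = {r:.2f}")
# A high r only says the two series move together (linkage data).
# It does not say the coverage caused the sales, and it does not make
# coverage share a measure of reputation in itself.

The number on its own can't tell you whether the coverage drove the sales or merely accompanied them; supplying that interpretation is exactly the qualitative positioning that quantitative data needs in order to carry meaning.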