(or, why reading the biz press gives me a headache)
Men had been thought to make up 34.7 percent of the soft-rock audience, according to Arbitron Radio Today 2008, based largely on paper entries. This month, Research Director and the publication Inside Radio released their analysis of meter-only cities from July through October, showing men make up 40.1 percent of the total light-rock audience, a jump of 16 percent.
The difference between 40.1 percent and 34.7 percent is 5.4 percentage points, not 16 percent; the reporter’s 16 percent is the relative change (5.4 divided by 34.7 is roughly 16 percent). It was this ‘math error’* that jarred my otherwise passive reading. And I suddenly had questions.
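For the record, both readings of the numbers check out arithmetically; the problem is that the story never says which one it means. A quick sketch, using only the figures quoted above:

```python
# Reported shares of the soft/light-rock audience that is male
before = 34.7  # Arbitron Radio Today 2008 (diary-based)
after = 40.1   # Research Director / Inside Radio meter-only analysis

# Absolute change: difference in percentage points
point_change = after - before  # 5.4 points

# Relative change: percent growth of the original share
relative_change = (after - before) / before * 100  # ~15.6, rounds to 16

print(f"{point_change:.1f} percentage points; "
      f"{relative_change:.1f} percent relative change")
```

Both numbers are “correct”; calling either one simply “a jump of 16 percent” without a unit is what invites the confusion.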
Are we comparing apples-to-apples? In other words, do the two numbers represent the same geographic area (markets) and reasonably similar sample sizes? That’s the implication, but it’s false. Arbitron measures 98 markets, according to the 2008 report (pdf) linked in the story. It looks like there were 31 markets with meters in 2009, but that’s my deduction, not a stated fact.
I’m well over halfway through the article at this point. Had this fact been mentioned sooner, I probably wouldn’t have given the story the time of day. Nor would I have tweeted it.
What other questions are unanswered?
- The 2008 Arbitron report (2007 data) is based “largely on paper entries.” What does that mean? A look at the report (not the NY Times news story, the 100-page Arbitron study) reveals that there were 400,000 diaries in 98 markets and 200,000 interviews in 75 markets. Are the data given equal weight? Arbitron doesn’t say. Did the diaries really over-report, or was it the interviews (with another human) that prompted subjects to give answers they thought would reflect more positively on themselves? Arbitron doesn’t say. And the 2008 Arbitron report does not mention the two pilot metered markets (Philadelphia and Houston).
- The markets are characterized differently in the two reports: soft-rock (Arbitron 2008) versus light-rock (meter studies, no public report). Does the choice of label affect diary and interview results? We don’t know.
- How many people are in the metered studies? The reporter doesn’t say, but she does include this “caution” from Michael Harrison when discussing the talk radio audience: “the sample size in markets using meters was relatively small.” This one quote moves the entire article into “possibly interesting but too many unknowns.”
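Harrison’s caution is easy to quantify, even roughly. Here is an illustrative sketch of mine (the article gives no sample sizes, so the panel sizes below are hypothetical) of the 95 percent margin of error for an estimated audience share:

```python
import math

def margin_of_error(share_pct, n, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a
    share estimated from a simple random sample of n respondents."""
    p = share_pct / 100
    return z * math.sqrt(p * (1 - p) / n) * 100

# The 40.1 percent male share under two hypothetical sample sizes
print(margin_of_error(40.1, 400))    # a small metered panel
print(margin_of_error(40.1, 10000))  # a large diary sample
```

With a panel of a few hundred, the margin of error is close to ±5 points, so the entire 5.4-point jump could sit inside the statistical noise; with tens of thousands of diaries it could not. That is exactly why the unanswered sample-size question matters.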
Is there a lesson for news organizations here?
Maybe, just maybe, it would have made more sense for the reporter to spend a bit more time thinking about this story. I can see several possible outcomes:
- a blog post with a short summary and assessment (interesting but not enough data to make million-dollar ad decisions) along with links to all relevant source documents or
- a much deeper story focusing on the real issue: how to finance legacy media institutions in a world where behavior is becoming much easier to track — you know, reporting as more than stenography or
- a much deeper story focusing on the exploding mass of behavioral data, voluntary or otherwise (think cookies and tracking), available to marketers today or
- a much deeper story about the psychology and limitations of self-reported data (a story that probably would not have been in the business section) which could be extrapolated to social issues and politics
Any one of these paths would have provided more value for the reader than this article. The three deeper stories are unique (can’t be found elsewhere) and, thus, can be monetized (I hate that word). A combination of blog post and one (or more) of the deeper stories would demonstrate that media organizations are beginning to understand how their “product” should be changing.
* Once upon a time I could explain why calculating a percentage change of two percentages is sloppy math (i.e., ‘bad’). But my brain is on strike today.