There's been strong reaction to radio station Triple J's Hottest 100 of All Time, as blogged about here previously.
That reaction hasn't been uniformly critical. In fact, there's been a considerable backlash, much of it based on the erroneous belief that if a poll is large enough - and this one had 500,000 entries - it must be right.
Wrong.
Any poll is only ever as good as the sample, the questions asked and how the results are gathered and analysed.
And the findings of this one don't have what researchers call 'face validity'. The problem isn't with any of the individual songs on the final list - each no doubt has its proponents. It's the big-picture statistics that don't lie. When even Triple J announcers and fans are surprised and dismayed that not one of the supposed 100 greatest songs of all time is by a female artist, it suggests some pretty significant errors.
"Errors?!!" I hear you say. "But it can't be wrong - it's a poll. It's about people's opinions, so it must be right!"
People's opinions are never wrong. Absolutely never. But opinion polls often have errors that render their findings wrong. I'm using the word "error" here in the sense used by researchers and statisticians to describe problems in research design, analysis and interpretation. Let me explain.
If we accept that a TRUE Hottest 100 of all time exists out there in the minds of Triple J listeners, then the idea of the poll is - within practical limitations - to capture that collective mindset with an acceptable level of accuracy.
"Error" refers to any problem with the methodology that could contribute to the end result of the poll not being a reasonable reflection of what's actually in the collective mindset of our population of interest.
First and foremost is "sampling error". In any voluntary poll, the findings only represent those who actually vote, and the people most likely to vote are those who are particularly passionate about the cause (in this case, a particular song or artist). In other words, voters aren't representative of the whole population - statisticians call this a self-selected, biased sample. But people who didn't vote can't complain; they only have themselves to blame.
In any case, what we have read about the scale of the Triple J poll (some 500,000 votes) and its spread of ages and genders means you'd be hard-pressed to blame sampling error for the complete absence of female artists in the Hottest 100. So it's back to the methodology...
The next type of error is to do with the survey itself. We know what we were looking to find, but did we ask the right questions?
Triple J could have asked every voter to nominate their top 100 songs in order, then counted every vote with a weighting applied based on that order (one possible scheme is sketched below).
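To make that concrete, here's a minimal sketch (in Python) of one way such a weighted count could work - a Borda-style positional count where a song ranked r-th on a 100-song ballot earns 101 - r points. The scheme itself is my assumption; the poll could use any weighting that rewards higher ranks, and the example ballots are entirely hypothetical.

```python
from collections import Counter

def weighted_tally(ballots, ballot_size=100):
    """Borda-style positional count: rank 1 earns `ballot_size` points,
    rank 2 earns `ballot_size - 1`, and so on down the ballot."""
    scores = Counter()
    for ballot in ballots:                       # each ballot lists songs, best first
        for rank, song in enumerate(ballot, start=1):
            scores[song] += ballot_size - rank + 1
    return scores.most_common()                  # highest score first

# Hypothetical three-voter example (short ballots for readability)
ballots = [
    ["Smells Like Teen Spirit", "Respect", "Chop Suey!"],
    ["Respect", "Smells Like Teen Spirit"],
    ["Respect", "Chop Suey!"],
]
for song, score in weighted_tally(ballots):
    print(f"{score:4d}  {song}")
```

Under a scheme like this, a song that appears on many ballots in the middle of the order can still outrank a song that a smaller group puts at number one.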
But that's not what happened. In fact, Triple J asked listeners to nominate only their top 10 greatest songs of all time. You can well appreciate why Triple J would do this for practical reasons, but it introduces some significant sources of error.
Firstly, there's what I will call the Tenacious D effect. The instructions and the task are likely to have suggested to many people (consciously or unconsciously) that if they had to choose the 10 "greatest songs in the world" then these must be truly "awesome" songs.
Not surprisingly, the final list of the Hottest 100 was heavy on anthemic, epic, deep-and-meaningful power ballads - the kind of thing that gets played at funerals (yes, even Heath Ledger's). There are very few "feel-good" dancefloor-fillers. And it appears to have helped a song considerably if the artist died in tragic circumstances.
Secondly, compiling a Top 100 out of thousands of 10-song ballots introduces a very significant statistical problem. What you end up with is an aggregation of people's Top 10s - a severely truncated sample of each voter's preferences - and NOT a true list of the Hottest 100. And that produces very unrealistic results, as the following example shows - the figures are made up, but they illustrate the problem.
Let's put the Tenacious D effect aside for now and assume we asked a large group of people to list their Top 100 songs in order.
10% of people said Aretha Franklin's "Respect" was one of the 100 greatest songs of all time, but only 0.5% had it in their Top 10.
50% of people agreed that Nirvana's "Smells Like Teen Spirit" was one of the 100 greatest songs of all time, and 10% had it in their Top 10.
Around a third of our sample said they'd never even heard of "Chop Suey!" by System of a Down, but 5% said it was one of the 100 greatest songs of all time, and half of those (2.5%) had it in their Top 10.

So now we compile our list.
If we use all the votes to compile the Top 100, then Nirvana (50%) ranks above Aretha (10%), with System of a Down (5%) lower down the list.
But when we only count people's Top 10s, Nirvana still - rightly - ranks high in the Hottest 100, and System of a Down (2.5%) makes it in towards the bottom, but Aretha Franklin (0.5%) doesn't show up at all.
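To put numbers on that distortion, here's a minimal sketch (in Python) that applies the made-up percentages above to a poll of roughly 500,000 voters and tallies the same three songs both ways. The figures are the illustrative ones from this example, not real Triple J data.

```python
N = 500_000  # roughly the reported number of votes

# (share naming the song somewhere in their Top 100,
#  share placing it in their Top 10) - made-up figures from the example
songs = {
    "Respect (Aretha Franklin)":           (0.10, 0.005),
    "Smells Like Teen Spirit (Nirvana)":   (0.50, 0.100),
    "Chop Suey! (System of a Down)":       (0.05, 0.025),
}

def tally(which, label):
    """Rank the songs by one of the two vote shares and print the counts."""
    ranked = sorted(songs.items(), key=lambda kv: kv[1][which], reverse=True)
    print(label)
    for name, shares in ranked:
        print(f"  {int(shares[which] * N):7,} votes  {name}")

tally(0, "Counting full Top 100 ballots:")
tally(1, "Counting Top 10 ballots only:")
```

On full ballots, "Respect" comfortably outpolls "Chop Suey!" (50,000 votes to 25,000); counting only Top 10s reverses them (2,500 to 12,500). The broadly admired song vanishes, while the song with a smaller but more intense following climbs.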
Without access to any of the specific figures from the Triple J poll, the number of songs in the final Hottest 100 that could be considered relatively obscure - even for a Triple J audience - strongly suggests that these kinds of survey and statistical errors are to blame for the lack of diversity that has bothered so many people.
How can Triple J design a poll that does a better job of finding the TRUE Hottest 100 of all time? Well, talk to some market researchers and statisticians to begin with.