
9/23/2009

The Psychology of Netflix

Netflix has just awarded a $1 million prize to an international, seven-member team of engineers, mathematicians and computer scientists for solving "the Napoleon Dynamite problem."

You know how Netflix tells you, "If you liked that movie, you're sure to love this one"? That's what the prize was about.

Three years ago, Netflix launched a contest to try to find a better mathematical algorithm to predict what you like. More than fifty thousand "contestants," many of whom formed teams, tried to find a solution. People from all around the globe (186 countries!) participated.

On the face of it, it seems rather simple. People rate movies (1 to 5 stars), and from those ratings and the ratings of others, it should be simple to say what they like. But it's actually not simple at all. There are all kinds of reasons a person might like one particular movie, and just as many reasons they might love or hate movies that are very similar or entirely different. And there are all kinds of different people rating the movies. Netflix has tons of data (i.e., movie ratings), and those numbers come from "people like you" and from people who are nothing like you at all.

The problem is: how do you crunch the numbers when there are so many variables involved? How do you make sense out of all those ratings when you have so much "variance" floating out there that you can't account for? It's like the old saying, "there's no accounting for other people's tastes."

The million-dollar prize was to be awarded to the person or team who could come up with a better way to crunch the numbers; the goal was to make the predictions just 10% more accurate than they already were. That's no simple feat.
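(For the technically curious: the contest scored "accuracy" as root-mean-square error between the predicted and the actual star ratings, so "10% more accurate" meant driving that error number 10% lower. Here is a rough sketch of the arithmetic, with made-up ratings and a made-up baseline, not the real contest figures.)

    # Root-mean-square error (RMSE): the yardstick the contest used for "accuracy."
    # The ratings and the baseline number below are invented for illustration.
    import math

    actual    = [5, 3, 4, 1, 2]                # stars people actually gave
    predicted = [4.2, 3.4, 3.8, 1.9, 2.5]      # stars an algorithm guessed

    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
    baseline_rmse = 0.95                       # hypothetical score for Netflix's own system
    target = 0.90 * baseline_rmse              # "10% more accurate" = 10% lower error
    print(rmse, target, rmse <= target)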

Clive Thompson wrote a detailed article about the contest last year (November 21, 2008) in The New York Times Magazine. The contestants who got a quick jump on reaching the goal were using a statistical technique called "singular value decomposition." It wasn't a new form of math; it's a variant of an established statistical method called "factor analysis," in which a researcher takes huge chunks of information and boils them down to just a few factors.
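For readers who like to see the gist in code, here is a toy sketch of that kind of approach: give every user and every movie a short list of hidden "factor" numbers, then nudge those numbers until they reproduce the ratings you already know. The data, the number of factors and the learning settings below are all invented, and this is nothing like the prize-winning system, just the basic idea.

    # A toy latent-factor model in the spirit of the "SVD"-style methods contestants
    # used. Everything here (ratings, factor count, learning rate) is made up.
    import numpy as np

    rng = np.random.default_rng(0)

    # (user, movie, rating) triples on a 1-to-5 star scale -- hypothetical data.
    ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
    n_users, n_movies, n_factors = 3, 3, 2

    # Every user and every movie gets a small vector of hidden "factors."
    P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
    Q = 0.1 * rng.standard_normal((n_movies, n_factors))  # movie factors

    lr, reg = 0.05, 0.02   # learning rate and regularization, chosen arbitrarily
    for epoch in range(200):
        for u, m, r in ratings:
            err = r - P[u] @ Q[m]                    # how far off the current guess is
            p_u = P[u].copy()
            P[u] += lr * (err * Q[m] - reg * P[u])   # nudge the user's factors
            Q[m] += lr * (err * p_u - reg * Q[m])    # nudge the movie's factors

    # Predict a rating the user never gave: user 0 on movie 2.
    print("predicted stars:", P[0] @ Q[2])

When this kind of model works, each hidden factor tends to line up with something interpretable about the movies, which is exactly why the method is a cousin of factor analysis.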

This method is commonly used in research on human judgment and perception. When people make a choice between two things (are you going to buy the Mac or the PC?), they will typically say they had a hundred different reasons. In reality, it usually boils down to just two or three factors, or maybe even one.
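Here is a tiny, purely illustrative version of that boiling-down, using a simulated survey rather than anything Netflix collected: two hidden factors generate ten observed answers, and a factor analysis recovers two summary scores from the ten.

    # Toy factor analysis: ten noisy survey answers collapse onto two factors.
    # The data are simulated; the scikit-learn call is just one convenient way to do it.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    hidden = rng.standard_normal((500, 2))        # two "real" factors per person
    loadings = rng.standard_normal((2, 10))       # how those factors drive 10 answers
    answers = hidden @ loadings + 0.3 * rng.standard_normal((500, 10))

    fa = FactorAnalysis(n_components=2).fit(answers)
    scores = fa.transform(answers)                # 500 people, 2 recovered factors each
    print(scores.shape)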

Clive Thompson called it "the Napoleon Dynamite problem" because it turns out that this is one of the movies that make the math so hard and disrupt the whole system. "Miss Congeniality" and "Lost in Translation" are two other movies on that list. These are not movies that people simply "either love or hate." They're movies that are particularly good at stirring up all types of conflicting and maddeningly inconsistent reactions.

So what does this have to do with psychology?  I think it was said best by an AT&T scientist, Chris Volinsky, who was quoted in the New York Times article. Describing his work on the Netflix prize, he said: "we're teasing out very subtle human behaviors" while trying (in the words of Clive Thompson) "to draw exceedingly sophisticated correlations and offer incredibly nuanced recommendations."

"Teasing out very subtle human behaviors" is exactly what the science of psychology is about. And it can be argued that this contest had everything to do with understanding and predicting human behavior.

There are psychologists whose work would seem to have nothing to do with human behavior. There are psychologists who do hardly anything other than study and publish mathematical equations, and then tell the rest of us how to conduct research. More importantly, they tell us how to make sense out of the numbers we gather in our research. When I was in graduate school, I wrote a five-page essay in response to a take-home final exam question. I got an A on the paper, but I had understood the question in a completely different way than the guy who sat next to me in class. He got an A for turning in one page of math equations in response to the same question.

Psychology is a science only because we have mathematicians telling us how to make sense out of our observations. And that's the reason that in psychology programs, students have to take all of those dreaded statistics courses.

The math involved in the Netflix challenge is amazing, and for most of us, largely beyond comprehension. It is hugely important to companies like Netflix, iTunes and Amazon, or any company that uses people's preferences to sell products. If they can make relevant suggestions, they're going to sell more. That's why they do it.

What is even more amazing is that they are doing this math with data from just a single and simple "Likert-type" scale. Imagine how much more they would know if they had people rate the movies on five different variables (but then they would probably lose data, because fewer people would bother to submit ratings at all).

That could be the next challenge for a company like Netflix:  how do you get more information from just one action of the mouse? Here's my suggestion (and my official contest entry): show the movie cover along with two others and ask the rater to drag the movie to either the first, second or third position. A "paired-comparison" judgment like that could provide tons of information for the math guys to go after.
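To make the suggestion concrete, here is a rough sketch of what the math guys could do with those drag-to-position answers: treat each three-way ranking as a bundle of paired comparisons and tally which movies keep winning them. The movie titles and responses are invented, and the scoring here is the crudest version imaginable.

    # Turning drag-to-position rankings into paired-comparison tallies.
    # Responses are hypothetical; each one lists three movies in the rater's order.
    from collections import defaultdict
    from itertools import combinations

    responses = [
        ["Napoleon Dynamite", "Miss Congeniality", "Lost in Translation"],
        ["Lost in Translation", "Napoleon Dynamite", "Miss Congeniality"],
        ["Napoleon Dynamite", "Lost in Translation", "Miss Congeniality"],
    ]

    wins = defaultdict(int)
    comparisons = defaultdict(int)
    for ranking in responses:
        for higher, lower in combinations(ranking, 2):   # every pair implied by the ranking
            wins[higher] += 1
            comparisons[higher] += 1
            comparisons[lower] += 1

    # Fraction of implied paired comparisons each movie "won."
    for movie in sorted(comparisons, key=lambda m: wins[m] / comparisons[m], reverse=True):
        print(movie, round(wins[movie] / comparisons[movie], 2))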


Copyright, Paul G. Mattiuzzi, Ph.D.