“How We Decide” If a Game is a 9.5
Posted by Ben Zeigler on June 8, 2009
A few weeks ago I was listening to episode 4 of Out of the Game, and they started talking about “How We Decide” by Jonah Lehrer. At this point I realized I had been given a copy of it as a present, and given that it’s a book on psychology endorsed by Shawn Elliott, I put it at the top of my reading stack. I’m glad I did, because I quite enjoyed it. A quick summary is that it’s a more complete, better-written version of “Blink” by Malcolm Gladwell. It combines a few really good anecdotes about quick decision making (apparently, when caught in a fast-moving forest fire, the right solution is to light a SMALLER fire directly in front of you) with an overview of current research into both conscious and unconscious decision making. I recommend it to anyone with an interest in psychology.
The book builds what seems to be a really solid framework for making good decisions in a wide variety of contexts. There are two main processes we go through for making decisions: emotional and logical. They’re good at solving different kinds of problems, and are very complementary. The unconscious brain is very good at pattern matching and evaluating statistical models, and presents the results as input into your conscious brain as “gut feelings”. However, it is bad at dealing with falsehoods or irrelevant information. The conscious brain is good at simulating outcomes to solve problems and at regulating emotion, but it is very capable of thinking too much and incorrectly overriding emotional inputs. The general formula is to use your conscious brain to filter information and monitor emotional state (the best decisions get made while you are in a moderately excited state, as opposed to entirely dispassionate or enraged), and then let your unconscious brain think about it for a bit. The one that “feels right” will more often than not be the right decision.
I wrote last year about psychology and Game Reviews, and there’s a study in How We Decide that directly supports my thoughts. In 1990 Timothy Wilson put together a study comparing the ability of college students to rate jams against the ratings of jam experts from Consumer Reports. When simply asked to rate the jams, the college students showed a correlation coefficient of .55 with the experts, which is reasonably high and shows that the experts’ choices matched fairly well with the average college student’s. Then Wilson asked a different group of students to analyze why they preferred certain jams, using elaborate questionnaires and a wide variety of categories. This group of college students showed a correlation coefficient of .11, which is essentially meaningless. Further study revealed what was happening: the jam “reviewers” would try to describe individual components, such as “spreadability”, that didn’t really affect their overall enjoyment of the jam. Then, as they evaluated all of these categories, they tended to revise their preferences to match what they had written in the review. The reviewers had overthought the problem and in the process had modified their initial preferences to match their specific analysis, rather than analyzing their preferences.
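For readers unfamiliar with what those .55 and .11 figures mean, here’s a minimal sketch of how a correlation coefficient measures agreement between two sets of ratings. The jam scores below are entirely made up for illustration; they are not data from Wilson’s study.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of the two rating lists, and each list's variance.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical rankings of five jams (1 = best).
experts  = [1, 2, 3, 4, 5]
gut      = [1, 3, 2, 4, 5]   # students rating by feel: mostly agrees
analyzed = [4, 1, 5, 2, 3]   # students filling out questionnaires: scrambled

print(round(pearson(experts, gut), 2))       # → 0.9  (strong agreement)
print(round(pearson(experts, analyzed), 2))  # → -0.1 (essentially no agreement)
```

A coefficient near 1 means the two groups ranked the jams almost identically; a coefficient near 0, like the questionnaire group’s .11, means knowing one group’s ranking tells you almost nothing about the other’s.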
A similar study was performed by Wilson with paintings (a Monet, a Van Gogh, and three humorous cat posters). One group of women was simply asked to choose their favorite, and 95% chose the Monet or the Van Gogh. The second group was asked to explain why they liked the poster they chose, and that group split 50/50 between the fine art and the cat posters, because the cat posters offered more content to explain (the subjects were not trained artists). The book goes through a litany of other studies, all showing that carefully thinking through a complicated decision leads you to make poor choices.
Let’s see: these studies all involve situations with a complicated decision and the need to generate explanatory content, and in all of them, people who explained their decision before making it made worse decisions. Yeah, that sounds a lot like game reviews. If we assume that game reviewers are trained experts (most are) who have consciously trained themselves to be good judges of games, and are not influenced by extremely strong emotions at the time of rating, their initial gut assessment of a review score is likely to correlate very strongly with their actual enjoyment of a game. However, if a reviewer rationally dissects a game into components, they will likely rate the game higher or lower than it actually warrants, because it has “bad replay value” or something. So, game reviewers: stop thinking so much about the game and just pay attention to your emotional state while you’re reviewing it. You’ll make better decisions.