Reading: Cheater’s high

by Chris F

I came across this journal article while I was watching an internet show, but it’s a really interesting one. Recently (read: in the past month), an article was published in the Journal of Personality and Social Psychology presenting evidence from laboratory experiments that confirms something most of us already knew: cheating, when you’re not aware that you’re hurting another person, makes you feel good. It’s a very interesting 20-page article, and it raised some interesting points which I believe will help me with my dissertation.

The first thing I learned is that these researchers set out to study the possible benefits of unethical behaviour, and while there have been many studies in this area before, apparently only one other study actually took into account the possibility of people experiencing positive affect from behaving unethically; all of the others focused on the negative affect of behaving unethically. It was a bit disturbing to read that most of the earlier experiments on unethical behaviour had people inflict harm on other people (usually via electric shocks), and then measured how they felt afterwards. Other than one or two of the studies mentioned here, which focused on cheating organisational systems like US tax collection, nobody had thought to research why people cheat on minor stuff, like games or time sheets, and that’s what this study set out to investigate. The researchers created six experiments through which they analysed different types of stimuli and conditions in which their subjects might want to, could or had to cheat, and then measured how the subjects felt about themselves both before and after the cheating happened.

I will talk briefly about the experiments themselves in a bit (I’m not too interested in all the math-statistic-y stuff), but I found the introductory section very interesting, as well as the possibilities that arose in the wake of my reading it.

Another revealing concept I found in this article is the existence of two states of the human mind: the “should” state and the “want” state. The “should” state is the idealistic, future-looking persona we have, the one that looks ahead and decides what it would like; the “want” state is conditioned by much more immediate factors and tends to be impulsive. The authors illustrate this by saying that the “should” state starts you on a diet, but the “want” state is the one that eats a cake on the first day of the diet because it looked delicious.

Related to the two states, an interesting observation is made: “if an individual is not attentive to long-term consequences, unethical acts may fail to induce negative affect in the moment in which an individual makes a decision”, which I’m pretty sure is something that people who run gambling and scamming operations are intrinsically familiar with when it comes to their customers. Make them think in the present, never further than that next hand, keep their eyes on the prize, and they’ll keep playing. This observation follows from prior work cited by the authors showing that, if a person believes something will cause them negative affect, they are less likely to do it.

Coming back to the reason I find this article so interesting: it mentions (backed up by previous studies) that one of the reasons people might cheat is to gain a “sense of greater autonomy and influence”, to “circumvent rules by which others are bound” (my emphasis) or to “take advantage of other people’s decisions by manipulating the information by which they are making those decisions”. All of these seem to me to be reasons why people cheat in games and deceive others. As animals, we want to win. As humans, we want to feel special and superior, and of course we will feel better if we believe we are better than other humans because we know something they don’t know, especially if it’s something that circumvents the rules by which we are all bound, making us freer or giving us an edge, even if it’s just imaginary or something minuscule. We want to feel special, dammit, and if I can save 2p when I’m shopping for £50 worth of stuff, then I’ll do it! It’s one of the psychological strategies that marketing people use.

A nice story the article tells is that of a cashier who “consistently embezzled from her register. She said that she did it because it gave her new goals and a sense of challenge”. I love this, as it shows that when people are faced with what seem like boring jobs, they tend to make games out of them. We could take this a step further and say that she was a game designer without knowing it, really. I’ve seen this done time and time again: people creating games and making their own rules in order to find something challenging in a really mundane job or situation, like staff meetings, or a lengthy speech by the school leader during which you place bets on whether she’s going to mess up the name of the new member of staff. All of these are small games that we set up for ourselves, and as for the cashier, we could say that she was playing with high stakes, trying to game the system. She knew that what she did was not allowed, that it wasn’t “inside the rules”, but she did it anyway because she found it fun. This is why I’m a bit taken aback that before 2012 nobody thought that people cheat for fun.

One of the authors cited in this study mentions that “the euphoria of getting away with [cheating] overshadows the material gain”. This quote is very interesting because it ties in with one of the articles I have mentioned before, the one about crowding motivation in and out. Combined with that article’s point that the most work gets done in workplaces where the rules are lax, it seems to me that if you create a system in which people are allowed to cheat and get away with it, but without gaining an overpowered advantage, you might end up getting more out of your employees/players/users than if you enforce very strict rules. People don’t like always being told what to do and, as I mentioned in the paragraph above, when we’re given rules, we try to find ways around them. If the rules are only guidelines, or don’t exist at all, there’s nothing left to do but focus on our actual work or game. To summarise: “let them cheat, if you’re not losing much”.

The next 10 or so pages go through the six experiments the researchers created and put people through in order to demonstrate their theory. They use a lot of maths and statistics, PANAS and ANOVA tests, and other things I don’t really find that interesting, in order to measure how people feel before and after cheating. What I did find really interesting about their methods, though, was how they created the tests. They started with a simple template test and then iterated. Each time they reran an experiment, they identified a risk in the results of the previous one, isolated why that risk existed in the experiment mechanics, and then changed it, hoping for more accurate results, which is exactly what we do when we create games and iterate on them. They played with the concept of cheating, with taking away the choice of cheating from the user, with working with and without financial incentives, and a few other factors. Going through the experiments chronologically felt like reading a game design document.
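To give a rough idea of what that before/after comparison might look like, here is a minimal sketch in Python. To be clear, this is not the authors’ actual analysis: the numbers are made up and the grouping is simplified to just “cheated” vs. “didn’t cheat”, but it shows the general shape of comparing the change in a PANAS-style positive-affect score between two groups.

```python
# Minimal sketch of a before/after affect comparison.
# Hypothetical data, not the study's actual numbers or analysis.
from scipy import stats

# PANAS-style positive-affect scores, measured before and after the task.
cheaters_pre  = [28, 31, 25, 30, 27, 29]
cheaters_post = [33, 35, 30, 34, 31, 36]
honest_pre    = [29, 30, 26, 31, 28, 27]
honest_post   = [30, 29, 27, 31, 29, 28]

# Change in positive affect for each participant.
cheater_delta = [post - pre for pre, post in zip(cheaters_pre, cheaters_post)]
honest_delta  = [post - pre for pre, post in zip(honest_pre, honest_post)]

# One-way ANOVA on the affect change between the two groups
# (with only two groups this is equivalent to a t-test).
f_stat, p_value = stats.f_oneway(cheater_delta, honest_delta)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

If the p-value comes out small, the two groups’ change in affect differs by more than chance would suggest, which is the kind of effect the paper is hunting for; the real experiments obviously use many more participants and extra controls.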

Another interesting concept I came across when reading through the experiments was that, after the first experiment, the researchers thought that maybe the non-cheating people felt better about themselves because they chose not to cheat, after seeing almost identical results in self-perception from both cheaters and non-cheaters. I was a bit dazed by this because I had not taken it into account: it’s a user’s decision whether they cheat or not, and yes, maybe some of them will feel better for choosing not to cheat. It is, after all, a meaningful decision, and the players will be affected by it. In order to take this decision away from the participants and better isolate the “cheater’s high”, the researchers iterated the experiment into a version where the subjects couldn’t choose whether to cheat: someone else reported inflated scores for a test they took. This way, the choice was taken away from all the subjects, and some of them cheated by default, because the experiment demanded it. The people behind the study still got inconclusive results, but attributed them to the possibility that the “cheating” group felt better because they believed they had outsmarted the person who handed the researchers the inflated results.

In the end, after six experiments, the researchers found that there is an element of positive affect when a person cheats, but concluded that more experimentation needs to be done in order to see if it can be better isolated and to understand how it applies in different situations. One particular situation I find intriguing is whether cheating as a group has a bigger effect on people, because the possible guilt would be spread among all the other “accomplices” instead of resting on one person.

All in all, one of the best reads I’ve had so far, and one that opens up a whole lot of possibilities for my dissertation, especially since it seriously made me think about whether I should leave some loopholes open for the players to find and exploit, to make them feel better. You have to understand that in a crowd-sourced game it’s hard to isolate someone who cheats lightly, and, as I mentioned in the Towns Conquer post, if you create a leaderboard or any other way for people to show that they are better than others in a project like OpenStreetMap, people might just add random, inaccurate information to the system just to gain points, which is unacceptable in such a situation.

Chris F.

Bibliography

Ruedy, N., Moore, C., Gino, F. and Schweitzer, M. (2013) ‘The Cheater’s High: The Unexpected Affective Benefits of Unethical Behavior’. Journal of Personality and Social Psychology, 105 (4), pp. 531–548. DOI: 10.1037/a0034231. [Available at: http://www.apa.org/pubs/journals/releases/psp-a0034231.pdf Last accessed: 24th October 2013]