Most of the time, when someone cites a video game study, they don’t give you any details about the methodology, which always sets off my skepticism alarms. Typically, these studies “discover” that playing violent video games may be tied to short-term aggressive feelings. … which should seem like a no-brainer to anyone who’s had the experience of losing a game of Tetris because the dang computer wouldn’t give them a red piece!
However, I remember once running across an article about a study that found no correlation between video games and short-term aggressive feelings, and it actually linked the official documentation of the study. Excited to finally have “proof” that my industry wasn’t the cause of the world’s ills, I clicked through and read the study.
Do you know what violent game they were testing? World of Warcraft.
Now, I’d like to give these scientists the benefit of the doubt. I’m sure they weren’t using the most low-impact, relaxing, cartoony, inoffensive “violent” game in the industry in an attempt to put their finger on the scale and tip the results in the direction that they wanted them to go. That would be disrespectful.
(Yes, angry WoW players, I know that World of Warcraft gets very challenging and intense, especially in the endgame … but this was a study in which people came in and started fresh games, playing for only an hour or two. They were stuck in the parts of the game that you blast through in your sleep just to try out a new alt.)
But this got me thinking … are all the studies this poorly constructed? Are the researchers so ignorant about video games and how they work that they are repeatedly choosing specific games that tilt their results in random directions, without controlling for the peculiarities of those games?
The essay, reproduced in full, then goes on to lay out a series of steps for designing a range of video game studies, and makes a point at the end. I think that point works better as a rhetorical trick than taken straight, but the rest is good!