Mary Russell

Today, a few brief words of advice about playtesting.

Playtesting is of course the scalding hot viscera that brings these little Frankenstein monsters to life, and as might be expected from my choice of metaphor, it can be a messy, dangerous, and exhausting process. There are oodles and oodles of advice out there about how to choose testers, how to conduct tests, and what to ask, and much of it is contradictory. Really, you just have to find the method, or combination of methods, that works best for you.

But whichever method you choose - however you go about it - the one thing I would implore you to do is keep the ruleset stable.

"But Tom," you say, and I'm not sure why you're saying it out loud, because you're talking to a screen and I'm pretty sure right now that I can't actually hear you, but let's keep going, "But Tom, isn't the whole point of playtesting to gather feedback, see what works and what doesn't, and to make changes accordingly?"

Yes indeed. But you don't need to make changes every couple of days, tweaking here and there. For playtesters, that's the death of a thousand cuts. Having to print new components today, learn new rules tomorrow, and find out the day after that they had been playing with a rule that had changed but that they had forgotten - all of that very quickly leads to burnout and exhaustion. Playtesters are, after all, human beings, and we human beings are fragile, temperamental things: you need to handle them gently.

I once read something written by a gamer who has done playtesting for several titles and publishers, in which he explained that every time he sat down to playtest a game - every time - he and his entire group re-read the rules from scratch. That works for him and his group, I suppose, and more power to them, but to me nothing seems more likely to lead to multiple homicides. No jury would convict me.

That gamer recommended that all playtesters do the same, which perversely blames exhausted, burnt-out playtesters for lacking the stamina and discipline to deal with the grind. I think that's completely backward: there's no reason to put playtesters through the wringer in the first place. Give them a stable ruleset, and keep it stable for a good long while as you collect data.

Once you have good, substantial data, then you make changes, rolling them out all in one go. We did this recently with Brad Smith's NATO Air Commander, keeping the ruleset stable during the first phase of testing. After several weeks, we started working with Brad on making changes, and after we had implemented them, we rolled the whole thing out - new rulebook, new VASSAL mod - as "Phase 2". After several more weeks, we'll see what's what, and make whatever changes might be necessary for the next go-around.

The playtesters are happier, more productive, and less likely to tap out. More than that, I am convinced that it results in a better product overall, because the designer and developer are waiting until they have good, solid intel to inform their decision-making rather than rushing in and flailing about, desperately trying to course-correct without having access to the big picture.

Sometimes this means that the version of the ruleset the testers are working with is flawed, perhaps fundamentally so. This doesn't happen often - for me and mine, we don't roll out a game to external testers until we've done quite a bit of work on it internally - but it can happen. This is why we playtest, after all. The best bet in that situation is to acknowledge the discovery and announce your intention to fix it after the current phase of testing. In this way you can continue to receive data from testers on other aspects of the game - making the eventual overhaul more strategic and less tactical in nature - as well as getting further impressions of why something doesn't work, and ideas as to how it might be fixed.
