Reflection on Interactive Fiction (Future Business)

Richard Verdin
Serious Games: 377G
3 min read · May 20, 2019

The Interactive Fiction project for CS377G was an extremely impactful one. From the get-go, I was excited about this project because we were asked to situate our game in a future dystopia that frightens us. Before the assignment, I had spent a great deal of time thinking about the increasing power of Artificial Intelligence and the dangers it could bring in the future. As such, this game presented the opportunity to integrate what I learned in the classroom with my personal interests and fears.

Once I knew that Artificial Intelligence would be the focus of my game’s conflict, I needed to pin down the specific fears I wanted to showcase. After much thought, I decided that the biggest fear was the inability to tell Artificial Intelligence apart from humans, especially when it is used for malicious purposes behind very clever disguises.

With this in mind, I needed a setting and a game objective that were conducive to delivering that message. After iterating through various concepts, I decided that high-stakes, person-to-person interactions would be the most effective vehicle. I therefore centered the story on a management consultant who is constantly meeting new clients — some robots, some humans. The objective of the game is to use context clues to identify the evil robots; failing to catch one drains your bank account, since you end up working with a malicious entity. To complicate matters further, identifying a good robot or person as evil also drains your bank account (the cost of a false report).
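To make that double-sided penalty concrete, here is a minimal sketch of how one such encounter might look in Twine, the tool the game was built in (Twee notation with Harlowe macros). The passage names, the client, and the dollar amounts are my own illustrative placeholders, and $balance and $calderIsEvil would be initialized in a startup passage:

```
:: Meet The Client
A new client, "Mr. Calder," urges you to move his funds offshore tonight.

[[Report him as a malicious AI->Report Calder]]
[[Take the contract->Work With Calder]]

:: Report Calder
(if: $calderIsEvil)[
  Good call. "Mr. Calder" was a disguised AI; your firm is safe.
](else:)[
  False report. He was human, and the fine for a wrongful accusation
  comes out of your account. (set: $balance to it - 5000)
]

:: Work With Calder
(if: $calderIsEvil)[
  The "client" quietly drains your accounts. (set: $balance to it - 20000)
](else:)[
  The engagement goes smoothly and you collect your fee.
  (set: $balance to it + 10000)
]
```

Because both kinds of wrong answer cost money, guessing is never free — which is exactly the pressure the design needed.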

From there, I first created a game with multiple levels of identification. My first series of playtests revealed that my levels were far too simplistic: players quickly cracked all of my cases, and there was little fun involved. In my second iteration of playtests, I increased the difficulty of the game, but I soon realized that the stakes were not high enough.

More specifically, in the first two iterations of the game, a wrong identification of an evil robot simply had the player restart the round. This, in turn, led players to guess randomly until they made their way through to the end.

As such, I learned that different game mechanics and consequences work better for some games than for others. While I have enjoyed many games that have you restart the level upon failure, that approach did not work for my game. Instead, I introduced the concept of “lives” and let players fail a set number of times before permanently losing the game (and then optionally restarting). This change turned out to be extremely effective: players carefully pondered their choices because they were afraid of losing for good.
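As a rough illustration of how such a lives mechanic can be wired up in Harlowe (the variable names and the limit of three are placeholders of mine, not the game’s actual values):

```
:: Startup [startup]
(set: $lives to 3)

:: Wrong Identification
(set: $lives to it - 1)
(if: $lives is 0)[
  You are out of chances. [[The firm collapses->Game Over]]
](else:)[
  That identification was wrong. You have $lives lives left.
  [[On to the next client->Next Case]]
]
```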

Furthermore, raising the game from very easy to genuinely difficult took a great deal of planning. Whereas I began with multiple easy levels, I pivoted to building one hard-to-crack level — a game slice rather than a full experience with many levels. This paid off: focusing on a single level let me enumerate far more possibilities and therefore increase its complexity and difficulty.

These changes ultimately made the game effective at getting people to care and think about Artificial Intelligence, and about the need to overtly label when AI is used in a product or service. This game taught me that choices rapidly lead to different outcomes, and that playtesting is crucial, since previously proven game-design methods do not apply to every system.

If I could do this experience over, I would use more of Twine’s advanced features to further immerse the player in the world. Furthermore, I would have begun with the goal of creating one complex level rather than enumerating levels for the sake of having multiple.
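For instance — a hypothetical sketch, not something the game actually did — Harlowe’s (live:) macro could put the player on a real clock while they weigh a verdict:

```
:: Timed Decision
(set: $secondsLeft to 10)
The client is on the line, waiting for your verdict.

(live: 1s)[
  (set: $secondsLeft to it - 1)
  Time remaining: $secondsLeft seconds.
  (if: $secondsLeft is 0)[(go-to: "Too Slow")]
]

[[He is an AI->Report Him]]
[[He is human->Take The Job]]
```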
