MMOs vs Free Software
How the “Gamer’s Paradox” prevents online games from being truly free.
For many years, I have been a fan of Massively Multiplayer Online Role-Playing Games (MMORPGs). I’ve also been a proponent of free and open source software (FOSS), sometimes called software libre. Something I have often wanted is a way to combine these two interests: to create open-source software for massively multiplayer games.
Unfortunately, I don’t believe that this is possible. There is a deep-rooted conflict between the principles of free software and the fundamental nature of multi-player online games.
The basic problem, in a word, is cheating.
The value of a game is that it offers challenges to the player — it makes things difficult by design, so that we can enjoy overcoming those difficulties. To that end, games impose artificial constraints — often referred to as game rules — on the player’s behavior. Cheating occurs when a player removes or bypasses those rules, in a way not intended by the game designer, such that the game becomes easier and the challenge is minimized or removed entirely.
(This is not the same as “cheat codes” which are intentional Easter eggs embedded into the game by the designers.)
A player who cheats not only does a disservice to the other players; they also do a disservice to themselves, by making the game less fun. As I often say, “it wouldn’t be much of a soccer game if you could simply hold up the ball and say, ‘I win’.”
In commercial online games, cheating also harms the game creator, because players won’t pay money for a game which they believe has a non-level playing field. If customers think that some players have an unfair advantage, they won’t become emotionally invested in the game, which significantly reduces its popularity.
That being said, figuring out a clever cheat can be a fun exercise. Unfortunately for the cheater, that enjoyment is generally not sustainable — you can only figure it out once. Beyond that, once you have access to a working cheat, it requires a lot of self-discipline to give it up and go back to the non-cheating way of playing the game. This means that unless a game is poorly designed, the reduced level of challenge will result in a permanently worse experience.
Some people also gain another kind of satisfaction from keeping their unfair advantage hidden from other players — they get a feeling of superiority and power from knowing something that the other players don’t. I can’t speculate much about this since I have never felt that way, and I suspect most people don’t either. I guess I would say that I would not want to tie my sense of self-worth to something as fragile as a secret which could be discovered; I would rather take pride in personal attributes that are more robust and permanent.
Often the line between cheating and mere clever strategy is not so clear. In many cases it requires a judgment call on the part of a designer or referee to decide whether a strategy trivializes the level of challenge. A skilled designer will carefully formulate the game rules to prevent this from happening.
Despite all this, people do cheat. It’s not even all that rare. Why?
Part of the reason is due to what I call the Gamer’s Paradox: the fact that game players have two different, conflicting sets of goals. In a sense, every game player is effectively two different people: the person who wants a challenge and the person who wants to win.
The person who wants a challenge enjoys the struggle of overcoming difficult obstacles. They want the game to be hard, but not so hard as to be impossible at their level of skill.
The person who wants to win is focused on one goal: winning. That means removing every obstacle, every challenge, by whatever means necessary.
The drive to win is very powerful. As primates, we have a strong competitive instinct. This impels us to behave in ways that we might not otherwise find sensible, including taking actions which hinder our own long-term goals — such as having fun.
The problem of cheating is especially acute in massively multiplayer games. One reason is that people are more likely to cheat strangers than close friends. For example, it’s much less common for people to cheat in LAN-based games, where there are a small number of players who are in a pre-existing trust relationship with each other. Another reason is that MMOs often have complex economies that spill over into the real world, creating monetary incentives to cheat.
Most real-world games have built-in enforcement mechanisms designed to prevent violations of the rules. For example, professional sports have referees. A football player who is running towards the goal line may step very close to the edge of the playing field while trying to avoid opposing players (in fact there are often strong incentives to come as close to that line as possible), but if they step across that line they will be declared out of bounds.
Computer games also have enforcement mechanisms. The type of enforcement depends on the nature of the cheat. Game cheats fall into three basic categories: forbidden actions, forbidden knowledge, and augmented skill.
Forbidden actions are the most basic and least subtle form of cheat: they allow the player to modify the game world, or their own status in that world, in a way which would normally be disallowed by the rules. An example of this would be a cheat that allows the player to run twice as fast as other players. These kinds of cheats are easily countered for the most part — the game merely has to include code that detects or prevents such modifications.
Forbidden knowledge is more subtle: these cheats allow the player access to information that the game would normally not allow. An example of this would be giving the player the ability to look through walls and see where the other players are hiding. Unfortunately, the game cannot know whether the player can “see” any particular information or not. Instead, the remedy for this type of cheat is to store this information in a way that is not easily accessible to the player in the first place.
Augmented skill is the most subtle and most difficult cheat to counter. With this type of cheat, the player uses custom software aids that allow them to play at an inhuman level of competence. An example of this is “aim bots”, which automatically target other players in a first-person shooter. The reason why this cheat is hard to counter is that the player is not technically violating the formal rules of the game; it’s hard to algorithmically tell the difference between an aim bot and a player that is just extremely skilled. Remedies for this type of cheat often depend on statistical analysis — detecting that a player has a highly unlikely winning streak — along with human judgment (“I know it when I see it”).
In any system of rules or laws (either in the real world or a virtual one), it is not always possible to detect violations reliably. It would be simple if we could just put up an impenetrable wall that makes it impossible to go beyond the bounds of what is allowed, but this only works in very limited circumstances.
In the criminal justice system, there are many types of crime in which only a small percentage of perpetrators ever get caught — this includes everything from running a red light to embezzlement. In such cases, a modest penalty is insufficient; the punishment has to be especially harsh in order to act as a deterrent. If a company saves money by dumping waste into a nearby river, and there’s only a small chance that they will get caught, the fine has to be much more than the amount of money they save, otherwise they will simply write it off as the cost of doing business.
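To put rough numbers on it (a back-of-the-envelope formulation, nothing rigorous): if p is the probability of being caught and S is the amount saved by breaking the rules, then a fine F deters a rational actor only when the expected penalty exceeds the savings:

p × F > S, which means F > S / p

So with a 1% chance of being caught, the fine has to be more than 100 times the savings.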
It is important not to overgeneralize here — humans are not always rational, and draconian punishments are not always the most effective way to deter undesirable behavior.
In the case of online games, however, the game host has limited options for punishing players. In most cases, the worst they can do is to revoke the user’s access to the game and delete all of their accumulated game data.
At the same time, the incentives for cheating are not infinite. Even though a cheat might be technically possible, it may be sufficient protection to ensure that the cheat is simply too much effort to be worthwhile. It’s not like banking, where you have to be mathematically certain that theft cannot happen.
So what does all of this have to do with free software?
In order to prevent cheating, commercial online games rely on the fact that game programs are difficult to modify. This is a consequence of the fact that modern compilers translate source code into highly optimized machine code that is obfuscated and difficult to analyze.
But the whole point of free software — the reason free software was created in the first place — was to make it possible for users to read, understand and modify programs running on their computers. There’s a moral principle at work here: if you own a computer, you should be able to decide what software runs on that computer.
A game program which is open source can easily be modified by the player, making it possible for them to alter the game rules. Open source makes cheating trivially easy for even a novice programmer. This isn’t so much of a problem for single-player open-source games (of which there are many), since the incentives for cheating are lower. But it is a significant problem for large-scale online games, especially games that feature immersive multi-player environments.
I once posed this question directly to Richard Stallman, the father of free software, at a technical conference. He didn’t have a good answer for me. From my recollection of our conversation, I suspect that he would consider “freedom” to be more important than “fun” — that is, he would likely judge that if online games can’t be free software, then online games should not exist.
(It’s also interesting to think about what would happen if there is some future technical breakthrough in decompiling software back to its original source code. At that point all software effectively becomes modifiable by the user.)
This situation is somewhat more complicated because most online games are client/server, which means that only a portion of the game is running on the user’s own computer. However, before I get into this, I want to talk about an alternative model, which is peer-to-peer online games.
During my career as a game developer, I have several times been approached by various enthusiastic young coders who had spent a lot of effort designing an architecture for a massive, peer-based virtual world. In this architecture, each player would be responsible for hosting a portion of the game world on their own computer, which they could author and control. Other players could ‘visit’ those parts of the world by making a network connection to that computer. Much of the beauty of these systems was in the elegant user interface or coding language for authoring these experiences.
In this model, there was no central server, no authoritative source of truth. The proponents of these systems felt that a central server was both undemocratic and a performance bottleneck. They were philosophically committed to the principle of decentralization, which in most cases is a good thing. Unfortunately they did not account for the fact that games are a special case, because of the Gamer’s Paradox.
Although these systems were often very clever in their design, they all came up short when it came to preventing players from behaving badly. When I would ask the various proponents of these proposals how they were going to handle the problem of cheating, the usual response was to try to rationalize the problem away, to dismiss the issue entirely — to either claim that people won’t cheat (they will), or that cheating doesn’t matter (it does). After all, if they were to take my question seriously, it would mean admitting that their entire vision had a fatal flaw.
Probably the best answer in peer-based architectures is to simply give up on the idea of making it a game, and make it a purely social experience. If there are no challenges and nothing to win, then there’s no reason to cheat. (Of course, there are other kinds of attacks and exploits which can be targeted at a social online system, but these are of a different nature and are not technically “cheats”.)
However, competition is an important aspect of human behavior. A world in which we simply give up on providing competitive experiences would be a much poorer one!
The principle of free software allows you to modify the programs running on your own computer, but that does not mean that it gives you the ability to modify programs running on someone else’s computer — even if the source code to those programs is available.
Thus, it is possible to have a game server that is controlled by someone other than the player. If that someone is trustworthy and committed to providing a challenging experience for their users, then opportunities for cheating are reduced, although not eliminated.
Most online multiplayer games use a client/server architecture. The server functions as a referee, coordinating the actions of the players and deciding which game actions are allowed. The client program is responsible for accepting player input, transmitting that information to the server, receiving updates on changes to the game world, and rendering that experience on the player’s screen.
There is also an important third component: the network protocol which connects the client and the server. As we will see, a lot depends on whether this protocol is open or obfuscated.
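To make this concrete, here is a toy sketch of what that traffic might look like — the message shapes and field names are invented for illustration, not taken from any real game:

```python
# Hypothetical client -> server message: the player's claimed action.
move_command = {
    "type": "MOVE",
    "player_id": 1042,
    "position": [103.2, 0.0, 87.5],   # where the client says the player now is
    "timestamp": 1699991234.150,
}

# Hypothetical server -> client message: the referee's verdict,
# broadcast so that every client can redraw the world.
world_update = {
    "type": "UPDATE",
    "entities": [
        {"id": 1042, "position": [103.2, 0.0, 87.5]},
        {"id": 2213, "position": [110.0, 0.0, 90.1]},
    ],
}
```

If this protocol is published — or easily reverse-engineered — then anything can produce these messages, a point that will matter later.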
Because the central server is handling thousands of players simultaneously, it cannot afford to dedicate too many computing resources (CPU, memory) to any one player. On the other hand, the client program is only responsible for the experience of a single user. This means that all of the resources of the machine can be dedicated to providing the best experience possible for that user. As a result, many expensive computations are off-loaded to the client in order to maximize performance.
The client therefore becomes a point of vulnerability. Even though the player cannot modify the code on the server, they can theoretically modify the code on the client, which in turn can affect the server’s calculations.
Let’s examine the three kinds of cheats mentioned earlier with respect to a modified client program.
Forbidden actions. The player could modify the client to allow them to make changes to the world that are not normally allowed by the game rules. To prevent this, the server would have to validate all of the data coming from the client.
For example, in the case where the user has given themselves the ability to run twice as fast as other players, the server would need to measure the distance between successive position updates and compare the implied velocity against the maximum the rules allow. This would need to take into account any buffs, enchantments, or other character abilities that would normally increase the player’s movement speed. And it would have to accommodate the fact that this data is often “noisy” and non-deterministic.
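A minimal sketch of such a server-side check, with invented constants and a hypothetical helper for buff handling:

```python
import math

BASE_SPEED = 7.0    # maximum unbuffed run speed, meters/second (illustrative)
TOLERANCE = 1.25    # slack for network jitter and client-side prediction

def speed_multiplier(player):
    """Combined effect of buffs, enchantments, mounts, etc.
    (Hypothetical helper -- a real server would derive this from
    the player's active effects.)"""
    return 1.0

def validate_move(player, old_pos, new_pos, dt):
    """Reject position updates that imply an impossible velocity."""
    distance = math.dist(old_pos, new_pos)
    allowed = BASE_SPEED * speed_multiplier(player) * dt * TOLERANCE
    if distance <= allowed:
        return new_pos    # plausible: accept the client's claim
    return old_pos        # implausible: snap the player back
```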
In some cases, the data coming from the client is not a direct reflection of the player’s commands, but the result of a complex, expensive algorithm such as a pathfinder or a physics simulation.
In the case of a pathfinding algorithm, the solution is expensive to compute but cheap to verify: while the client needs to explore all possible paths in order to compute the optimal route, the server only needs to check that the optimal route is valid — that it does not pass through any obstacles or violate the game’s laws of movement.
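In code, the asymmetry is easy to see. The client may have run A* over thousands of nodes; the server just walks the submitted path once. (The obstacle map here is a toy stand-in.)

```python
import math

class World:
    """Toy obstacle map: a set of blocked (x, y) grid cells."""
    def __init__(self, blocked_cells):
        self.blocked = blocked_cells

    def is_blocked(self, point):
        return (round(point[0]), round(point[1])) in self.blocked

def path_is_valid(path, world, max_step=1.0):
    """O(n) server-side check -- no search required."""
    for a, b in zip(path, path[1:]):
        if math.dist(a, b) > max_step:    # no teleporting between waypoints
            return False
        if world.is_blocked(b):           # no walking through walls
            return False
    return True
```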
For something like a physics simulation, the problem is much more challenging. In many cases, it is not possible for the server to re-run all of the differential equations needed to simulate gravity and collisions, because that would be too expensive. In such cases, the server may have to use a rough approximation — something that is cheaper to calculate. This leaves open a window for cheaters to slightly tweak the rules of motion, such as subtly altering the trajectory of the basketball so that it enters the hoop instead of missing.
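One common compromise is to check the client’s reported trajectory against a plausibility envelope instead of re-simulating it — sketched here with invented limits:

```python
import math

MAX_LAUNCH_SPEED = 15.0   # fastest throw the game allows (illustrative)

def trajectory_plausible(samples, slack=1.5):
    """Cheap sanity check on a client-reported projectile path.

    `samples` is a list of (t, x, y) tuples. Rather than integrating
    the equations of motion, we only confirm that consecutive samples
    stay within a velocity bound. A cheat that nudges the ball a few
    centimeters per frame will sail right through -- which is exactly
    the window for cheating described above.
    """
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            return False
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > MAX_LAUNCH_SPEED * slack:
            return False
    return True
```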
Forbidden Knowledge. Some kinds of games rely on hidden information. Battle simulations often allow for players to use elements of the terrain or “stealth” abilities to hide their presence.
Ideally in a client/server world we would avoid downloading this information until the exact moment when a player is permitted to access it, which would mean that there is no way to hack the client to reveal this information. However, in practice this is not possible because of network latency. In order to get good real-time performance, we need to download the data in advance of when it is needed.
If the data to be downloaded is very large and relatively unchanging, we can encrypt the data in advance, download it, and then only send the decryption key at the last possible moment. However, most of the interesting ‘hidden’ information is not in the form of bulk downloads, but in the form of real-time updates.
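The encrypt-in-advance trick just described might look like this, using an off-the-shelf symmetric cipher (the transport function is a stand-in for the game’s network layer):

```python
from cryptography.fernet import Fernet   # pip install cryptography

def send_to_client(blob):
    """Stand-in for the game's network layer (hypothetical)."""

# Build time: encrypt the hidden zone's map data once.
key = Fernet.generate_key()
encrypted_zone = Fernet(key).encrypt(b"...secret dungeon layout...")

# Ship the blob to every client in advance; without the key,
# it's just noise on disk, no matter how the client is hacked.
send_to_client(encrypted_zone)

# At the exact moment the player earns access, send the (tiny) key.
send_to_client(key)
layout = Fernet(key).decrypt(encrypted_zone)   # client side
```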
For example, suppose we have a “stealth” ability that makes the player invisible to anyone more than 10 meters away. To implement this, we program the client to not display stealthed characters when they are farther than 10 meters. However, if the player were to modify this code and remove this restriction, they would be able to see all of the stealthed characters, making the ability useless.
One solution to prevent cheating would be to have the server not send position updates for that character to any player located more than 10 meters away. Those players won’t know where the stealthed character is, even if the client is modified, since the information is not present on the client.
However, if it takes 100ms to send a position update, and the player is running directly towards the stealthed character at 20 meters/second, then by the time they get that position update they will have traveled an additional 2 meters closer, putting them just 8 meters from the stealthed character — which might be just enough to make the difference between a successful ambush and an unsuccessful one.
In this scenario, we can make short term predictions about when the data will be needed — we can start sending position updates when the player is within, say, 15 meters rather than 10. The code on the client will remain the same — only show the stealthed character when they are within 10 meters. If a cheater is able to remove this restriction, they can only extend the range of visibility from 10 meters to 15 — a minor advantage.
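A minimal sketch of that server-side filter (the radii and entity shape are invented for the example):

```python
import math
from dataclasses import dataclass

STEALTH_RADIUS = 10.0   # what the rules let the player see
SEND_RADIUS = 15.0      # what the server actually transmits (latency buffer)

@dataclass
class Entity:
    position: tuple
    stealthed: bool

def updates_for(player_pos, entities):
    """Server side: decide which position updates a player receives.

    Stealthed characters beyond SEND_RADIUS are never sent at all, so
    no client hack can reveal them. The 5-meter buffer exists only to
    hide latency; a modified client gains at most that margin.
    """
    for e in entities:
        if e.stealthed and math.dist(player_pos, e.position) > SEND_RADIUS:
            continue    # the information never leaves the server
        yield e
```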
However, it is not always possible to make such predictions. If a character is 100 meters away and hiding behind a tree, it is very expensive for the server to perform a line-of-sight calculation to determine whether any part of that character is visible from the player’s viewpoint and not obscured by the terrain. Worse, the visibility of each character changes from one millisecond to the next as players move around the battlefield, far faster than the server can keep up with.
The only general solution in this case is to not allow the client to be modified. But as we shall see, that is easier said than done.
Augmented Skills. As previously mentioned, this is the most subtle type of cheat, and therefore the most difficult to counter.
Let’s say we have a simple aim bot that allows the player to target and hit their opponent 100% of the time. How can the server tell that the player is cheating, as opposed to merely being very skilled? One might say that human dexterity and skill have limits, and a 100% hit rate is unlikely. But what about an aim bot that only increases the player’s hit percentage by 20%? That is still a significant advantage, but much harder to detect.
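One detection approach is a simple binomial tail test: how likely is this hit rate for an honest player? (The baseline rate and threshold below are invented; estimating a fair per-player baseline is the genuinely hard part.)

```python
from math import comb

def tail_probability(shots, hits, base_rate):
    """P(at least `hits` hits in `shots` attempts) for an honest
    player whose true hit rate is `base_rate` -- the binomial tail."""
    return sum(
        comb(shots, k) * base_rate**k * (1 - base_rate)**(shots - k)
        for k in range(hits, shots + 1)
    )

# Suppose honest players land ~40% of shots; this one landed 60 of 100.
p = tail_probability(100, 60, 0.40)
if p < 1e-4:    # roughly a four-sigma event
    print(f"flag for human review: chance of this by skill alone is {p:.1e}")
```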
Once again, the only solution is to somehow prevent the client from being modified. Either that, or remove all real-time features from the game so that the player’s dexterity and reaction times don’t matter. But even that’s not enough — you could make an “aim bot” for chess if you wanted…
The essential problem is that the server has no way to detect the presence of a modified client. All the server ‘sees’ is the data packets coming from the client; it knows nothing about the program that sent those packets. And because the client is the program responsible for sending those packets, it is in complete control of what the server ‘sees’.
Is it possible for a server to know whether or not a client has been tampered with?
I once posed this question to Internet pioneer Vint Cerf, at a different technical conference (ACM97). He didn’t have a good answer for that one, but thought it would make a good master’s thesis. I disagree; it’s not a good thesis because the answer is trivial: “no”.
What about the idea of digitally signing the client program, and having the server request that signature? This won’t work because it is easily circumvented — simply keep a copy of the original, unmodified client around and compute the signature based on that.
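The flaw is easy to demonstrate. Suppose the server sends a random challenge and asks the client to hash itself (a made-up handshake, purely for illustration):

```python
import hashlib

def honest_response(challenge: bytes) -> str:
    """What the designer intended: hash the running executable."""
    with open("/usr/bin/game_client", "rb") as f:   # illustrative path
        return hashlib.sha256(challenge + f.read()).hexdigest()

def cheater_response(challenge: bytes) -> str:
    """What actually happens: the modified client hashes a pristine
    copy it kept around. The two responses are byte-for-byte
    identical, so the server learns nothing."""
    with open("/home/cheater/pristine_copy", "rb") as f:
        return hashlib.sha256(challenge + f.read()).hexdigest()
```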
The only way a digital signature approach could work is for the signature to be enforced at the operating system level, as happens in some game consoles. In these environments, the OS will not allow the program to run unless it has a valid signature from a trusted authority, which a modified program would not have. However, such an idea is fundamentally incompatible with the principles of free software — the idea that the user gets to control what programs run on their computer. (You may notice that there is very little, if any, FOSS software on game consoles.)
A note on add-ons: many game clients, such as World of Warcraft, allow the player to customize the behavior of the game by programming small plugins called ‘add-ons’ in a scripting language. These add-ons are, for all intents and purposes, open source programs that run within the game, so one might imagine that they would make it trivial to cheat.
However, these game clients are very careful to maintain a firewall between the game world and the sandboxed execution environment of the add-ons. Add-ons are unable to directly affect game play or retrieve information about the player’s location within the world or the objects in their immediate vicinity. Instead, add-ons are only allowed to access the user interface of the game, as well as a few other data sets like the inventory and spell book, and only in strictly limited ways. For example, an add-on cannot cast a spell unless it is triggered by a keypress or mouse click, and even then it can only cast a single spell in response to that user event.
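You can imagine the general shape of such a firewall — this is a generic sketch of the idea, not the actual WoW API:

```python
class AddOnAPI:
    """Hypothetical sandboxed surface exposed to add-on scripts.

    Add-ons may read UI-level data, but every game-affecting action
    must be tied to a hardware event the sandbox itself witnessed.
    """
    def __init__(self, game):
        self._game = game
        self._pending_event = None    # set only by the real input layer

    def _on_hardware_event(self, event):
        # Called by the game's own input handler, never by add-on code.
        self._pending_event = event

    def cast_spell(self, spell_id):
        if self._pending_event is None:
            raise PermissionError("requires a keypress or mouse click")
        self._game.cast(spell_id)
        self._pending_event = None    # one action per user event

    def get_inventory(self):
        return list(self._game.inventory)    # read-only copy

    # Deliberately absent: get_player_position(), get_nearby_enemies(), ...
```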
The WoW add-on API is a fascinating study in cheat prevention.
As I see it, there is no way to make an MMO client that is free software without opening it up to the risk of cheating. While it is possible to hack a pre-compiled, closed-source binary, in practice it is very difficult. Free and open source software makes it much, much easier.
What about the idea of making just the server part open source, and keeping the client closed? There are several problems with this. First and foremost, it doesn’t sit well with the ideals of free software.
Also, an open source server would have to have an open network protocol as well — that is, a protocol that is easily understood and not obfuscated. After all, anyone could examine the published source code of the server and learn all the details of how the protocol works.
This means that even if a cheater was unable to modify the client, they could potentially replace it. They could write a brand new client that pretended to be the “official” client, using the same open protocol specification. The server would have no way to detect that this had happened.
It might be possible to make a very simple game that had no real-time play, no hidden information, no features amenable to cheating. But that wouldn’t allow for the kind of immersive simulation that I find interesting.
The annoying part about all of this, for me personally, is that I often think about what an open-source MMORPG server would look like, from a technical and architectural perspective. For some reason my brain won’t let go of this idea but insists on worrying at it like some bit of food caught between my teeth. The fact that I know that it’s a fool’s quest only makes it more frustrating.