I’ve just finished v0 of a simple turn-based strategy game, EmojiTactics. You can play it here: [mobile/desktop]. It’s similar in spirit to Hearthstone or Magic: The Gathering. It also has a curious property: every graphical element in the game is an emoji. This post rambles a bit about that and other dev topics that you might find boring.
Emojis Only, Please
Apart from helping make the game tiny, using emojis has other odd consequences. One of these is that the game looks and feels quite different depending on what platform you’re using:
The 300k Clown Car
The whole 72-level game checks in at about 300k. In large part, that’s because the images I’m using are already on your device. Here is a rough breakdown of where those other kilobytes end up:
EmojiTactics has a few sounds for use during encounters. The sounds are all absurdly downsampled to a 1k bitrate with ffmpeg:
ffmpeg -i in.wav -ac 1 -t 0:00:01 -ab 1k out.mp3
base64 -i out.mp3
new Audio('data:audio/mp3;base64,<base64 content>')
Instant inline audio.
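Stitched together, the pipeline above amounts to something like this tiny helper. The `buildDataUri` name is invented for illustration; it just glues the base64 output of the shell commands onto the data-URI prefix the game feeds to `Audio`:

```javascript
// Hypothetical helper: wrap a base64-encoded MP3 in a data URI.
// The base64 payload comes from the ffmpeg + base64 steps above.
function buildDataUri(base64Mp3) {
  return 'data:audio/mp3;base64,' + base64Mp3;
}

// In the browser: new Audio(buildDataUri('<base64 content>')).play();
```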
A piece of advice for all the aspiring game developers out there: your game is far more complex and incomprehensible than you think it is. I went through two rounds of significant simplification, and the game is still pretty confusing for newcomers. I’ve even heard this advice before. I just didn’t pay enough attention to it.
The wiki on the subreddit helps a lot for getting into EmojiTactics.
Each unit in EmojiTactics has a cost, and I had hoped to do something a little more scientific than guess. Enter a tiny bit of machine learning.
Tensorflow.js can do a lot of nifty neural net things, but you can also use it to learn (solve for) a few variables that you don’t want to hard-code. For EmojiTactics, I trained 11 coefficients in a per-unit scoring expression that sums the following terms:
Each unit starts with four base scores: health, damage, movement, and ability.
1. health
7. ability × (health + movement)
8. (ability × (health + movement))²
9. damage × health
10. (damage × health)²
11. ability × damage × health
Terms 7 and above are “feature crosses” in machine-learning terminology: they help encode scoring logic when the scores aren’t completely independent. For instance, in term 9 we multiply a damage score by a health score. Why? Having one unit with a very high offense and very high defense is better than having two units that specialize in each independently.
Similarly, in term 7, units with specialized abilities are easier to protect if they have good health or good movement (slower units cannot attack them).
The squared terms help dampen the value of unusually high scores. The learned coefficients here are usually negative. Why? Very powerful units can be countered with some specialized abilities, so over-investing in a single unit isn’t a good idea.
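For concreteness, here is a sketch of what that scoring expression looks like as a function. The coefficient array and the `unitScore` name are invented, and terms 2–6 aren’t listed in the post, so only the named terms appear:

```javascript
// Sketch of the per-unit scoring expression. `c` holds the 11 learned
// coefficients; terms 2-6 are elided because the post doesn't list them.
function unitScore(unit, c) {
  const { health, damage, movement, ability } = unit;
  const cross = ability * (health + movement); // basis of terms 7 and 8
  const dh = damage * health;                  // basis of terms 9 and 10
  return (
    c[0] * health +                    // term 1
    // ... terms 2-6 elided ...
    c[6] * cross +                     // term 7
    c[7] * cross * cross +             // term 8 (squared; coefficient usually negative)
    c[8] * dh +                        // term 9
    c[9] * dh * dh +                   // term 10 (squared)
    c[10] * ability * damage * health  // term 11
  );
}
```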
What about training data? I generated it by simulating random games (see “The AI” below) with teams of 1–5 units and getting a win/loss value for each.
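The fitting step can be sketched without Tensorflow.js at all. The post trains with tfjs; the stand-in below uses plain stochastic gradient descent on a logistic loss, with a made-up sample format — each sample pairs a feature vector (one entry per term) with a 1/0 win/loss label:

```javascript
// Stand-in for the tfjs training step: fit coefficients by SGD on a
// logistic loss. Sample shape { x: [features], y: 0 or 1 } is invented.
function sigmoid(z) { return 1 / (1 + Math.exp(-z)); }

function fit(samples, nTerms, epochs = 2000, lr = 0.1) {
  const c = new Array(nTerms).fill(0);
  for (let e = 0; e < epochs; e++) {
    for (const { x, y } of samples) {
      // Predicted win probability from the current coefficients.
      const p = sigmoid(x.reduce((s, xi, i) => s + c[i] * xi, 0));
      // Gradient step of the logistic loss for each coefficient.
      for (let i = 0; i < nTerms; i++) c[i] += lr * (y - p) * x[i];
    }
  }
  return c;
}
```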
The branching factor in EmojiTactics isn’t very large, which means a brute-force approach isn’t a bad idea. EmojiTactics uses the minimax algorithm, which searches the tree of possible plays and assumes that your opponent will always choose their best move. Although the tree isn’t huge, there is still a limit on how many moves ahead you can search; when a maximum depth is hit, the AI just estimates the “winnability” of the game state at that point. Fortunately, the machine-learned unit-valuation function above already does exactly that, so I re-use it to perform the estimate.
One gotcha here: the minimax algorithm can produce moves that appear bad. In particular, when the algorithm has decided that all options lead to a loss, or all lead to a win, every move carries the same weight, and the AI can start looking like it’s making terrible moves (especially if it’s thinking far enough ahead). Another way to say this is:
The minimax algorithm makes no attempt to win quickly.
It also makes no attempt to delay an “inevitable” loss. To get around this, I wrote a little extra code that breaks ties by picking the move with the “greediest” valuation. This helps a lot, but I suspect that tie-breaking alone may not be enough: I may need to pick the greedy option even when its true valuation is only slightly worse. A little more experimentation is required here. Stay tuned.
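The pieces above fit together roughly like this. The state API (`moves`, `apply`, `isOver`, `evaluate`) is hypothetical, and `evaluate` stands in for the learned unit-valuation function used at the depth cutoff:

```javascript
// Sketch of depth-limited minimax plus the greedy tie-break described
// above. All method names on `state` are illustrative, not the game's.
function minimax(state, depth, maximizing) {
  if (depth === 0 || state.isOver()) return state.evaluate();
  const best = maximizing ? Math.max : Math.min;
  let value = maximizing ? -Infinity : Infinity;
  for (const move of state.moves()) {
    value = best(value, minimax(state.apply(move), depth - 1, !maximizing));
  }
  return value;
}

function pickMove(state, depth) {
  let bestValue = -Infinity, candidates = [];
  for (const move of state.moves()) {
    const v = minimax(state.apply(move), depth - 1, false);
    if (v > bestValue) { bestValue = v; candidates = [move]; }
    else if (v === bestValue) candidates.push(move);
  }
  // Tie-break: among equally-valued lines, take the greediest immediate payoff.
  return candidates.reduce((a, b) =>
    state.apply(a).evaluate() >= state.apply(b).evaluate() ? a : b);
}
```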
Let’s Skip Natural Language While We’re at It
Pictures and icons get you pretty far, and EmojiTactics has virtually no text, so it’s pretty i18n friendly.