How I Built Emojitracker

Adventures in Unicode, Real-time Streaming, and Media Culture

Matthew Rothenberg
Dec 9, 2013 · 30 min read


Emojitracker was one of those projects that was supposed to be a quick weekend hack but turned into an all-consuming project that ate up my nights for months. Since its launch in early July, Emojitracker has processed over 1.8 billion tweets, and has been mentioned in approximately a gajillion online publications.

Emojitracker wasn’t my first megaproject, but it is definitely the most complex architecturally.

While the source code for emojitracker has been open-source since day one, the technical concepts are complex and varied, and the parts of the code that are interesting are not necessarily obvious from browsing the code. Thus, rather than a tutorial, in this post I intend to write about the process of building emojitracker: the problems I encountered, and how I got around them.

This is a bit of a brain dump, but my hope is it will be useful to others attempting to do work in these topic areas. I have benefited greatly from the collective wisdom of others in the open-source community, and thus always want to try to do my best to contribute back domain knowledge into the commons.

This post is long, and is primarily intended for a technical audience. It details the origin story and ideas for emojitracker, the backend architecture in detail, frontend client issues with displaying emoji and high-frequency display updates, and the techniques and tools used to monitor and scale a multiplexed real-time data streaming service across dozens of servers with tens of millions of streams per day (on a hobby project when you don’t have any advance warning!).

Prologue: Why Emoji?

These fingers wrote a lot of emoji code, they earned it.

I’ve always had a soft spot for emoji. My friends and colleagues know that emoji makes an appearance in many aspects of my life, including my wifi network, LinkedIn recommendations, and domain names. I even once signed a part-time employment contract stipulating emoji was my native language and that all official notices be provided to me that way (which I don’t necessarily endorse). Oh, and then there’s the emoji nail art (which I do endorse).

I’d been playing around with the idea of realtime streaming from the Twitter API on a number of previous projects (such as goodvsevil, which was the spiritual predecessor to emojitracker), and I was curious about seeing how far I could push it in terms of number of terms monitored. At 842 terms to track, emoji seemed like a prime candidate.

Emoji are also a great way to get insight to the cultural zeitgeist of Twitter: the creative ways in which people appropriate and use emoji symbols is fascinating, and I hoped to be able to build a lens that would enable one to peer into that world with more detail.

And finally, (and quite foolishly) emoji seemed simple at the time. Normally I try to pick hacks that I can implement pretty quickly and get out into the world within a day or two. Boy, was I wrong in this case. Little did I know how complex emoji can be… This post is a testament to the software development journey emoji brought me on, and the things I learned along the way.

Background Understanding: Emoji and Unicode

The history of Emoji has been written about in many places, so I’m going to keep it brief here and concentrate more on the technical aspects.

TLDR: Emoji emerged on feature phones in Japan, there were a number of carrier specific implementations (Softbank/KDDI/Docomo), each with its own incompatible encoding scheme. Apple’s inclusion of Emoji on the iPhone (originally region-locked to Asia but easily unlocked with third-party apps) led to an explosion in global popularity, and now Emoji represents the cultural force of a million voices suddenly crying out in brightly-colored pixelated terror.

For some background music, watch Katy Perry demonstrate why she should have been a primary delegate on the Unicode Consortium Subcommittee on Emoji at http://youtu.be/e9SeJIgWRPk.

But for the modern software developer, there are a few main things you’ll need to know to work with Emoji. Things got a lot better in late 2010 with the release of Unicode 6.0… mostly. The emoji glyphs were mostly standardized to a set of Unicode codepoints.

Now, you may be thinking: “Wait, standards are good, right? And why do you say ‘mostly’ standardized, that sounds suspicious…”

Of course, you’d be correct in your suspicions. Standardization is almost never that simple. For example, take flags. When the time came to standardize Emoji codepoints, everyone wanted their country’s flag added to the original 10 in the Softbank/DoCoMo emoji. This had the potential to get messy fast, so instead what we ended up with were 26 diplomatically-safe “Regional indicator symbols” set aside in the Unicode standard. This avoided polluting the standard with potentially hundreds of codepoints that could quickly become outdated with the evolving geopolitical climate, while preserving Canada’s need to assert their flag’s importance to the Emoji standardization process:

These characters can be used in pairs to represent regional codes. In some emoji implementations, certain pairs may be recognized and displayed by alternate means; for instance, an implementation might recognize F + R and display this combination with a symbol representing the flag of France.

Note the standards-body favorite phrases “CAN BE” and “MAY BE” here. This isn’t a “MUST BE,” so in practice, none of the major device manufacturers have actually added new emoji art flags, infuriating iPhone-owning Canadians every July 1st:

https://twitter.com/withloveclaudia/statuses/351744535291887616

For a detailed and amusing exploration of this and other complex issues surrounding the rough edges of Unicode, I highly recommend Matt Mayer’s “Love Hotels and Unicode” talk, which was invaluable in helping my understanding when parsing through these issues.

For these double-byte emoji glyphs, the popular convention is to represent them in ID string notation with a dash between the codepoint identifiers, such as 1F1EB-1F1F7.

This of course makes the life of someone writing Emoji-handling code more difficult, as pretty much all the boilerplate you’ll find out there assumes a single Unicode code point per character glyph (since after all, this was the problem that Unicode was supposed to solve to begin with).

For example, say you want to parse and decode an emoji character from a UTF-8 string to identify its unified codepoint identifier. Conventional wisdom would be that this a simple operation, and you’ll find lots of sample code that looks like this:

# return unified codepoint for a character, in hexadecimal
def char_to_unified(c)
  c.unpack("U*").first.to_s(16)
end

If you have a sharp eye, you’ll probably notice the danger zone of using first() to collapse the unpacked array down to a single value: we’re assuming we’re always going to get one value back from the unpack(), since we only sent one character in. And in most cases, it will of course work fine. But for our strange double-byte emoji friends, this won’t work, since that unpack() operation is actually going to return two values, the second of which we’ll be silently ignoring. Thus, if we pass in the American Flag emoji character, we’ll get back 1f1fa—which represents the (rather boring on its own) REGIONAL INDICATOR SYMBOL LETTER U:

Figure 1: Not the American Flag.

So instead, we have to do some string manipulation hijinks like this:

# return unified codepoint for a character, in hexadecimal.
#  - account for multibyte characters, represent with dash.
#  - pad values to uniform length.
def char_to_unified(c)
  c.codepoints.to_a.map { |i| i.to_s(16).rjust(4, '0') }.join('-')
end

Now, char_to_unified() on a UTF-8 string containing the American Flag emoji will return the properly patriotic value 1f1fa-1f1f8.

Figure 2: The land of the free, and the home of the brave.

Victory!

Surprisingly, there wasn’t a good Ruby library in existence to handle all this (most existing libraries concentrate on encoding/decoding emoji strictly in the :shorthand: format).

Thus, I carved that portion of the work in emojitracker out into a general purpose library now released as its own open source project: emoji_data.rb. It handles searching the emoji space by multiple values, enumeration, convenience methods, etc. in a very Ruby-like way.

For example, you can do the following to find the short-name of all those pesky double-byte Emoji glyphs we mentioned:

>> EmojiData.all.select(&:doublebyte?).map(&:short_name)
=> ["hash", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "cn", "de", "es", "fr", "gb", "it", "jp", "kr", "ru", "us"]

For more examples, check out its README. This library is consistently used across almost all of the different software projects that make up Emojitracker, and hopefully will be useful for anyone else doing general purpose Emoji/Unicode operations!

Emojitracker Backend Architecture

Here’s the overall architecture for Emojitracker in a nutshell: A feeder server receives data from the Twitter Streaming API, which it then processes and collates. It sends that data into Redis, but also publishes a realtime stream of activity into Redis via pubsub streams. A number of web streamer servers then subscribe to those Redis pubsub streams, handle client connections, and multiplex subsets of that data out to clients via SSE streaming.

We’ll talk about all of these components in detail in this section.

Feeding the Machine: Riding the Twitter Streaming API

If you’re doing anything even remotely high volume with Twitter, you need to be using the Streaming APIs instead of polling. The Streaming APIs allow you to create a set of criteria to monitor, and then Twitter handles the work of pushing updates to you whenever they occur over a single long-life socket connection.

In the case of Emojitracker, we use our EmojiData library to easily construct an array of the Unicode chars for every single Emoji character, which we then send to the Streaming API as track variables for status/filter. The results are easy to consume with a Ruby script utilizing the TweetStream gem, which abstracts away a lot of the pain of dealing with the Twitter Streaming API (reconnects, etc) in EventMachine.

From this point it’s simple to have an EventMachine callback that gets triggered by TweetStream every time Twitter sends us a matching tweet. It’s important to note that the Streaming API doesn’t tell you which track term was matched, so you have to do that work yourself by matching on the content of the tweet.

Also, keep in mind that it’s entirely possible (and in our case, quite common!) for a tweet to match multiple track terms—when this happens, the Twitter Streaming API is still only going to send it to you once, so it’s up to you to handle that in the appropriate fashion for your app.

Then, we simply increment the count for each emoji glyph contained in the tweet (but only once per glyph) and also push out the tweet itself to named Redis pubsub streams (more details on the structure for this in the next section).
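Sketched out, the Twitter-facing half of the feeder looks something like this (a minimal sketch assuming the TweetStream 2.x configuration API, with credentials pulled from the environment and the Redis side elided):

require 'tweetstream'
require 'emoji_data'

TweetStream.configure do |config|
  config.consumer_key       = ENV['TWITTER_CONSUMER_KEY']
  config.consumer_secret    = ENV['TWITTER_CONSUMER_SECRET']
  config.oauth_token        = ENV['TWITTER_OAUTH_TOKEN']
  config.oauth_token_secret = ENV['TWITTER_OAUTH_TOKEN_SECRET']
  config.auth_method        = :oauth
end

client = TweetStream::Client.new
client.on_error { |message| STDERR.puts "ERROR: #{message}" }

# track all 842 emoji chars; the callback fires once per matching tweet,
# so we re-derive which chars actually matched ourselves
client.track(*EmojiData.chars) do |status|
  matches = EmojiData.chars.select { |c| status.text.include?(c) }
  # ...increment scores and publish to Redis here (see the next sections)
end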

The JSON blob that the Twitter API sends for each tweet is pretty massive, and at a high rate this will get bandwidth intensive. The feeder process for Emojitracker is typically receiving a full 1MB/second of JSON data from Twitter’s servers.

Since in our case we’re going to be re-broadcasting this out at an extremely high rate to all the streaming servers, we want to trim this down to conserve bandwidth. Thus we create a new JSON blob from a hash containing just the bare minimum to construct a tweet: tweet ID, text, and author info (permalink URLs are predictable and can be recreated with this info). This reduces the size by 10-20x.
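Here’s a sketch of what that “small-ification” might look like, using the Oj JSON library (more on that in a moment); the exact field names here are illustrative, not necessarily the production schema:

require 'oj'

# build a minimal JSON blob from a status object; permalink URLs can be
# reconstructed client-side from the tweet ID and screen name
def tiny_json(status)
  Oj.dump({
    'id'          => status.id.to_s,
    'text'        => status.text,
    'screen_name' => status.user.screen_name,
    'name'        => status.user.name
  })
end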

As long as you drop in a performant JSON parsing engine (I use and highly recommend Oj), you can do all this parsing and recombining with relatively low server impact. Swap in hiredis for an optimized Redis driver and things can be really fast and efficient: the feeder component for Emojitracker is acting upon ~400-500 tweets per second at peak, but still only operates at ~10-12% CPU utilization on the server it runs on, in MRI Ruby 1.9.3. In reality, network bandwidth will be the biggest constraint once your code is optimized.

Data Storage: Redis sorted sets, FIFO, and Pubsub streams

Redis is an obvious data-storage layer for rapidly-changing and streaming data. It’s super fast, has a number of data structures that are ideally suited for this sort of application, and additionally its built-in support for pubsub streaming enables some really impressive ways of shuffling data around.

For emojitracker, the primary piece of data storage we have is a set of emoji codepoint IDs and their respective counts. This maps very well to the Redis built-in data structure Sorted Set, which conveniently maps strings to scores, and has the added benefit of making it extremely fast to query that list sorted by the score. From the Redis documentation:

Redis Sorted Sets are, similarly to Redis Sets, non repeating collections of Strings. The difference is that every member of a Sorted Set is associated with score, that is used in order to take the sorted set ordered, from the smallest to the greatest score. While members are unique, scores may be repeated. With sorted sets you can add, remove, or update elements in a very fast way (in a time proportional to the logarithm of the number of elements). Since elements are taken in order and not ordered afterwards, you can also get ranges by score or by rank (position) in a very fast way.

This makes keeping track of scores and rank trivially easy. We can simply fire off ZINCRBY increment commands to the set for the equivalent emoji codepoint ID every time we see a match—and then call ZRANK on an ID to find out its current position, or use ZRANGE WITHSCORES to get the entire list back in the right order with the equivalent numbers for display.
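With the redis-rb gem, that boils down to a handful of calls (a quick sketch, using the reversed variants since we want highest scores first; the key name matches the feeder code later in this section):

# bump the count for an emoji codepoint ID
REDIS.zincrby('emojitrack_score', 1, '1f602')

# where does it currently rank? (0-based, highest score first)
REDIS.zrevrank('emojitrack_score', '1f602')

# the whole board, in order, with scores for display
REDIS.zrevrange('emojitrack_score', 0, -1, with_scores: true)
# => [["1f602", 1234567.0], ["2665", 987654.0], ...]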

This gives us an easy way to track the current score and ranking, but we want to stream updates in realtime to clients, so what we really need in addition is a way to send those update notifications out. Thankfully, Redis PUBLISH and SUBSCRIBE are essentially perfect for that.

With Redis Pubsub streams, the feeder can simply publish any updates to a named stream, which any client can subscribe to in order to receive all messages. In Emojitracker, we publish two types of streams:

  1. General score updates. Anytime we increment a score for an Emoji symbol, we also send an activity notification of that update out to stream.score_updates.
  2. Tweet streams. 842 different active streams for these (one for each emoji symbol). This sounds more complex than it is—in Redis, streams are lightweight and you don’t have to do any work to set them up, just publish to a unique name. For any matching Tweet, we just publish our “small-ified” JSON blob to the equivalent ID stream. For example, a tweet matching both the dolphin and pistol emoji symbols would get published to the stream.tweet_updates.1f42c and stream.tweet_updates.1f52b streams.
Illustration: crossing 842 pubsub streams with a single PSUBSCRIBE statement in Redis.

Clients can then subscribe to whichever streams they are interested in, or use wildcard matching (PSUBSCRIBE stream.tweet_updates.*) to get the aggregate of all tweet updates.
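A sketch of that wildcard subscription with redis-rb (note the dedicated connection: once a Redis connection subscribes, it can’t issue other commands):

require 'redis'

# pattern-subscribe to every per-emoji tweet stream
Redis.new.psubscribe('stream.tweet_updates.*') do |on|
  on.pmessage do |_pattern, channel, message|
    cp = channel.split('.').last # e.g. "1f42c"
    # route the small-ified tweet JSON for that codepoint onward...
  end
end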

While this live stream of tweets in Emojitracker is mostly powered by the aforementioned Pubsub streams, there are cases where they won’t work. For example, when a new web client connects to a detail stream it’s necessary to “backfill” the most recent 10 items for display so that the client starts with some data to show the user (especially on the less frequently used emoji symbols).

A totally ridiculous illustration of a FIFO queue I found on the web. I decided it required some emojification.

Redis doesn’t have a built-in concept of a fixed-size FIFO queue (possibly more accurately described as a fixed-size evicting queue?), but this is easy to emulate by using LPUSH and LTRIM. Push to one side of a list, and then immediately trim from the other to maintain the fixed length. Like most things in Redis, it doesn’t matter if these commands come out of order, it will balance out and the overall size of the list will remain relatively constant. Easy-peasy.

Putting it all together, here’s the relevant section of source code from the Ruby program that feeds Redis from the Twitter streaming API (I included the usage of the aforementioned EmojiData library to do the character conversion):

matches = EmojiData.chars.select { |c| status.text.include?(c) }
matches.each do |matched_emoji_char|
  # get the unified codepoint ID for the matched emoji char
  cp = EmojiData.char_to_unified(matched_emoji_char)
  REDIS.pipelined do
    # increment the score in a sorted set
    REDIS.zincrby 'emojitrack_score', 1, cp
    # stream the fact that the score was updated
    REDIS.publish 'stream.score_updates', cp
    # for each emoji char, store most recent 10 tweets in a list
    REDIS.lpush "emojitrack_tweets_#{cp}", status_json
    REDIS.ltrim "emojitrack_tweets_#{cp}", 0, 9
    # also stream all tweet updates to named streams by char
    REDIS.publish "stream.tweet_updates.#{cp}", status_json
  end
end

It’s common knowledge, but worth repeating: Redis is highly performant. The current instance powering Emojitracker routinely peaks at 2000-4000 operations/second, and is only using ~3.98MB of RAM.

Pushing to Web Clients: Utilizing SSE Streams

When thinking about streaming data on the web, most people’s thoughts will immediately turn to WebSockets. It turns out that if you don’t need bidirectional communication, there is a much simpler, well-suited technology that accomplishes this over normal HTTP connections: Server-Sent Events (SSE).

I won’t go into detail about the SSE protocol (the above link is a great resource for learning more about it); instead I’ll just say it’s trivially easy to handle SSE in Javascript: the full logic for subscribing to an event source and passing events to a callback handler can be accomplished in barely more than a single line of code. The protocol will automatically handle reconnections, etc. The more interesting aspect for us is how we handle this on the server side.

Each web streamer server maintains two connection pools:

  1. The raw score stream — anything connected here is going to get everything rebroadcast from the score update stream, and everyone gets the same thing. Pretty simple.
  2. The tweet detail updates queue is more complex. We use a connection wrapper that maintains some state information for each client connected to the stream. All web clients receiving tweet detail updates from the streaming server are actually in the same connection pool, but when they connect they pass along as a parameter the ID of the emoji character they want updates on, which gets added to their wrapper object as tagged metadata. We later use this to determine which updates they will receive.

There are typical Sinatra routes that handle incoming stream connections, and essentially all they do is use stream(:keep_open) to hold the connection open, and then add the connecting client’s information to the connection pool. When the client disconnects, Sinatra removes it from that pool.
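A minimal sketch of such a route (using a bare Set as the connection pool; the real version uses the tagged wrapper objects described above, and this pattern requires an evented server like Thin):

require 'sinatra'
require 'set'

set :score_connections, Set.new

get '/subscribe/raw' do
  content_type 'text/event-stream'
  stream(:keep_open) do |out|
    settings.score_connections << out
    # fires when the client goes away, removing it from the pool
    out.callback { settings.score_connections.delete(out) }
  end
end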

In order to populate the SSE streams on the server side, we need to get the data out of Redis to pass along. Each web streamer server spawns two independent event-propagation threads, each of which issues a SUBSCRIBE to a Redis stream. Not surprisingly, these are the two types of streams we mentioned in the previous section: 1.) The overall score updates stream, and 2.) a wildcard PSUBSCRIBE representing the aggregate of all individual tweet streams.

Each thread then processes incoming events from the Redis streams, iterating over every client in the connection pool and writing data out to it. For the raw score updates, this is just a simple iteration; for the tweet details, each wrapped connection in the pool has its tag compared to the event ID of the current event, and is only written to in the case of a match.
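In sketch form, the score-update propagation thread looks something like this (again assuming redis-rb, plus the pool from the previous sketch):

Thread.new do
  # subscriber connections block, so this thread gets a Redis connection of its own
  Redis.new.subscribe('stream.score_updates') do |on|
    on.message do |_channel, msg|
      settings.score_connections.each { |out| out << "data:#{msg}\n\n" }
    end
  end
end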

The end result is a relatively efficient way to stream updates out to many clients simultaneously, even though they may be requesting/receiving different data.

Performance Optimizations for High Frequency SSE Streams

SSE is great, but when you start to approach hundreds of events per second, raw bandwidth is going to become a concern. For Emojitracker, a number of performance enhancements were necessary to reduce the bandwidth of the stream updates so that people without super-fat pipes could play along.

Note: both of these optimizations are probably overkill unless you are handling at least tens if not hundreds of events per second, but in extremely high-frequency applications they are the only way to make things possible.

Trim the actual SSE stream format as much as possible.
Every character counts here. SSE streams can’t be gzipped, so you need to be economical with your formatting. For example, the whitespace after the colon in the data: field is optional. One character multiplied by potentially hundreds of events per second adds up to quite a bit over time.

Consider creating a cached “rollup” version of the stream that aggregates events.

You’re never going to need to update your client frontend more than 60 times per second, as that’s above what humans can perceive. That seems pretty fast, but in Emojitracker’s case, we actually are high frequency enough that we typically have many score updates occur in every 1/60th-of-a-second tick.

Thus, instead of rebroadcasting each of these events immediately upon receiving them from the Redis pubsub stream, each web streamer holds them in an in-memory queue, which it expunges in bulk 60 times per second, rolling up the number of events that occurred for each ID in that timeframe.

Therefore, where normally in one 1/60th of a second tick we would send this:

data: 2665  \n\n
data: 1F44C \n\n
data: 1F44F \n\n
data: 1F602 \n\n
data: 2665 \n\n
data: 1F60B \n\n
data: 1F602 \n\n

We can instead send this:

data:{"2665":2,"1F44C":1,"1F44F":1,"1F602":2,"1F60B":1}\n\n

The size savings from eliminating the redundant data headers and repeat event IDs is nontrivial at scale (remember, no gzipping here!). You can compare and see the difference in action yourself by curl-ing a connection to emojitracker.com at /subscribe/raw and /subscribe/eps.
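In sketch form, the rollup amounts to buffering incoming IDs and flushing them on an EventMachine timer (names illustrative, and thread-safety hand-waved for brevity):

score_queue = []

# the Redis subscriber thread pushes each incoming ID onto score_queue;
# this timer drains it as a single rolled-up SSE event, 60x per second
EM.add_periodic_timer(1.0 / 60) do
  next if score_queue.empty?
  rollup = score_queue.inject(Hash.new(0)) { |counts, id| counts[id] += 1; counts }
  score_queue.clear
  settings.score_connections.each { |out| out << "data:#{Oj.dump(rollup)}\n\n" }
end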

Even though in emojitracker’s case we go with 60eps for maximum disco pyrotechnics, in many cases you can likely get away with far more aggressive rollups, broadcasting at 30eps or even 5-10eps while still maintaining the user experience of full-realtime updates.

Gotcha: Many “cloud” environments don’t properly support this (and a workaround)

The crux: after building all this in a development environment, I realized it wasn’t quite working correctly in production when doing load testing. The stream connection pool kept filling up, getting bigger and bigger, never reducing in size. After much spelunking, it turned out that the routing layer used by many cloud server providers prevents the web server from properly seeing a stream disconnection on their end. In an environment where we are manually handling a connection pool, this is obviously no good.

My solution was to hack in a REST endpoint where clients could send an asynchronous “I just disconnected” POST—the stream server would then manually expunge the client record from the pool.

I wasn’t 100% satisfied with this solution—I figured some portion of clients would disconnect without successfully transmitting the cleanup message (flaky net connections, for example). Thus, the stream server also sweeps for and manually disconnects all stream connections after they hit a certain stream age. Clients that are actually active will then automatically reestablish their connection. Again, it’s ugly, but it works. I maintained the appearance of a continuous stream without stutter by reducing the EventSource reconnect delay significantly.
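Both hacks are simple to sketch out, assuming the connection wrapper carries a client-supplied ID and a connected-at timestamp as described earlier (all names here are illustrative):

# clients POST here on page unload so we can expunge them immediately
post '/disconnect' do
  settings.tweet_connections.reject! { |c| c.client_id == params[:client_id] }
  204
end

# reaper: force-close any stream past a max age; genuinely live clients
# will transparently reconnect thanks to the short EventSource retry
EM.add_periodic_timer(60) do
  settings.tweet_connections.each do |c|
    c.out.close if Time.now - c.connected_at > 30 * 60
  end
end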

These were, of course, temporary hacks that were far less efficient in terms of extra HTTP requests (albeit ones that managed to carry emojitracker through its peak traffic). Thankfully, they are no longer needed. Very recently, Heroku finally rolled out labs support for WebSockets, which also fixes the underlying routing issues affecting SSE, thus removing the need for the workaround. (Thankfully, I made my workaround hacks enabled via a config variable, so once I added WebSockets support to my dynos I was able to quickly disable all those hacks and see everything worked fine.)

(You may be thinking that with all these workaround hacks it wasn’t worth hosting at Heroku at the time, and I should have just used my own conventional dedicated server. However, you’ll see later why this would have been a bad idea.)

With all these changes, one might wonder how I monitored the streaming connection pool to see how things were working. The answer: a custom admin interface.

Not crossing the streams: The admin interface

When attempting to debug things, I quickly realized that tailing a traditional log format is a really terrible way to attempt to understand what’s going on with long-lived streams. I hacked up a quick web interface showing me the essential information for the connection pools on a given web server: how many open connections and to whom, what information they were streaming, and how long those connections had been open:

Part of the stream admin interface for one of the web dynos, on launch day.

The stream admin interface is actually open to the public, so you can mess around with it yourself.

Having an admin interface like this was absolutely essential for being able to visually debug the status of the streaming pools. From just watching the logs, there’s no way I would have noticed the connection pool problem in the previous section.

Frontend Architecture

For the most part, there is nothing that surprising here. Consuming a SSE stream is a fairly simple endeavor in Javascript, with widespread browser support. However, there were a number of “gotchas” with secondary functionality that ended up being somewhat complex.

Rendering Emoji Glyphs

Spoiler alert: sadly, most web browsers don’t support emoji display natively (Google, get on this! Forget Google+, we want emoji in Chrome!). Thankfully, you can utilize Cal Henderson’s js-emoji project to sniff the browser and either serve native emoji unicode or substitute in images via JS for the other browsers.

For that though, you still need to host a few thousand images for all the different emoji symbols. If you’re going to want to display in more than one resolution, multiply that by 5x. To add to the problems, most of the existing emoji graphic sets out there (such as the popular gemoji) have unoptimized PNGs and are missing many common display resolutions.

I wanted to solve this problem once and for all, so I created emojistatic.

Emojistatic is a hosted version of the popular gemoji graphic set, but adds lots of optimizations. It has multiple common sizes, all losslessly compressed and optimized, hosted on GitHub’s fast infrastructure for easy access.

It does more too, out of necessity. There are unfortunately many other problems inherent in displaying emoji beyond just swapping in appropriate images. I’ll discuss some of them here, and try to show what the emojistatic library does to help address them.

Image combination to reduce HTTP requests
Swapping in images is great in some instances, but what if you are displaying a lot of emoji? For example, in emojitracker’s case, we are displaying all 842 emoji glyphs on the first page load, and making 842 separate HTTP requests to get the images would be crazy.

Image via emojinal art gallery.

Therefore, I built automatic CSS spritesheet generation into emojistatic. In the end, though, I used the embedded data-URI CSS technique instead of a spritesheet, because shuffling around literally thousands of copies of a 1MB image in memory could have grave performance implications. In order to facilitate this, I ended up spinning off another open-source tool, cssquirt, a Ruby gem to embed images (or directories of images) directly into CSS via the Data URI scheme.

In order to get this to work with js-emoji, I had to fork it to add support to it for using the data-URI technique instead of loading individual images. The changes are in a pull-request, but until the maintainer accepts it (nudge nudge), you’ll unfortunately have to use my fork.

Native emoji display: the cake is a lie
What a pain. At least it must be easier on those web clients that support Emoji fonts natively, right? Right?!?! If we just stick to Safari on a fancy new OSX 10.9 install, surely Apple’s love for technicolor cuteness will save us? …Unfortunately, no. (Insert loud sigh) Can’t anything ever be simple?

What doesn’t work properly? Well, if you have a string with mixed content (for example, most tweets containing both words and emoji characters), and you specify a display font in CSS, characters that have non-Emoji equivalents in their font-face will default to their ugly, normal boring versions. So you get a ☁︎ symbol instead of the lovely, fluffy emoji cloud the person used in their original tweet.

If you try to get around this on a Mac by forcing the font to AppleColorEmoji in CSS, you will have similarly ugly results, as the font actually contains normal alphanumeric characters, albeit with weird monospace formatting.

Native-rendering of an English tweet containing Emoji in Safari 7.0 on MacOSX 10.9.

To get around this problem, I stumbled upon the technique of creating a Unicode-range restricted font-family in CSS, which lets us instruct the browser to only use the AppleColorEmoji font for those particular 842 emoji characters.

Listing out all 842 codepoints would work, but would result in a bulky and inefficient CSS file. Unfortunately, a simple unicode-range won’t work either, as Emoji symbols are strewn haphazardly across multiple locations in the Unicode spec. Thus, to generate the appropriate ranges in an efficient manner for emojistatic, we turn again to our EmojiData library, using it to find all sequential blocks of Emoji characters greater than 3 in size and compressing them to a range. Go here to examine the relevant code (it’s a bit large to paste into Medium), or just check out the results:

>> @emoji_unicode_range = Emojistatic.generate_css_map
=> "U+00A9,U+00AE,U+203C,U+2049,U+2122,U+2139,U+2194-2199,U+21A9-21AA,U+231A-231B,U+23E9-23EC,U+23F0,U+23F3,U+24C2,U+25AA-25AB,U+25B6,U+25C0,U+25FB-25FE,U+2600-2601,U+260E,U+2611,U+2614-2615,U+261D,U+263A,U+2648-2653,U+2660,U+2663,U+2665-2666,U+2668,U+267B,U+267F,U+2693,U+26A0-26A1,U+26AA-26AB,U+26BD-26BE,U+26C4-26C5,U+26CE,U+26D4,U+26EA,U+26F2-26F3,U+26F5,U+26FA,U+26FD,U+2702,U+2705,U+2708-270C,U+270F,U+2712,U+2714,U+2716,U+2728,U+2733-2734,U+2744,U+2747,U+274C,U+274E,U+2753-2755,U+2757,U+2764,U+2795-2797,U+27A1,U+27B0,U+27BF,U+2934-2935,U+2B05-2B07,U+2B1B-2B1C,U+2B50,U+2B55,U+3030,U+303D,U+3297,U+3299,U+1F004,U+1F0CF,U+1F170-1F171,U+1F17E-1F17F,U+1F18E,U+1F191-1F19A,U+1F201-1F202,U+1F21A,U+1F22F,U+1F232-1F23A,U+1F250-1F251,U+1F300-1F31F,U+1F330-1F335,U+1F337-1F37C,U+1F380-1F393,U+1F3A0-1F3C4,U+1F3C6-1F3CA,U+1F3E0-1F3F0,U+1F400-1F43E,U+1F440,U+1F442-1F4F7,U+1F4F9-1F4FC,U+1F500-1F507,U+1F509-1F53D,U+1F550-1F567,U+1F5FB-1F640,U+1F645-1F64F,U+1F680-1F68A,U+1F68C-1F6C5"

This is then dropped into an appropriately simple ERB template for the CSS file:

@font-face {
  font-family: 'AppleColorEmojiRestricted';
  src: local('AppleColorEmoji');
  unicode-range: <%= @emoji_unicode_range %>;
}
.emojifont-restricted {
  font-family: AppleColorEmojiRestricted, Helvetica;
}

When we then use the resulting .emojifont-restricted class on our webpage, we can see the improved results:

Same example, but custom font range saves the day. (try demo in your own browser: http://codepen.io/mroth/pen/cpLyK)

Yay! But unfortunately, this technique isn’t perfect. Remember those double-byte Unicode characters we talked about earlier? You may have noticed we rejected them in the beginning of our unicode-range generation algorithm. Well, it turns out they are obscure enough that there is no way to represent them in standard CSS unicode-range format. So by doing this, we lose support for those few characters in a mixed string, and we can actually only display 821 of the emoji glyphs. Win some, lose some, eh? I’ve looked long and hard without being able to find a solution, but if anyone has a secret workaround for this, please let me know! For now though, this seems to be the best case scenario.

Keeping it all up to date: chained Rake file tasks
Keeping all these assets up to date in emojistatic could be a pain in the rear when something changes. For example, add one emoji glyph image and you’ll need not just a new optimized version of it, but also new versions of the rollup spritesheets, plus minified and gzipped versions of those, etcetera. Rake file tasks are incredibly powerful here, because they allow you to specify the dependency chain, and are then smart enough to rebuild just the necessary tasks for any change. A full run of emojistatic can take 30-40 minutes from a fresh state (there’s a ton of image processing that happens), but subsequent changes build in seconds. Once you get it working, it feels like magic.

Going into the detail of complex Rake file tasks is beyond the scope of what I want to cover in this blog post, but if you do anything at all like this, I highly recommend watching Jim Weirich’s Power Rake talk, which was immensely helpful for me in grokking proper usage for this technique.
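Still, a tiny sketch conveys the flavor: file tasks declare what they depend on, and Rake rebuilds only what’s stale. (The paths and shell commands here are illustrative, not emojistatic’s actual Rakefile.)

require 'rake'

SOURCES   = FileList['emoji/src/*.png']
OPTIMIZED = SOURCES.pathmap('build/64/%f')

# each optimized image depends only on its own source image
SOURCES.each do |src|
  file src.pathmap('build/64/%f') => src do |t|
    sh "pngcrush -q #{t.source} #{t.name}"
  end
end

# the rollup CSS depends on every optimized image, so touching one
# emoji PNG rebuilds just that PNG and then this single file
file 'build/emoji.css' => OPTIMIZED do |t|
  sh "cssquirt build/64 > #{t.name}" # hypothetical CLI invocation
end

task default: 'build/emoji.css'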

Frontend Performance

Image from http://ftw.usatoday.com/2013/09/emoji-sports-art-is-the-best-kind-of-art

It took lots of attempts to figure out how to get so many transitions to occur on the screen without slowdown. My goal was to have emojitracker work on my iPad, but early versions of the site were bringing my 16GB RAM quad-core Core i7 iMac to its knees begging for mercy.

Crazy, since it’s just a webpage, right? The DOM really wasn’t meant for handling this many operations at this speed. Every single manipulation had to be optimized, and using jQuery for DOM manipulation was out of the question — everything in the core update event loop needed to be written in pure Javascript to shave precious milliseconds.

Beyond that though, I looked at a number of different techniques to try to optimize the DOM updates and visual transitions (my good pal and Javascript dark wizard Jeff Tierney was extremely helpful with this). Some of the comparisons and optimizations we examined were:

  • Utilizing explicit CSS animations vs. specifying CSS transitions. (using Javascript based animation was entirely out of the question as we needed native rendering to get GPU acceleration.)
  • Different methods of force-triggering the transition animation to display: replacing an entire element vs. forcing a reflow vs. using a zero-length timeout.
  • Maintaining an in-memory cache of DOM elements as a hash, avoiding repeated selection.

And of course, all of the various combinations and permutations of these things together (as well as the 60eps capped event stream mentioned in the backend section versus the full raw stream). Some might work better with others, and vice versa. The end user’s computer setup and internet connection also would play a factor in overall performance. So what combination would get us the absolute best average frames-per-second display in most environments?

To test this, all methods are controlled via some variables at the beginning of our main Javascript file, and the logic for each remains behind branching logic statements in the code. As a result, we can switch between any combination of methods at runtime.

A special benchmark page can be loaded that has test metrics visible with a benchmark button. The additional JS logic on that page basically handles stopping and restarting the stream for a distinct period of time using every possible combination of methods, while using FPSMeter.js to log the average client performance.

Menu bar during a benchmark performance test, showing the current animation method and FPS.

Upon completion, it creates a JSON blob of all the results for viewing, with a button that will send the results back to our server for collation.

Benchmark results as JSON blob.

This gave me an easy way to ask various people to exhaustively test how it performed on their machines in a real-world way, while getting the results back in a statistically relevant fashion.

If you’re interested, you can check out the full test suite logic in the source.

(Oh, and by the way, the overall winner in this case ended up being using the capped stream, cached elements and zero-length timeouts. This is probably not what I would have ended up choosing based on testing on my own machine and gut intuition. Lessons learned: test your assumptions, and sometimes the ugly hacks work best.)

In the future, I’m almost certain I could achieve better performance by using Canvas and WebGL and just drawing everything from scratch (ignoring the DOM entirely), but that will remain an exercise for another day—or for an intrepid open source contributor who wants to send a pull request!

Deploying and Scaling

The first “soft launch” for Emojitracker was on the Fourth of July, 2013. I had been working on emojitracker for months, getting it to work had consumed far more effort than I had ever anticipated, and I just wanted to be done with it. So I bailed on a party in Red Hook, cabbed it back up to North Brooklyn, and removed the authentication layer keeping it hidden from the public pretty much exactly as the fireworks displays began.

https://twitter.com/mroth/status/352975279897067520

Perhaps this stealth approach was a bit too stealth, because the attention it received was minimal. A couple of friends told me it was cool. I pretty much forgot about it the next day and left it running, figuring it’d be like many of my projects that just toil away for years on their own, chugging along for anyone who happens to stumble across them. But then…

One crazy day

Fast forward about a month. I had just finished getting a fairly large forearm tattoo the previous night, and I was trying to avoid using my wrist much to aid in the healing (ideally, avoiding the computer entirely).

Over morning espresso I noticed the source had picked up a few stars on GitHub, which I found interesting, since it had gone fairly unnoticed until that point. Wondering if perhaps someone had mentioned it, I decided to do a quick Twitter search…

Oh shit.

It was certainly out there. Just to be safe I spun up a second web dyno. Within an hour, emojitracker was on the front page of Buzzfeed, Gizmodo, The Verge, HuffPo, Digg… when it happens, it really happens fast, and all at once. Massive amounts of traffic were pouring in.

https://twitter.com/emojitracker/status/360807967232229376

Here’s where Heroku’s architecture really saved me. Although I had never put much initial thought into multiple servers, their platform encourages developing in a service-oriented way that you can naturally scale horizontally. Adding a new web server was as simple as a single command, and it would be up and serving traffic in under a minute, with requests load balanced across all your available instances. Press-driven traffic spikes go away almost as quickly as they arrive, so you’re going to be scaling down as often as you scale up.

Even better, you pay per-minute for web dyno use, which is really helpful for someone on a small budget. I was able to have a massive workforce of 16 web servers during the absolute peaks of launch craziness, but drop it down when demand was lower, saving $$$.

By carefully monitoring and adjusting the amount of web dynos to meet demand, I was able to serve tens of millions of realtime streams in under 24hrs while spending less money than I do on coffee in an average week.

Riding the Wave: Monitoring and Scaling

I primarily used two tools to monitor and scale emojitracker during the initial wave of crazy.

The first was log2viz.

log2viz

Log2viz is a Heroku experiment: essentially a simple web visualization that updates with the status of your web dynos based on the last 60 seconds of app logs.

I was also periodically piping event data into Graphite for logging purposes.

Graphite charts during launch day.

In order to see the total size of the pools, we want each web streaming server to report independently and have Graphite roll those numbers up. This can be a bit tricky on Heroku because you aren’t going to have useful hostnames, but it turns out you can get the short-form dyno name by accessing the undocumented $DYNO environment variable, which is automatically set to reflect the current position of the dyno, e.g. web.1, web.2, etc. Thus you can wrap Graphite logging in a simple method:

# configure logging to graphite in production
def graphite_log(metric, count)
  if is_production?
    sock = UDPSocket.new
    sock.send @hostedgraphite_apikey + ".#{metric} #{count}\n", 0, "carbon.hostedgraphite.com", 2003
  end
end

# same as above but include heroku dyno hostname
def graphite_dyno_log(metric, count)
  dyno = ENV['DYNO'] || 'unknown-host'
  metric_name = "#{dyno}.#{metric}"
  graphite_log metric_name, count
end

Then you can use the graphite_dyno_log() method to log, and then query in graphite for web.*.stat_name to get an aggregate number back.
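For example (an illustrative metric name, reporting each dyno’s pool size):

# each dyno reports its own connection pool size once per second, showing
# up in Graphite as e.g. web.1.connpool.score, web.2.connpool.score;
# query web.*.connpool.score to graph the aggregate
EM.add_periodic_timer(1) do
  graphite_dyno_log('connpool.score', settings.score_connections.size)
end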

Between these two things I was able to have a relatively discrete view of current system status and performance in realtime. You need this sort of realtime data if you’re going to be able to achieve the goal we had here, which was to rapidly scale up and down in response to demand.

I did this manually. That first evening I needed a break from intense computer usage all day, so I actually spent the evening in a bar across the street from my apartment with some friends, having a drink while passively monitoring these charts on some iPhones sitting on the table. Whenever it looked like something was spiking, I used the Nezumi Heroku client to scale up instances from my phone directly. I didn’t even have to put down my drink!

If you have the extra cash, you certainly don’t need to micromanage the instances so closely, just set above what you need and keep an eye on it. But it’s nice to have the option if you’re riding out a spike on a budget.

People have experimented with dyno auto-scaling, but in order to implement it for something like this, you’ll need to have a really good idea of your performance characteristics, so you can set appropriate metrics and rules to control things. Thus, it’s better if you are operating a stable service with historical performance data — it’s not really a realistic option for the pattern of totally obscure -> massively huge suddenly and without any warning.

Things I’d still like to do

There are a few obvious things I’d still love to add to Emojitracker.

Historical Data
This should be relatively simple; I just need to figure out the storage implications and the best way to structure it. Showing trend-lines over time could be interesting to see!

Trending Data
Right now the only way to see when things are trending is to eyeball them, but this is a natural thing for Emojitracker to highlight explicitly. This may actually be a prime application for bitly’s ForgetTable project, so hitting up my alma mater may be the next step.

Alternate Visualizations
Emojitracker does have a JSON API, and the SSE streams don’t require authentication. I’d love to see what more creative folks than myself can come up with for ways to show the data in interesting ways. I’d be happy to work with anyone directly who has a cool idea that requires additional access.

Remember, emojitracker is open source, so if any of the above projects sound interesting to you, I would love collaborators!

Reception and conclusions

Fan Art, via @UnbornOrochi

So was it worth it?

For me creating emojitracker was primarily a learning experience, an opportunity for this non-engineer to explore new technologies and push the boundaries of what I could create in terms of architectural complexity.

Still, it was incredibly gratifying to see all the positive tweets, the funny mentions, and the inexplicable determination of people to drive up the standing of poor LEFT LUGGAGE or rally for their latent scatological obsessions.

(Since I’ve been told I’m supposed to keep track of press mentions, I’ll post the press list here, mostly so I can ask you all to send me anything that I may have missed!)

The best part has been the people I’ve met through Emojitracker I may not have otherwise. At XOXO one guy came up to me to introduce himself and tell me that he was a big fan of Emojitracker. Suddenly, I realized it was Darius Kazemi (aka @tinysubversions), an amazingly prolific creator of “weird internet stuff” whose work I’d admired for quite some time.

I’ve had the opportunity to work professionally on some amazing things (with amazing people!) in the past. But it was at that point, for the first time, that I felt that what I had previously considered my “side projects” should now define my career moving forward, rather than the companies I’d worked at.

I know, and have always known, that Emojitracker is a silly project, one with dubious utility and requiring a bit of lunacy to have spent so much time and effort building. Still, for all the people who saw it and smiled, and may have had a slightly better day than they would have otherwise — it was worth it.

For that reason, I hope that this braindump of how I built Emojitracker will help others to create things that are worth it to them.

ENJOYED THIS ARTICLE?: You might also enjoy the followup post enumerating all the changes involved in scaling over the next 1.5 years here: “How I Kept Building Emojitracker”

Epilogue: Emoji Art Show!

The Eyebeam Art and Technology Center gallery.

I’m thrilled to announce that Emojitracker is going to be featured in the upcoming Emoji Art and Design Show at Eyebeam Art & Technology Center, December 12-14th 2013.

I’m working on an installation version of it now, and there may be a few surprises. Hope to see you there if you are in the New York area!

https://twitter.com/kittehmien/status/405528153151397888

xoxo,
-mroth

P.S. I’m publishing this using Medium as an experiment. If you want to keep up to date with my future projects, the best way is to follow me on Twitter at @mroth.


Matthew Rothenberg

Artist + hacker. Made @emojitracker & other internet detritus. Past lives: @flickr, @bitly, @polaroid, @khanacademy.