Part 1 provided an overview of the meteoric rise of Pokemon Go and the laundry list of growing pains it suffered when its (very insecure) API was cracked mere weeks after release. Part 2 focuses on said cracking and on the eventual reverse-engineering of the API, and takes a look at a few of the methods by which PoGo developer Niantic’s security measures were implemented and circumvented.
Niantic’s first, and possibly largest, misstep was releasing PoGo into the wild without proper certificate pinning, a rather complicated subject that I’ll attempt to explain very briefly. To start, PoGo protects its network traffic with SSL (Secure Sockets Layer) certificates, the same security protocol that guarantees the safety and legitimacy of HTTPS sites. Under SSL, third-party Certificate Authorities (CAs) help verify that the website or server you are connecting to is the website or server it claims to be. But the safeguards afforded by SSL, while generally sufficient to protect against outside parties trying to intercept your network traffic, do little to prevent you from intercepting your own. So by setting up a MITM (Man-in-the-Middle) attack “on themselves” (installing their own trusted certificate on their own devices), hackers were able to capture the data being sent out from their PoGo app to Niantic’s servers.
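To see why self-interception works, here is a toy sketch (not Niantic's actual logic, and the CA names are placeholders) of what CA-only validation amounts to: any certificate signed by a CA the device trusts is accepted, so a player who adds their own proxy's CA to their phone's trust store sails right through.

```python
# Toy model of CA-only SSL validation: the check only asks "was this
# certificate signed by a CA my device trusts?" -- it never asks whether
# the certificate is the one the app's real server actually uses.

TRUSTED_CAS = {"DigiCert", "GlobalSign"}  # stand-ins for the OS trust store

def validate(cert: dict, trusted_cas: set) -> bool:
    """CA-only validation: accept any cert issued by a trusted CA."""
    return cert["issuer"] in trusted_cas

niantic_cert = {"subject": "pgorelease.nianticlabs.com", "issuer": "DigiCert"}
proxy_cert   = {"subject": "pgorelease.nianticlabs.com", "issuer": "mitmproxy CA"}

print(validate(niantic_cert, TRUSTED_CAS))                   # True
print(validate(proxy_cert, TRUSTED_CAS))                     # False -- until...
print(validate(proxy_cert, TRUSTED_CAS | {"mitmproxy CA"}))  # True: the owner
# of the device added their own proxy CA, and the MITM now goes unnoticed.
```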
This data was found to be written in Protobuf (Protocol Buffers), which Wikipedia tells me is an open-source data serialization format created by Google, used to convert complex objects into compact sequences of bytes. Being open source and relatively popular, the Protobuf-formatted data was easily interpreted. The next step was simply to compile a library of all the signals and requests sent by the official PoGo app and put them together to essentially recreate the game’s API, which was subsequently used for millions of server-destroying bots and scanners.
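Part of why the captured traffic was so legible is that Protobuf's wire format is simple and publicly documented. As a small sketch (the field numbers here are illustrative, not PoGo's real message schema), each field is a tag byte followed by a base-128 "varint" payload:

```python
# Hand-rolled sketch of Protobuf's wire format for integer fields.
# Each field is encoded as: tag = (field_number << 3) | wire_type,
# followed by the value, both as base-128 varints.

def encode_varint(n: int) -> bytes:
    """Base-128 varint: 7 bits per byte, high bit set on all but the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_number: int, value: int) -> bytes:
    """Encode a wire-type-0 (varint) field: tag varint, then value varint."""
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# The classic example from Google's own docs: field 1 set to 150
print(encode_field(1, 150).hex())  # "089601"
```

Recognize those tag bytes in a packet capture and you can recover the whole message structure, which is essentially what the reverse engineers did at scale.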
The aforementioned certificate pinning was one of the major security measures implemented by Niantic in its attempt to crack down on unofficial APIs. Pinning adds an additional but important step to the SSL certificate verification procedure. Without certificate pinning, certificates were only checked to have been signed by a valid Certificate Authority, so MITM attacks were possible as long as the attacker had a CA-signed certificate. Certificate pinning adds a new requirement: the certificate presented by the server must match a copy of Niantic’s own certificate bundled inside the PoGo app. If the certificates do not match, the app refuses to start the game.
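A minimal sketch of that extra step, with placeholder byte strings standing in for real DER-encoded certificates: the client compares a fingerprint of whatever certificate the server presents against a fingerprint baked into the app, so even a CA-signed MITM certificate fails.

```python
import hashlib

# Sketch of a certificate pin check. In a real app the pinned value is a
# hash of the server's actual certificate (or its public key); here the
# "certificates" are just placeholder bytes for illustration.

PINNED_SHA256 = hashlib.sha256(b"niantic-der-certificate-bytes").hexdigest()

def pin_check(presented_cert_der: bytes) -> bool:
    """Accept only the exact pinned certificate, regardless of which CA
    signed the one being presented."""
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_SHA256

print(pin_check(b"niantic-der-certificate-bytes"))  # True: the real cert
print(pin_check(b"mitm-proxy-certificate-bytes"))   # False: CA-signed or not
```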
Unfortunately for Niantic, the protective value of certificate pinning was severely diminished once their API had already been cracked. With most, if not all, of the app’s communication protocol already deciphered, hackers were able to write a relatively small amount of code (under 100 lines) that hooked the app’s pinning check and overwrote its “chain” parameter, the list of certificates being verified, with the value of the official PoGo certificate. So instead of seeing the certificate of the unofficial API or the MITM proxy, the checkServerTrusted function would only ever see the real, valid certificate. Certificate pinning was broken the same day it was implemented.
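The idea of that hook can be sketched in a few lines. The real bypass hooked the app's compiled code on-device; this Python analogue (all names illustrative, not PoGo's actual functions) shows the trick: wrap the check and substitute the genuine certificate for whatever was actually presented, so the check can only ever pass.

```python
# Conceptual analogue of the pinning bypass: intercept the pin check and
# feed it the real certificate instead of the one actually presented.

REAL_CERT = b"official-niantic-certificate"

def check_server_trusted(chain: list) -> None:
    """Stand-in for the app's pin check: raise unless the pinned
    certificate leads the presented chain."""
    if chain[0] != REAL_CERT:
        raise ValueError("certificate pin mismatch")

original_check = check_server_trusted

def hooked_check(chain: list) -> None:
    # The hook ignores the presented chain entirely and substitutes
    # the official certificate, so the original check always passes.
    original_check([REAL_CERT])

check_server_trusted = hooked_check

check_server_trusted([b"mitm-proxy-certificate"])  # no error: pin defeated
```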
Niantic’s failure to implement certificate pinning at the game’s release cost them heavily, as a successful implementation from day one would have made reverse engineering the API significantly harder. Likewise, there was a checksum-verification mechanism (codenamed Unknown6) that was built into the game from day one but wasn’t switched on until August, which could also have greatly deterred hackers; that particular issue, however, is a little too far beyond the scope of this author’s understanding. So that about wraps up Part 2 on certificates and the cracking thereof. In Part 3, I’ll take a look at the few ways Niantic has actually been successful in battling unofficial APIs, and perhaps note some lessons you could learn from their story.