It has entered the general consciousness by now that North Korea hacked Sony. The FBI has said they’re sure, and as the classic example of a story too good to check, the media has run with it. Journalists use it as a casual example; it’s in the first draft of history to be sure. It’s turning up as a cultural trope, something we all know. It’s the setup for jokes. But we’re not sure that it happened that way at all. The evidence is not only thin, it points in far too many directions — many of them not at North Korea. The truth is, like most hacking attacks, we will probably never be sure who hacked Sony.
This is because we’ve reached a point in history where it’s easier to find earth-sized exoplanets than it is to figure out who’s to blame for most hacks.
This is true despite having police around the world focusing more time, energy, and money on these attacks. It’s true despite a global network of CERTs (Computer Emergency Response Teams) from governments, universities, and corporations around the world working to prevent and counter attacks. It’s true despite the fact that intelligence services are spying on absolutely everything all the time. This is because these attacks are easy to do, often even automated, and because distinguishing malicious network activity is damn hard, and functionally impossible when you have to sift through the whole net.
It’s so much easier to talk about the DPRK and Sony because most hackers can’t be located or reached by law enforcement. Making it North Korea gives us something to do and something to think about. Blaming the DPRK is a preferable story to admitting we can’t do anything about the Sony hack, or to asking why Sony is always a target. Honestly, at some point it became hard not to blame Sony for their security woes, and that point was 2011. Fool me once, shame on you; fool me more than 20 times in one 12-month period alone, shame on me.
Of course, Sony was never trying to get hacked. The hardest part of security has always been detecting when you’ve been compromised. This is profoundly different from the world of physical security our minds evolved to handle. Even disease doesn’t matter to us until we see symptoms, and computers don’t cough or get fevers. Many people spend years compromised and don’t know it.
Even deciding what counts as a compromise or malicious activity is hard. If you extend “hacking” to include software you don’t know about executing on your computer to slurp up data about you and sell it off to an interested party, then almost every website you visit these days counts as malware. At the other end of the spectrum of outright malice, intelligence services attack nations, both enemy and friend. Organized crime attacks banks, casinos, and wherever else it can find money. Almost no one gets caught. Even police departments pay ransomware authors to get their files unlocked. In most of these cases, no one will ever figure out who did it.
So Who Done It?
On the surface, the idea of attributing these deeds seems simple. Somewhere out there, someone did a thing, and they are responsible for the thing they did. If they get caught, they’ll get punished.
On the net, it all has the feel of Old West justice, down to wanted notices for colorfully named characters.
But if you’re living anywhere more complicated than an Agatha Christie novel, like the real world, attribution is never as simple as a who done it. Attribution is always situational. “Who done it” depends on the outcome you’re looking for. For instance, if a child falls off a playset, did the child do it? The child’s caretaker? Or the playset manufacturer? How we answer that question often depends not only on the details of the event, but also the position of the reviewer. A mother and a state health and safety regulator could have two different but equally true answers to that question. Attribution requires investigating and understanding the situation. That’s hard enough to do in the physical world. On an ever-shifting digital network, investigation and understanding get lost fast.
Network attribution is hard to do.
In January a new user by the name of “King Zortic” went on Twitter and took two planes out of the air by tweeting to the airlines that those planes had bombs on them. Without even chatting with King Zortic (though I did a bit on Twitter at the time) I can tell you why he did it — because the idea of grounding planes with a few tweets is so ridiculous, how could you not do it? From King Zortic’s point of view, the fault didn’t lie with him; it was with a system so stupid that he could do what he did and be taken seriously. This argument says that it’s up to the rest of us to adjust our social norms to the reality of anonymous speech on a global network. But I suspect he would put it more like: ‘I did it because LOL UR SO STUPID’. It amounts to the same argument, and while no sane person would support making that argument the way King Zortic did, the problem remains. In the meantime, the FBI is attempting to attribute these tweets to the real person behind the “King Zortic” name and put him or her in prison. King Zortic is right in a very important way — this is hard, and enforcement doesn’t get us closer to an answer to the underlying problem: contemporary culture has no mechanisms for dealing with anonymous networked speech.
King Zortic and I chatted about how you could make the attack much worse. You could script it and spawn thousands of little robotic King Zortics over the net, tweeting from other people’s hacked computers, issuing ridiculous bomb and death threats until everything ground to a halt. King Zortic told me he had thought of that, but decided it was too much work. He settled for grounding a couple of planes and sending the police to a random address (because lulz, again). Law enforcement and someone calling themselves a “cyber expert” claimed King Zortic would be caught by his IP address on Twitter, but I doubt he will be. All he had to do to obfuscate himself was use a few tools like Tor, and the trail would quickly go cold for the FBI.
People like to point out that Tor allows for illegal activity like this, mistaking Tor for the cause of the behavior. That does happen, but it’s overblown. Most hackers doing illegal things don’t need tools like Tor. What’s faster and more reliable than Tor is hacking a bunch of computers with naive users and using them as a platform for further attacks. Since at that point you’re already setting out to commit a crime, you might as well go for the faster, easier option — hacking, or in some cases renting, offshore or compromised computers. These days, when people realize they’ve been hacked, the path back to the hacker usually dead-ends at a fellow victim’s computer, scrubbed of evidence before anyone could get to it.
Sometimes the hacker has crossed so many borders that jurisdictional negotiation to get to them might take longer than the lifetime of the hacker.
Everyone hacking knows that if you pick your jurisdiction carefully, you have nothing to worry about. By 2012 every American kid with a bit of sense and an interest in hacking was crawling all over the Syrian internet and telecom servers, hacking the crap out of everything, because they knew no one would bother them for doing it. Syria may have been hacked as often as Sony.
Right now from anywhere in the world, you can look for the ideological opponents of whatever government is currently running your electricity grid and pretty much hack whatever you want. No one is going to stop you from hacking the countries they like the least. This often means pure criminals and thrill-seekers are much less likely to get pursued than hacktivists, who are usually stuck striking close to home for political reasons.
Without good network forensics, investigation usually relies on circumstantial evidence, like encoding languages and the code itself. Sometimes this really helps — sometimes organizations or people leave their names in the malware. But often people do remember to remove their names from malware, or they stick in the names of people they want to fuck with. As for language encoding, if you go into your computer’s settings right now, you can change your encoding language to Chinese, or Korean. Congrats! You’re now only a malware search away from getting your hacking attributed to a nation-state.
As for using code for attribution, this gets less straightforward by the day. If we accept the metaphor that malware is a kind of munition, then this is the only kind of bomb where your enemy gets to keep everything you send their way, forever. They get to study it and play with it, even if it was successful. And not just your enemy: your bombs will travel all over the world and be ripped apart, studied, reported, incorporated into other bombs, and eventually used back on you again. The better the technology of the bomb, the more everyone has to play with and develop into ever more terrible versions of itself. This is why the emphasis intelligence services and governments place on attacking over defending leaves so many security experts in a near-permanent state of headdesk.
Usually, when people get caught, it’s because they’ve directly or indirectly admitted to what they did or who they are in public. When hackers don’t do that, they’re pretty hard to catch.
Attribution is always political.
No one in the world wants to attribute any problem in a way that makes them responsible, yet powerless to do anything about it. And no one does, even when it’s the most true story you can tell.
Even when you can pick out exactly who did something, fitting that information into the larger picture is never straightforward. Take Blackshades, a remote administration toolkit, or RAT. Blackshades was marketed as a way of attacking an unwilling victim’s computer and taking control, stealing their data and invading their life. Bad stuff, and obviously bad guys, right? Except it’s not so simple. RATs are everywhere; people use them to remotely access computers all the time. Technically, there wasn’t much difference between Blackshades and any of the other dozens or hundreds of remote administration toolkits all around our lives, keeping our corporations running, our networks up, or letting us apply Windows updates to our computer-illiterate relatives’ machines. So what started as a gimme (arrest sleazy people selling Blackshades) suddenly gets more complex. Shouldn’t it be the people who used Blackshades to commit crimes who get arrested? Does this just make a certain form of marketing illegal? What should you be charged with if you market a tool for illegal use? Should marketing that a crowbar can be used to break windows you don’t own be a punishable crime?
On the other hand, since Blackshades was marketed for criminals, does that mean providing tech support made the company accessories to every crime committed by Blackshades users? But if they are, why isn’t Microsoft?
If it’s a matter of what Blackshades’ creators knew, then it’s pretty easy to avoid that attribution and do the same thing they did, even marketing to the same crowd, albeit slightly more subtly. That implies that we can’t attribute these hacking attacks to Blackshades alone, only to a vague sense that talking about hacking on a black webpage somehow edges you into criminality.
Yet it’s got to be true that many of those people wouldn’t have downloaded the same tool on a white background, and might never have done that thing to their best friend’s ex-girlfriend’s computer. So then it is Blackshades’s fault for leading them into a crime they never would have committed, since they didn’t have the technical chops to use a straightforward RAT.
The FBI is telling the attribution story in a way that serves their needs. To some degree Blackshades is guilty, more so than their customers, because it’s not too hard to go arrest the people who sell Blackshades, not because what they are selling is a bad or illegal technology. RATs aren’t bad or illegal. This attribution is a lot more useful than it is in any profound way true. It relies on public ignorance about computers to be meaningful. If we all knew what RATs were, where to get them, and how to use them, this marketing would be more silly than dangerous.
Attribution is about what you’re trying to accomplish.
Who we say done it is important not for some great accounting of abstract justice, but because good attribution is required for making the changes that prevent the bad thing from happening again.
This is why attribution might be the most broken thing about network security right now. If we decided that what we wanted was to make the network more secure, instead of attributing hacks to the endless supply of skiddies the internet will give us until we die, we’d attribute the way consumer protection advocates have since the late 19th century, when that Old West was getting tamed, and make it about poor manufacture.
When someone gets mauled by a bear we don’t blame the bear. When someone gets electrocuted by a toaster we don’t blame the electricity. This isn’t because these things are more or less guilty than a kid mailing a RAT to someone as an attachment, it’s because there’s no point in blaming bears and electricity. With bears, we usually blame park services for not providing bear boxes or people for not using them. We make safety standards for toaster manufacturers and sue the crap out of those manufacturers if they don’t follow them, which they pretty much always do, these days.
For a while we shot all the bears who mauled people, but that was stupid and counterproductive. We are discovering the same is true, if for different reasons, about arresting most hackers and pot smokers. It doesn’t help the public, and it doesn’t help the hackers or the pot smokers.
Nobody in the computer industry wants to hear that they should be on the hook for making the net more secure — much like no one wants to hear that it’s society’s job to learn how to deal with shitty anonymous speech. But if you want to change the bad outcomes of systemic problems, coming up with systemic answers requires systemic attribution. And don’t ever expect enforcement bodies like the police to look for systemic answers. Prevention is by necessity an existential threat to retaliatory actions. “Bear Hunter” would be a pretty cool handle for anonymously closing malls with tweeted bomb threats, but it’s not a job title anymore, and no one in their right mind wants it to be. Cybercop can head in the same direction with a little more Cyber Consumer Protection, even if that sounds much less sexy. (And let’s be honest, Cybercop is much lamer than Bear Hunter anyway.)
Before this gets called victim blaming, remember that “victim blaming” is itself a way of attributing: it describes where we believe the agency is. The ecology of malware has more in common with simple biology than with drunk frat boys. Telling corporations to secure their damn networks and products isn’t victim blaming any more than telling doctors to wash their damn hands is.
As doctors, preschool teachers, and the secretive creators of Stuxnet found to their dismay, infectious agents gonna infect.
Teaching opsec to normal people gets called victim blaming sometimes as well. I often hear that telling people things like “don’t put your nude pics in the cloud” is the digital version of saying “don’t wear a short skirt.” But this misunderstands the situation. Given the present state of computer security, it’s the digital version of saying cook your pork, vaccinate your kids, and don’t drink non-potable water. And learn enough about how your computer works so that you don’t become the platform for the devastation of someone else’s life and fortune.
Attribution doesn’t just mean blaming.
We forget that attribution can be a good thing, too. Attribution is a whole package. Choosing how to interpret attribution is about being useful, not about punishing or rewarding per se. If we attributed many of the recent bugs in Free/Open Source Software to the people who accidentally wrote them in, our investigations would turn up a software ecosystem in need. Obviously the answer isn’t to jail the overworked and underpaid authors of something like OpenSSL, the origin of Heartbleed. It’s to support them, bring more resources in, give them the occasional vacation, and make sure their code gets audited. When we gave up shooting bears we established natural spaces for them. When we started requiring toasters to not kill people we also created engineering schools where knowledgeable toaster engineers could be trained to engineer toasters correctly. Things got better for both bears and toaster owners.
Eventually we need to figure out what national parks and EE schools look like for the net. That answer will never come from talking to law enforcement. What the law enforcement approach got us in the 19th-century West was genocide and ecological catastrophe.
We’ve done this before with our technologies. We can stop treating the internet like the Old West, and stop blaming countries that don’t have transistor radios for our security woes. Both providers and the public can learn to deal with network technology in a more realistic way with standards of safe manufacture and responsible use. And until we do, it’s all going to stay broken.