You Should Understand Cybersecurity Too (Part 1)
Not very long ago, HypeLabs was exhibiting at a conference — the WebSummit, if memory serves me right. We had a booth, as we usually do. People come and go; some engage with us and some are engaged by us, nothing any other exhibitor wouldn’t tell you. Some guy approaches our booth, as many before him, and reaches out to me directly with the usual pleasantries you see at these things. After some short initial small talk, I pitched our project: we’re connecting all sorts of devices using mesh networking, decentralizing connectivity in a secure way, using whatever technologies are already available on the device. At some point, while I’m elaborating on security concerns, he asks me a question, and as normal as that might seem, it’s the question that motivates me to share this story now:
— In how many layers of security do you guys work?
I digested the question for a couple of seconds and tried to make sense of it. It triggered more thoughts in my brain than I can realistically explore here, and it certainly didn’t take more than a few seconds, tops. At this point I understood that I wasn’t talking with a technical person, or the question would never have been put in those terms, not by a professional. As is usually the case, I struggled with this type of reasoning. I always try to make sure that what I say is accurate from a technical perspective, but if we could define what a bad answer is for an even worse question, this would be it. I remembered from my logic studies in college that, starting from a false premise, you can deduce anything. Perhaps this is an excuse, but as I pondered my alternatives my hesitation became obvious, and I felt I had to answer. Between stuttering and mumbling “uhs” and “hums”, this is what I came up with:
— Uh, uhm… All of them.
Even I was surprised by the certainty and posture that I somehow managed to keep while spitting out words of technical barbarity. It felt like betraying my professional principles — as CTO at HypeLabs I cannot say things like that, and if I could, how would that lack of technical principle reflect on the company? I didn’t expect my answer, but I expected his comeback even less:
— Oh, wow, great. That sounds awesome!
If I could have facepalmed right there and then I would have, but professional courtesy dictated otherwise. Notice that we’re not talking about my grandmother, knitting on the couch, struggling with the concept of a smartphone. This is an individual in his mid-twenties, with a college education and a C-level position at a tech company, and that bears relevance. This is a person very likely in charge of a product — one that probably claims to be secure — that is being consumed by some audience, and this conversation took place at one of the biggest tech conferences in the world. I don’t believe someone in his position needs to understand deep technology, sure, but for tech entrepreneurs there are definitely some required basics. After all, if tech entrepreneurs don’t understand security, then who does?
I don’t mean that everyone needs to know how to apply Curve25519 in an ECDH key agreement — that’s not even what security is all about. Rather, this article is about cybersecurity as a concept. That’s what that guy should understand, and that’s what everyone should understand, even non-technical people. This is important because there’s no such thing as a secure system, especially if the people behind it don’t have a security-oriented mindset. I usually start this kind of discussion by going over security in conceptual terms, but I believe it’s not yet clear why non-technical people need to understand this — which is why I’ll start by elaborating on social engineering.
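To give a sense of the kind of detail I mean — the kind most people will never need — here’s a toy sketch of the idea behind a Diffie-Hellman key agreement: two parties combine a private secret with the other’s public value and arrive at the same shared key. Note that this uses plain modular arithmetic with tiny, illustrative parameters, not the actual Curve25519 elliptic-curve math, and it is in no way secure:

```python
import secrets

# Toy Diffie-Hellman over a small prime field.
# ILLUSTRATION ONLY: real systems use Curve25519 or similarly
# vetted parameters; these values are far too small to be secure.
p = 4294967291  # a small prime modulus (chosen just for the demo)
g = 5           # generator

a = secrets.randbelow(p - 2) + 1  # Alice's private secret
b = secrets.randbelow(p - 2) + 1  # Bob's private secret

A = pow(g, a, p)  # Alice's public value, sent to Bob
B = pow(g, b, p)  # Bob's public value, sent to Alice

# Each side combines its own secret with the other's public value;
# both compute g^(a*b) mod p and thus agree on the same key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

The point is precisely that this is plumbing: knowing it by heart doesn’t make you secure, and not knowing it doesn’t excuse you from a security-oriented mindset.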
Kevin Mitnick, in case you haven’t heard of him, is a hacker who became famous during the ’90s through a series of controversies that ultimately led to his arrest, charged with wire fraud among other accusations. There’s a movie about it, called Takedown (2000). Back then he was considered the most dangerous hacker in the world, hunted by the FBI. Prosecutors even managed to convince the judge that he could start a nuclear war simply by whistling into a phone. This led Mitnick to serve one year in solitary confinement, without bail and with no access to a phone, plus four more years in a federal prison. Mitnick wrote several books, including The Art of Deception (2002), where he describes the human factor of security, or the concept of social engineering. In fact, I’m mentioning Mitnick to borrow his stories — as you may imagine, I have none of my own, since having the FBI on my back is not among my immediate life plans.
On one occasion, Mitnick convinced a security guard at a telephone central office that he was there to collect old IDs because everybody was getting new ones — the guard happily gave him his ID, granting him all sorts of access. On another, he called Motorola and convinced an employee that he was a friend of a friend, asking her to send him the source code for the MicroTAC (an old cellular phone) — she did send him the code, but not before being granted permission by her security manager, if you can believe that. By the age of twelve he had led a Los Angeles bus driver to believe that he was doing a school project and needed the special hole punch used on public transportation at the time — after getting what he wanted, he travelled anywhere in LA for free. In one of his earliest exploits, he reportedly hacked into DEC by claiming to be one of the project’s lead developers who was unable to access the system at the time — they actually allowed him to choose a new password of his own.
What these stories have in common is that Mitnick deceived people — not machines — into giving him what he wanted. Most of the time, he didn’t gain unauthorized access by discovering flaws in the system or using complicated algorithms; rather, he tricked someone with legitimate access into providing him with sensitive information: passwords, user accounts, system details, etc. That’s what social engineering is: deceiving people into giving up information, for reasons that will probably seem extremely plausible. It’s a play on trust.
Social engineers resort to a number of tricks to deceive people, among which is establishing some familiarity — they’ll claim to be a friend of a friend, to work in the same company but in a different division, or to have something in common with the victim. They’ll also wait for the right time — a coworker on vacation, a family member traveling somewhere, or a snowstorm to justify some problem with the system. They’ll be nice. They’ll be patient. Hacking is a craft of patience. Social engineers exploit cognitive biases, behavioural patterns that humans display systematically. They’ll appeal to your concerns, worries, problems, and fears. It’s also common to instill a sense of urgency — don’t talk to your superior, enter your password now or your account will be permanently lost.
These are all reasons why becoming a successful hacker takes time. It can take days, weeks, months, even years to gain unauthorized access to a system — at least the type of hacking that matters. This obviously requires a lot of energy, so attackers rarely choose their victims randomly. When they settle on a given system, they’ll most likely already know it conceals something valuable to them, money or otherwise. Since attackers study such systems for long periods of time, by the time they resort to social engineering and get you on the other end of the line, they’ll probably know the system better than you do. This results in convincing stories. They’ll often know exactly what to say to keep your spider-sense from tingling, making you feel like you’re not doing anything wrong.
Social engineers fail more than they succeed. In fact, this craft — if we can call it that — is not hollywoodesque at all. Movie directors need fast typing, exciting scenes, and breaking satellite links. But there are no loading bars for firewall breaches that go well with a musical crescendo, or rotating 3D cubes covered in cryptic characters. The system doesn’t beep because your IP is being tracked, and what it takes to protect yourself most certainly isn’t typing “start counterstrike” into an unrecognizable command prompt — yet this is the kind of stuff we see on TV. This sort of pop culture has, in part, shrouded hacking in myths, ones that scare more than they inform. If social engineering tells us about our part in the security of computer systems, debunking these myths gives us motive, by showing what hacking really is and who the people behind it are.
The hacking myth
Hollywood is probably one of the great contributors to the misunderstanding of technology, and of hacking in particular. A hacker, in this context, is someone who works in cybersecurity — or at least practices it in some way. We’re not talking about mythical creatures that resort to divine and ancient wisdom or other inhuman qualities to break into systems. There’s technical knowledge revolving around detecting flaws, plus a human factor: the part that understands that humans are the weakest link in a system’s security chain.
This is why it’s important that everyone understands what security is, what it actually means. There’s no point in equipping your computer with the most expensive security software if an attacker asks for your password and you just gladly give it away — a chain is only as strong as its weakest link. The first line of defense is understanding what attackers look for and how they act, and therefore educating people to face technology with a security-oriented mindset. There are identifiable system flaws, surely, and we’ll go over those later, but for now what matters is understanding that people are the greatest “software flaw” of all.
It’s not false that hackers have a deep understanding of how computer systems and networks work; what is falsely reported on many accounts is how they do it and to what lengths they can go. It doesn’t take five minutes to “break a firewall”, but rather many years of study, persistence, and determination to penetrate a system. Any cyber attack involves a large set of skills, both technical and non-technical, that attackers need to master. And even then success isn’t guaranteed: after spending a considerable amount of time studying a specific vulnerability, it may turn out to have already been patched by the time of the attack.
The most technical readers may, at this point, be somewhat mad at my loose definition of hacking. Many interpretations out there attempt to segregate “hacking” according to morality: a hacker violates systems with good intentions — such as reporting vulnerabilities — while a cracker does the opposite; these are also called white hat and black hat hackers, respectively. These are definitions I can’t abide by. In short, I don’t believe in such Manichaeisms of good and evil, especially when considering intent — Julian Assange is a great example, as the outcome of his work can easily be debated in both lights. This is the case of a person who meant the best — an idealist — but ultimately ended up deemed a criminal by the US Department of Justice and hunted by the authorities instead.
Hacking is, therefore, neither good nor evil, but a skill. Many skills, even. A hacker needs to learn how to code, understand computer networks, master specific systems and, of course, social engineering, among other things. If a hacker uses those skills to steal money from large corporations and give it all away to the poor without keeping a cent, it won’t matter what the aim was, or whether it was right or wrong: it’s still hacking. Notably, the vast majority of hackers out there don’t actually exploit the flaws they find; instead, they report them, open bug reports, and in many cases even fix them themselves. This vast majority is largely composed of so-called white hat hackers.
I remember a situation in college when a fellow student and friend came to me quite scared. He had just learned about the Deep Web. He claimed to have connected to some network, where he was shown an initial page with the dummy from the Saw movies and the question “Would you like to play a game?” prompted on his screen — he immediately shut everything down, and the lingering fright was clear in his speech. In his limited interpretation, some sort of landing page — the kind shown when you first connect to a network — was an indication that “they” were onto him. He asked “how could they possibly know”, to which I replied: “Why not? They have landing pages when you connect to the WiFi at McDonald’s too”.
Notice that I’m not trying to downplay the dangers and illegalities that happen on the Internet, far from it. Perhaps my friend did venture somewhere he shouldn’t have and his scare was justified; it’s just highly unlikely. These networks are a huge part of the Internet that is not indexed by search engines, and that alone leaves a lot of margin for speculation and exaggeration. Encryption and anonymity do provide criminals with camouflage that is of interest to them, and there’s definitely a lot of bad stuff going on online — some of it known to us, such as the Silk Road — but the Deep Web is just content; it can hold anything. In fact, some estimates indicate that over 95% of the web’s data is not reachable by search engines, therefore falling under this category — and for the most part, there’s nothing wrong with that.
The same is true of hacking. A tremendous majority, one that I cannot quantify, is working to protect our systems, keeping intruders out. It doesn’t work the way Hollywood wants us to think it does, and therefore we shouldn’t base our perception on these distortions of reality. Bad people exist, yes, but they exist everywhere, and being protected also means being aware — it’s not something we can just run from in fear. Despite the many dangers that lurk online, we are our own measure of safety, by not sharing sensitive information and not trusting anyone online. An attacker cannot influence our lives beyond the point at which they revolve around technology.
This video is a good example — people are led to believe that their minds are being read, when in fact all the information is coming from their Facebook accounts. The information is public, anybody can access it, and the profile owners were the ones who put it there themselves. This can, as the video claims, be used against us, but only to the extent that we publish it and let our lives depend on it. It also shows how prone we are to believing in distortions of realities we don’t fully understand, and that, I believe, is more dangerous than the thing itself. A great part of being safe is knowing how we can be attacked, as identifying possible scenarios contributes a great deal to staying safe.
It should now be somewhat clearer that our perception of cybersecurity matters because so does our role in protection. Grasping these concepts does not require deep knowledge of technology, but then again neither does protecting ourselves. The first step in security is thus having a security-oriented mindset: don’t share sensitive information, don’t rush into things, don’t trust people, double-check your sources. It’s important that we don’t just follow these guidelines, but assimilate them into our day-to-day behaviour. Remember that it’s unlikely someone hacked into your Facebook account; more likely, you trusted someone with your password. It’s also important that we clear our minds of Hollywood distortions — that hacking is some sort of magical endeavour that enables others to compromise our systems at will. It isn’t, and they won’t. What remains to be seen is the sort of tools attackers use, ones they can use to undermine that commitment.
The first part of this article was about the human factor of security — the user’s role in it. In the second part I’ll go over some of the tools and techniques that attackers use — the attacker’s side of the story. For the third I’m reserving a discussion of how systems can be designed to circumvent most of it — the system’s accountability. Finally, in the fourth part I’ll share some thoughts on where security is headed and what we should expect from the future.
PS: I can’t, in all conscience, close this article without making note of one particularity. Recall that Mitnick was sentenced to one year in solitary confinement under the claim that he could start a nuclear war by whistling into a phone? Well, he couldn’t. This is an example of how grave it can be for people in power to misunderstand technology, in this case more than most. Can we really compare the CFO of a company giving away sensitive information with a judge having the ability to wrongfully sentence away a year of someone’s life based on a myth? If everyone should understand cybersecurity, to what extent can we demand that people in power take on that responsibility? Feel free to share your thoughts in response.