You’re Being Watched — And Not Just By The Government
The NSA is tapping our phones and reading our emails, but they may not be the only ones. In fact, smart criminals can turn almost any device you own against you.
ANG CUI STANDS ON THE STAGE at the annual Chaos Communication Congress in Hamburg, Germany. It’s December 2012, and although it’s freezing outside, he looks comfortable in dark jeans and a blazer over a colorful T-shirt. He’s an accomplished security engineer, but, he tells the audience, his greatest technological achievement is creating the gigantic PowerPoint presentation they are about to witness.
“If we can power through the PowerPoint without my laptop bursting into flames, I think we’ve really accomplished something,” he says. The audience laughs and cheers.
Cui and his colleague, Michael Costello, are in Hamburg to explain precisely how they have managed to break into a popular phone made by Cisco Systems — a piece of hardware found on desks in thousands of offices all around the world. Their hack, Cui says, turns the phone’s receiver into an always-on listening device.
Cui may work in computer security, but he doesn’t really focus on computers: he worries about the multitude of devices that surround them. Most of the gadgets that populate our lives — the printers, the phones — don’t run anti-virus software, and Cui’s spent the past five years breaking into them to prove that they should.
The crowd in Hamburg is rapt. Cui’s presentation offers a glimpse into the frightening new world of digital espionage. If hackers get into your printer, he says, they can get a copy of everything you print. If they hack into your video conferencing system, they can monitor your every move. And, he explains to the audience, if they hack into your office phone, they can record every single thing you say.
It’s a few weeks before the announcement, and I’m getting a sneak preview of what Cui and Costello have done. We meet at Cui’s office at Columbia University in New York—he is a fifth-year Ph.D. student at its Intrusion Detection Systems Lab.
When I arrive, he hands me a small electronic device. He’d built it himself: it’s a mess of circuitry and electrical tape about the size of a credit card. There is a small phone cord coming out of one end.
He tells me it is called the th1ngp3wn3r (in hackerspeak, to pwn something means to own it, to take control of its functions). Cui plugs it into an Internet-connected Cisco Unified IP phone that sits on his desk.
“This is something that you would take with you to a job interview,” he tells me in the rapid-fire cadence of a hip, educated New Yorker. “You plug it into the back of the phone while nobody’s watching. The hack takes about a minute.”
Next he pulls an Android phone from his pocket. He sends a piece of malicious software from his cell, through the th1ngp3wn3r, and onto the Cisco phone. The screen in his hand flashes white text scrolling over a blue background. Lights blink on the th1ngp3wn3r. Cui sets the cell phone down on his desk. Something is happening.
On Cui’s nearby laptop, lines of information begin popping up every few seconds. It’s data coming from the phone; the malware is active, and has already rewritten some of the fundamental code that manages the desk phone.
“Now the phone is sending out a whole bunch of traffic,” he says. After some typing and clicking on his laptop, Cui opens a file and presses play. A recording of his words from just a moment ago, repeated from his laptop’s speakers, rings out around the office.
“Now the phone is sending out a whole bunch of traffic,” it parrots back.
In just a few seconds an ordinary office phone has become a listening device, and the room is now bugged.
And although he was in the same room as the phone when he activated the bug, he could have been anywhere. “This thing could be in Taiwan and it would still send,” he says.
What if he takes his th1ngp3wn3r to a large company? He could connect it to a Cisco phone in the lobby and copy over his malware within minutes.
“And from there, you can actually get the phone to own other phones,” he says. “So you can take over the whole company.”
Cui shows me photographs of the same Cisco phones in other places: in a conference room on Air Force One, on the president’s desk in the Oval Office, and behind a technician at an Illinois water pump facility that was allegedly hacked in 2011.
“These things are all over the place,” he says.
Taking the phone that sits on the desk of the world’s most powerful man and turning it into a listening device was not easy. Cui and Costello had to reverse engineer the phone’s firmware in order to tell the microphone to stay on all the time. The process took years, but was accomplished with limited resources.
“My budget was one person, 150 dollars, a laptop, and licenses to software that’s available to everyone,” Cui explains. “That’s just, you know, tools of the trade.”
He pauses for a moment. “What can you do with a billion dollars of funding?”
The answer to that question may seem obvious in the wake of recent revelations about the activities of the NSA. The agency has much more than a billion dollars of funding — its budget is classified, but most estimates put it at around $10 billion a year — which it is using, in part, to conduct a trawling operation that collects vast amounts of data from communications companies, Internet services, and email providers.
But while the extent of the NSA’s surveillance programs plays out in the media, in the courts, in hotels in Hong Kong and airports in Moscow, there is no reason to believe that American intelligence operatives are the only ones sucking up our personal data. And understanding how to stop this kind of surveillance is what Cui has spent his career working toward.
Red boxes and rebellion
ANG CUI’S PATH began in rebellion. He was born in China thirty years ago, but grew up on the Upper West Side of Manhattan. As a young teenager in mid-’90s New York, he immersed himself in what was a sort of golden age for hacking. Hackers and phone phreaks could still trick payphones into mistaking tones played into their receivers for coin deposits; the world was largely unencrypted, and it was a playground for smart kids like Cui.
“I was a little punk kid running around calling people,” Cui says. He’d use a device called a Red Box to play tones into payphones to call voicemail message boards run by 2600, the magazine for hackers. On the voicemail boxes, hackers would leave messages for each other. “There were a lot of people trading information about random stuff. ‘Oh, I own a server in NASA.’ NASA was always getting picked on in those days. It still is,” he said. “People would say, like, ‘I’ll trade you a Red Box for an account on NASA.’”
One early brand of mobile phone, StarTech, had handsets that could be reprogrammed to listen in on other cell phone conversations. They were easy to find in used cell phone bins in Chinatown, selling for $15. “And you’d just walk around and listen to people’s stupid conversations,” he says. But his life as a teenage spy was ultimately frustrating: the tawdriness of people’s conversations disappointed him.
“It blew my mind that this thing works, and works reliably, everywhere on the planet, and the bulk of humanity uses it to say, like, ‘What do you want for dinner tonight? Let’s call Chinese food.’”
Cui’s formal education in computer science began at Stuyvesant High School. It continued at Columbia, as did his rebellious streak. In his senior year, Cui organized a controversial beauty pageant, and was also trying to start an online business from his dorm room. The website — a collection of questions that companies ask at job interviews — ended up violating the school’s bandwidth quota. He was put on probation and asked to write an essay on the importance of following network usage policy.
“But I never wrote it,” he says. “I was too busy running the beauty pageant. So I graduated on probation, technically.”
After college, Cui went to work in the financial sector as a security engineer. That was when he realized there was no way to secure the printers and IP phones that sat in every office. Instead, he had to rely on vendors to secure the devices before he ever received them.
“You can’t install software on it; you’re not allowed to look at the code it runs,” Cui says. “Intuitively, knowing about security, you realize this thing is completely vulnerable. It can definitely be attacked.”
For three years, Cui considered the problem and grew increasingly frustrated. “The reason I came back to Columbia to do my Ph.D. is I wanted to tear apart everything that I wasn’t allowed to touch before,” he explains.
Professor Sal Stolfo, who runs the Intrusion Detection Systems Lab at Columbia University, accepted Cui into the program five years ago. “His application was very late but very interesting,” Stolfo told me later, by email. “And a bit of his reputation for fun and harmless shenanigans made it easy to accept his application.”
Two years into his Ph.D. program, Cui and his team scanned the Internet in search of the most vulnerable devices: those that still use their default passwords. Their targets included webcams, routers, printers, and connected telephones. In total, the scan identified 540,000 publicly accessible devices that were still configured with their factory default passwords. If you have a wireless router at home that you plugged in and never configured, then one of those was probably yours. But that wasn’t all they found.
“This is not just, you know, my mom’s Linksys router,” Cui told me. “We found video conference units in district attorneys’ offices in various states, definitely sensitive offices. And when you have an embedded device like a video conference unit, you get eyes and ears. And this is just what’s perceivable from the public Internet. If the scan were to look at organizations from the inside, the number would be much higher.”
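The core logic of such a scan is simple to sketch. The snippet below is a minimal illustration only — the device list, credential table, and `try_login` stub are all hypothetical stand-ins, not Cui’s actual tooling, and a real scan would speak each device’s own protocol (HTTP, telnet, SNMP). It shows the essential idea: try each device’s factory defaults and flag the ones that still accept them.

```python
# Hypothetical sketch of a default-credential check, not Cui's actual scanner.

# Factory defaults keyed by device model (illustrative values only).
DEFAULT_CREDENTIALS = {
    "home-router": ("admin", "admin"),
    "network-printer": ("admin", ""),
    "ip-phone": ("cisco", "cisco"),
}

def find_unconfigured(devices, try_login):
    """Return the addresses of devices that still accept factory credentials.

    `devices` is a list of (address, model) pairs; `try_login` is a
    callable that attempts a login and returns True on success.
    """
    vulnerable = []
    for address, model in devices:
        creds = DEFAULT_CREDENTIALS.get(model)
        if creds and try_login(address, *creds):
            vulnerable.append(address)
    return vulnerable

# Simulated network: one router was plugged in and never reconfigured.
def fake_try_login(address, user, password):
    still_default = {"10.0.0.1": ("admin", "admin")}
    return still_default.get(address) == (user, password)

devices = [("10.0.0.1", "home-router"), ("10.0.0.2", "ip-phone")]
print(find_unconfigured(devices, fake_try_login))  # → ['10.0.0.1']
```

Scaled across the public Internet, the same loop is what turns up half a million forgotten devices.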
It was after the scan, in 2010, that Cui began to hack the devices themselves. He started with printers, and devised a way to take control of an HP printer by sending it a malware-embedded document. With this exploit, an attacker could email the malicious document (a resume, say) to the target; once the document was printed, the printer was owned: it would then send the attacker a copy of every document it printed.
“This one just cost maybe like $2,000 and some duct tape,” Cui said. “You no longer have to be a nation-state to engage in the nation-state level of sophisticated hacks on the embedded system side. And, again, there’s no defense, and no detection mechanism for it.”
Not long afterward, a computer engineer named Dillon Beresford wanted to see what it would take to find vulnerabilities in some of the most important computers there are—the sort of machines that control a power grid.
Beresford chose the Siemens S7 PLC, or programmable logic controller. The now-famous computer worm known as Stuxnet had recently infected the same Siemens model in a nuclear facility in Iran, causing the centrifuges the PLC controlled to spin out of control and tear themselves apart. Stuxnet, the New York Times later reported, was developed in a joint operation between the United States and Israel.
Because the Siemens S7 is widely used in critical infrastructure facilities in the United States, Beresford wanted to test how insecure the units were. If Stuxnet could infect a PLC and destroy centrifuges, similar malware could infect a PLC in a power plant and shut down a grid.
Beresford started his project with no experience hacking PLCs. Within a month, he had discovered eight vulnerabilities in the Siemens system. Perhaps the most troubling was a backdoor — a username and password combination hardcoded into the machine’s firmware, mistakenly left in place by Siemens engineers. Malware could be programmed to go through that backdoor and issue malicious commands, much like Stuxnet.
And all it took was one man with a background in engineering, working alone in his apartment in Texas.
“I wanted to try and debunk some of the myths about how it would have taken a nation-state with millions of dollars in funding and a team of talented researchers to develop an offensive weapon that could be used and deployed in the field against critical infrastructure,” he said at a press conference announcing his discovery.
Beresford, then a researcher at the security firm NSS Labs, handed his findings over to Siemens and gave the company time to resolve the issues before presenting the vulnerabilities at a conference.
But then Siemens released this statement: “While NSS Labs has demonstrated a high level of professional integrity by providing Siemens access to its data, these vulnerabilities were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”
Beresford fired back by email.
“There were no ‘special laboratory conditions’ with ‘unlimited access to the protocols’. My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory. I purchased the controllers with money my company so graciously provided me with.”
In the wake of Beresford’s disclosure, security experts criticized Siemens for leaving so many security holes open in such a crucial system. Beresford didn’t have to disclose his findings for the vulnerabilities to exist; there is no telling whether they’d previously been discovered by less scrupulous individuals. Before Beresford discovered these issues, and until Siemens released patches to fix them, every facility that used these units was highly vulnerable to cyberattacks.
Such a response is not unusual. Cui disclosed information about his phone hack to Cisco in October 2012, and Cisco developed and released a software patch four weeks later. But by the time the conference took place, Cui and Costello had found that the patch didn’t actually work. Worse still, the update amounted to nothing more than four new lines of code.
“Between the time that we notified them and the release date, it was approximately one line of code per week,” Costello told the audience.
Experts often criticize vendors for the holes they fail to catch before releasing products. And when vendors learn of new vulnerabilities, Cui and many of his colleagues say they are too slow in their response and sometimes inept in their approach to solutions.
But most companies that build software and devices perpetually have long lists of bugs and security holes to fix. They fix one, and someone finds another. And they can’t be proactive, because they can’t predict every way in which a clever hacker may infiltrate their systems. They can only react to threats as they occur.
Computer security expert Bruce Schneier believes this is simply something we have to live with. “If there are 10 million vulnerabilities in Windows,” Schneier told me over the phone, “you can go find and patch a hundred, but who cares. You’re better off waiting for the bad guys to find their hundred and patch those. Because you have no choice. It’s like terrorism. It’s idiotic to believe that if you put a guard in front of the Sears Tower, you’ve made a difference. Because there are millions of office buildings.”
In other words, companies have to wait for attacks on their systems to occur before they can defend against them. Consider a company that manufactures machines that are used to control processes in critical infrastructure. The company implements security measures into the machine’s software before sending it out into the world. But once the machine is out there, a hacker finds a way around those measures. How long should it take for the company to release a patch? What if the vulnerability is in a machine that controls a power grid, like the one Dillon Beresford found?
Ang Cui thinks he may have the answer to that, too.
EARLIER THIS YEAR, Cui sent an email out to a small group of reporters, explaining how his team had developed a new piece of software that could protect phones from hackers. He calls it Symbiote.
A few days later, Cui and Stolfo hosted a demonstration of their defense mechanism. In their lab, they had the phone set up next to a laptop. Cui asked everyone to gather around. “Before I show the demo, let me just show you guys how we’re doing all this stuff,” he said.
Instead of relying on companies to fix security holes — an impossible task — Symbiote lives inside a device and tries to understand how it works. For example, when Symbiote is injected into a Cisco phone, it analyzes each of the phone’s typical functions and looks for patterns.
When the phone’s receiver is placed on the hook, the chip that controls the receiver’s microphone is switched off. When the phone sends data to the outside world, it’s sending calls and voice messages; it is not broadcasting a never-ending stream of audio.
Like a patrol officer who recognizes the patterns of movement in a neighborhood, Symbiote learns the patterns of the phone’s functions. And if it detects something it doesn’t recognize, it sends out an alert that lets the administrator shut down the system immediately until a fix can be found. If it’s a phone, it stops it from being turned into a listening device. If it’s a printer, it stops it from relaying documents. And if the machine was in a critical infrastructure facility, the alert could even save lives.
Symbiote, Cui explained, is tiny. It takes up fewer than 200 bytes in the phone’s firmware. It lives on the phone, sending out a single datum over and over again, indicating that all is well. That stream is monitored by another device, like a laptop. If Symbiote detects a process it isn’t familiar with, the repeating datum changes, and the laptop notices.
And, crucially, because Symbiote learns about the machine it’s injected into from the inside out — it isn’t programmed to understand a specific device’s vulnerabilities, just to examine its host and see what behavior is unusual — it could potentially protect anything.
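The heartbeat idea can be sketched in a few lines. The toy monitor below is my own illustration under stated assumptions — the beacon value and function names are invented, not Red Balloon’s implementation. It watches a stream of status beacons from the protected device and raises an alarm the instant the expected value changes.

```python
# Toy model of a Symbiote-style heartbeat, not Red Balloon's implementation.
# The protected device emits a beacon over and over; a monitor on another
# machine (a laptop, in Cui's demo) watches for any deviation.

EXPECTED_BEACON = "OK"  # illustrative; the real datum would encode integrity checks

def monitor(beacons):
    """Return the index of the first anomalous beacon, or None if all is well."""
    for i, beacon in enumerate(beacons):
        if beacon != EXPECTED_BEACON:
            return i  # alert: unrecognized behavior detected on the device
    return None

# A healthy stream, then a change the moment unfamiliar code starts running.
stream = ["OK", "OK", "OK", "TAMPER", "OK"]
print(monitor(stream))  # → 3
```

The design choice worth noting is that the monitor knows nothing about specific exploits; it only knows what “normal” looks like, which is why the same approach could, in principle, protect a phone, a printer, or a PLC.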
But there are still questions. After all, if the NSA is spying on all of us, does this even matter?
If we use services like Google or Facebook, we have no control over whether they share our information with the government. But if the companies that manufacture our devices decide to install software like Symbiote onto them before distribution, snoopers would be cut off from an entire avenue of surveillance.
Right now, there’s a smartphone in your pocket that has a built-in microphone. It probably has a camera as well. It probably has GPS. If you look up at your computer screen, you may very well be looking into a built-in camera. All of it can be hacked. And as of now, no device manufacturer has implemented a host-based defense like Symbiote to ensure that it isn’t.
And then there’s another fundamental question. Does it work?
There is a lot riding on the answer. Stolfo and Cui have been working closely together, and are starting a company, Red Balloon Security. Their plan is to sell Symbiote to device manufacturers — and that could mean big money if companies like Cisco believe exploits like the one Cui discovered are a serious threat. But can it do what they say?
IN HIS OFFICE, with a circle of reporters gathered around, Cui connects the th1ngp3wn3r to the phone. He sends his exploit code to it. The small crowd tightens.
He counts down: “Three. Two. One.”
For a moment, nothing seems to be happening. Then a red light on the Cisco phone starts blinking. Symbiote has detected a change.
A cell phone on the table starts ringing. Cui answers it on speaker.
“Hello, neighbor!” the phone says. “My IP phone has been pwned.”
He looks around, smiles, and hangs up.