Indifference in the world of secret algorithms

An engineer’s view on why innocuous systems can be surprisingly problematic

Sebastian Lemery
Published in I. M. H. O. · 3 min read · Jun 7, 2013


So they’re watching us, like we always expected. Check. Okay, now that we have that out of the way, let’s look at the various reactions:

“Evil empire! Let’s storm the palace!”

“Wha?… I had no idea. I’m upset enough to like a story on Facebook, but not enough to email or call lawmakers and voice my opinion in any useful way.”

“Eh, don’t care. What are they going to do with my phone records? I’m a boring, average American citizen; let them have it.”

So, in summary: the US government claims it is using a combination of data mining (gathering massive amounts of information from various telecoms) and machine learning (teaching a computer what is good so it can, on its own, find what’s bad) to locate “potential terrorists”.

On the surface, this seems like a potentially effective idea. A computer handles 99% of the data analysis, then passes a case to a human counterpart when some pre-programmed red flag is triggered.
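To make that division of labor concrete, here’s a minimal sketch in Python. The watchwords and the matching rule are my own invented stand-ins, not anything any agency has disclosed; the point is only the shape of the pipeline: the machine screens everything, and a human sees only what trips a rule.

```python
# Toy sketch of automated screening with a human in the loop.
# The watchwords and the matching rule are invented for illustration;
# any real system would be vastly more complex.

FLAGGED_TERMS = {"alpha", "bravo", "charlie"}  # hypothetical watchwords


def is_red_flag(transcript: str) -> bool:
    """Pre-programmed rule: fire if the text mentions any flagged term."""
    words = set(transcript.lower().split())
    return not words.isdisjoint(FLAGGED_TERMS)


def screen(transcripts: list[str]) -> list[str]:
    """The computer handles all of the data; only red flags reach a human."""
    return [t for t in transcripts if is_red_flag(t)]


if __name__ == "__main__":
    calls = ["ordering a pizza tonight", "alpha bravo, just kidding around"]
    for flagged in screen(calls):
        print("escalate to human reviewer:", flagged)
```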

Now, let’s say that, currently, every ‘red flag’ must be reviewed by a human. This is good: it means that someone who does something stupid, like mentioning three flagged words in a phone call, doesn’t land on a list of potential terrorist threats. The human reviewer listens to the conversation and says, “Oh, they were just cracking a joke.”

But what happens when there is too much data to effectively have a human check every red flag? Well, the first thing you would do is break the flags into risk levels: low, medium, and high. You then automate the low-level flags, letting the computer manage that list on its own, and only review the high-level flags. Problem solved!
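Here’s the same toy pipeline with that triage bolted on. Again, the risk score (a simple count of matched terms) and the tier cutoffs are invented for illustration; what matters is that the low tier never reaches a human, false positives and all.

```python
# Toy sketch of tiered triage. The risk score (a count of matched
# watchwords) and the cutoffs are invented for illustration only.

FLAGGED_TERMS = {"alpha", "bravo", "charlie"}


def risk_score(transcript: str) -> int:
    """Hypothetical score: how many flagged terms appear in the text."""
    return sum(word in FLAGGED_TERMS for word in transcript.lower().split())


def triage(transcript: str) -> str:
    """Map a score onto low/medium/high risk tiers."""
    score = risk_score(transcript)
    if score >= 3:
        return "high"    # still escalated to a human reviewer
    if score == 2:
        return "medium"
    return "low"         # handled entirely by the machine


if __name__ == "__main__":
    calls = [
        "ordering a pizza tonight",
        "alpha bravo charlie was just a joke",
    ]
    for call in calls:
        tier = triage(call)
        if tier == "high":
            print("human review:", call)
        else:
            # No human ever sees these; the computer manages this list,
            # false positives included.
            print(f"auto-filed as {tier} risk:", call)
```

The danger described below lives in that else branch: whatever lands there, rightly or wrongly, is never seen by a person.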

And here’s where it becomes problematic. What happens when someone steals your identity? Last year, I had my credit card info stolen. It was a card I used infrequently, so I didn’t notice for three months. The crooks used it sparingly at first, then eventually started buying more expensive items. I contacted the bank and we resolved the matter over the phone.

I was able to fix this mistake because the information was available to me and the bank had a phone number I could call.

This is how innocuous systems become dangerous. By hiding these systems behind closed doors and relying too heavily on unscrutinized algorithms, the government has created a system that can mark an average citizen as a threat, because it fails to account for unpredictable factors like identity theft. Now whoever is collecting my data has three months of someone else’s activity mixed into my file.

The NSA is collecting and correlating your data, and it is making broad assumptions about your activities based on that data. This isn’t, on the surface, all that dangerous. But combined with algorithmic sorting and secrecy, it is terrifying. How can you possibly correct a mistake you don’t even know about? Imagine your credit score dropping 200 points while you’re not allowed to see your credit report or file for a correction.

As the NSA and FBI’s surveillance grows, and their systems become increasingly automated, you will see more and more average citizens being added to no-fly lists, ‘suspected terrorist’ lists and watch lists.

And this is all happening because we, as a country, have decided that infrequent terrorist attacks are justification for blank checks and an unbelievable loss of liberty.
