Should a cynical and irresponsible Silicon Valley be allowed to remain in charge of virtually all of humankind’s data?

Sami Kallinen
3 min read · Apr 17, 2016

Originally published in Swedish in the Finnish daily Hufvudstadsbladet

Automation and robotisation are becoming more prominent in the media, although the uninitiated might be forgiven for still being in the dark about them. Machine learning is happening right now: artificial intelligence, applied to massive amounts of data, is being harnessed to automate more and more of the things that, until now, we have had to do manually.

Right now most of this data is centralised in the hands of a few key players in Silicon Valley. And this centralisation of data, the fuel of machine learning, has also attracted an astounding number of the world's leading machine learning researchers (the industry's true superstars), who in recent years have been recruited away from universities by companies like Facebook, Google and Microsoft.

As a result, these companies now drive the research and development in this field, and that is potentially dangerous. The concentration of technological power is enormous: how do we know we can trust these companies?

In his recent book, "Disrupted: My Misadventure in the Start-Up Bubble", the tech journalist and writer Dan Lyons tells the story of his experiences as an employee of the Boston-based start-up HubSpot. The picture he paints of Silicon Valley is frightening: shallow, adolescent and at the same time extremely cynical and irresponsible.

One can, of course, question Lyons' depiction. He might, and indeed clearly does, have an axe to grind. But if the picture he paints is even partly accurate, it is reasonable to worry about whether such a culture can be trusted with the data of all humankind.

Of course, we can always decide to refrain from sharing our data with these organisations, but that is not a particularly satisfactory answer. Firstly, it is hardly practical, or even possible, these days: not using these services amounts to not using the web, social media, email and so on. Secondly, consider the opportunities we would lose. To withhold our data would be to throw away our ability to use and develop systems that could genuinely make our lives, and indeed our communities, better. Imagine, for example, more precise diagnoses at the doctor's office, or more consistent judicial decisions, produced by machine learning combined with all this data. These things, and much more, could be the outcome of the kind of research going on in Silicon Valley right now.

We need this kind of artificial intelligence for all of our futures, but we also require some kind of guarantee that the ownership and control of our data, and with it our privacy, will not be compromised.

Researchers are working on ways for services to apply machine learning to your data without "seeing" it. Homomorphic encryption, for example, allows calculations to be performed directly on encrypted data: only those who hold the key can see the data itself, or the results of the computations done on it. This is one possible model among many that could protect us from unscrupulous companies, but it will take a while before solutions of this kind are ready for everyday use.
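To make the idea concrete, here is a minimal sketch in Python of a Paillier-style scheme, a simpler relative of the fully homomorphic systems researchers are aiming for: it supports only addition and scaling of hidden values, and the tiny key it uses is wholly insecure. It is an illustration of the principle, not an implementation anyone should use: a "server" computes a weighted sum, the core of a linear model, on numbers it can only see in encrypted form.

```python
# Minimal, deliberately insecure Paillier-style sketch (illustration only).
# Real deployments use keys of ~2048 bits; these tiny primes just keep the
# arithmetic easy to follow.
import math
import random

# --- Key generation (public: n, g; private: lam, mu) ---
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1                                          # standard generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                               # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m with the public key: c = g^m * r^n mod n^2."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                     # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Decrypt with the private key: m = L(c^lam mod n^2) * mu mod n."""
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

# The client encrypts its private values; the server sees only ciphertexts.
a, b = 17, 25
ca, cb = encrypt(a), encrypt(b)

# Additive homomorphism: multiplying ciphertexts adds the hidden plaintexts,
# and raising a ciphertext to a power scales its plaintext. The server can
# therefore evaluate a weighted sum without ever decrypting anything:
c_score = (pow(ca, 3, n_sq) * pow(cb, 2, n_sq)) % n_sq   # hides 3*a + 2*b

assert decrypt(c_score) == 3 * a + 2 * b   # only the key holder sees 101
print(decrypt(c_score))
```

The point is the shape of the arrangement: the service performs useful computation, yet at no stage can it read the values it is computing on; only the holder of the private key can.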

While we wait, we have to be extra vigilant about the ethics of those to whom we entrust our data.
