Is ‘big’ data just ‘opinions’ embedded in code?

Lisa Loudon
Jan 30
“The web, as I envisaged it … we have not seen yet.” (Tim Berners-Lee)

Supporting education by any means is vital, but I believe it is legitimate to wonder whether the power behind AI only benefits businesses such as the so-called “frightful five”, and other individuals who are already economically powerful, to the detriment of the rest of us and the environment. In her book Who Can You Trust?, Rachel Botsman writes about the focus of tech shifting from “technology doing” to “technology decoding”, so what, exactly, is being decoded?

Decoding our data (numbers or text, video or audio) will involve anything and everything you can imagine that companies hold about us, based on information we have freely offered up: from Facebook likes, through our text exchanges with chatbots and our interactions with internet-connected (IoT) personal assistants, to personal details gathered by service organisations and sold onwards. Anything, in reality, that might make businesses money, either now or eventually, whether by selling “stuff” to us or by selling on “opinions” about us to be used elsewhere. It’s a long way from managed data accessibility for all, the initial premise behind the Internet.

Cathy O’Neil, in an excellent TED talk, questions our “blind faith” in the organisations that are making money from harvesting the data held about us. She asks which algorithms are being used to define “success” in our society. If these algorithms are only “opinions” about us, based on our data, embedded into future apps or programs, shouldn’t there be a basic integrity check? How often are our own opinions “objective, true or scientific”? I know mine aren’t, with my completely human, but frequently wrong, inbuilt biases and preconceptions. Bias, and how it operates, has opened up a new research area into how algorithms reach certain decisions, as illustrated by the work of AI researcher Sandra Wachter. Organisations such as Amazon have struggled to build fairness and equity into AI, and it seems ironic that an attempt to remove the human element, and possibly even bias, from recruitment actually ended up confirming bias.

Rumman Chowdhury, the global lead of Accenture’s Responsible AI initiative, feels that the industry should move beyond “virtue signalling to real action”. She said recently, “As for the ethics and AI field, I’d like to see us digging into the difficult questions AI will raise, the ones that have no clear answer.” She questions whether we have the checks and balances right between enabling security monitoring and state surveillance that reinforces biases and discrimination.

I am writing this piece from the privileged position of convenient access to the internet. Nearly half the world’s population will only come online over the next decade, which means roughly half of the world’s opinions currently feed into the formulation of future tech; yet by the end of 2019 there will be more than 3 billion users, so surely we have to ensure, in Cathy O’Neil’s words, that we find out “for whom this algorithm will fail”.

Tim Berners-Lee, the inventor of the World Wide Web, has created Solid, an open-source platform built to decentralise the web and give you careful control over your own personal data. By creating personal online data stores you will be able to move your data from app to app, and a new type of digital assistant will work for you, not for a tech company. An uphill struggle, perhaps?

Until then, what about the data held on you that you have freely given up through social media? What does your profile say about you? Apply Magic Sauce is a not-for-profit research project by the University of Cambridge Psychometrics Centre that offers a way to understand how your data may be used, and how you are likely to be targeted by advertisers based on your social media content. When I applied Magic Sauce to my Twitter feed, it concluded I am aged 37 and male! Hmmm. If an algorithm gets this so wrong, and we don’t know why or how it reached a particular decision, I think we have good reason to be concerned.
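To make that concrete, here is a deliberately crude, hypothetical sketch in Python. It bears no relation to how Apply Magic Sauce or any real psychometric model actually works; the keyword lists and weights are entirely made up. It simply shows how a handful of choices made by a developer (opinions, in other words) become a confident-sounding “prediction” the moment they are wrapped in code.

```python
# Toy, purely illustrative "profiler". NOT how Apply Magic Sauce or any real
# psychometric model works: the keywords and weights below are invented here
# to show how a developer's opinions end up embedded in code.

# Hypothetical keyword weights chosen by a developer, i.e. opinions.
MALE_WEIGHTED_WORDS = {"football": 2, "code": 1, "beer": 1}
FEMALE_WEIGHTED_WORDS = {"fashion": 2, "baking": 1, "yoga": 1}


def guess_gender(tweets):
    """Return a confident-sounding label from a crude keyword tally."""
    male_score = sum(
        weight
        for tweet in tweets
        for word, weight in MALE_WEIGHTED_WORDS.items()
        if word in tweet.lower()
    )
    female_score = sum(
        weight
        for tweet in tweets
        for word, weight in FEMALE_WEIGHTED_WORDS.items()
        if word in tweet.lower()
    )
    return "male" if male_score >= female_score else "female"


# A feed that happens to mention code and football is labelled "male",
# regardless of who actually wrote it.
sample_feed = ["Shipped some code today!", "Great football match last night."]
print(guess_gender(sample_feed))  # -> "male"
```

Real systems are vastly more sophisticated, but the principle is the same: somewhere in the pipeline, human choices about what counts as a signal are baked in, and the output is presented back to us as if it were objective.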

If you’d like to find out how AI can create hope, shaping our possible futures rather than our probable futures, there are excellent free AI courses available online aimed at all of us: the original one developed in Finland (available in English), or a recent Dutch-language version. Of course we all have busy lives, but this is definitely one to add to the list!
