When did ‘Do Good’ become ‘Do No Harm’?

On privacy, ethics and data.

qiwen.
capgemini-sea
8 min read · Sep 9, 2019


Long before I entered the world of technology consulting, my foray into the strange world of IT started off with creating the simplest of webpages in Adobe Dreamweaver, spending months of my Primary School days saving up to buy a FIFA game — only to realise it was incompatible with my Windows 95 PC — and reading Isaac Asimov’s I, Robot. If the mention of the last item immediately triggers thoughts of Will Smith and the movie adaptation — no, no and no.

Next year marks 70 years since the book was first published with the famous Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

The ambiguity of the above laws was a fascinating notion to me back then (planting seeds for the future geek in me) — and it still intrigues me to this day. Thankfully, I have good company in this regard, with Elon Musk’s adoration for Asimov’s Foundation, Osamu Tezuka’s Ten Principles of Robot Law and, to a more distant extent, Google’s (now replaced) motto of ‘Don’t Be Evil’.

Barely a few years ago, the words ‘Facebook’, ‘Google’, ‘Data’, ‘Social Media’ and the like carried strong positive connotations. Those were the days when technology was lauded for giving a personalised face to each touchpoint / channel that users interacted with. Generic pictorial banners on search pages were likened to the dark ages (remember how Yahoo / AltaVista’s search results looked? If you’re going “Alta-what?”, exactly my point), and the emergence of social media meant a new business model with new ways of engagement. We fawned over the ability of Gmail to automatically segregate our mail into categories and Twitter’s ability to fulfil our occasional need to cry out for attention.

The same words aren’t uttered with the same kind of reverence anymore. The topic of data privacy has been particularly pervasive over the past two years — especially with stories on Cambridge Analytica, Polar, Marriott, Google+, LinkedIn, and closer to home, SingHealth and the recent Singapore Red Cross breach. All of a sudden, what used to be shining examples of personalisation and innovation have become nightmare scenarios: Android users finding out that Google held location data on where they had been (location sharing, duh!), or Alexa users realising that it wasn’t just Alexa listening to them — but possibly thousands of Amazon employees. The only scant consolation I draw from such bandwagon fear-mongering is the bemusement that most of the ‘freaking out’ over these privacy stories happened on social media platforms — oh, the irony.

But here’s the thing: data privacy isn’t just a technology problem. Of course, many IT breaches are caused by poor IT administration and system implementation lapses — and those will only get better as cybersecurity advances. I’ve read my fair share of misguided responses to these problems, and they typically fall into one of the two views below:

  • “It’s the users, stupid.” — it must be the users’ fault for being apathetic / ignorant / mindless / unable to understand.
  • “It’s the technology, stupid.” — data collection is an evil, and therefore it must end.

Invisible Agents of Trust

When you wait at the traffic junction and the ‘green man’ comes on, you will take a quick glance at the traffic, and then cross the road. But how can you be assured that the drivers — complete strangers to you — will stop at the red light? How can you be assured that the vehicles’ brakes are going to work? How are you even sure that the traffic lights two streets down will work in the same way?

The thing is: the entire system has been designed in such a way that you trust the agents in that ecosystem. You trust that the drivers, even though they are complete strangers to you, will stop their vehicles at the red light. You trust the legislation set in place to ensure that drivers adhere to it. You trust that the vehicles’ tyres and brakes are likely in good enough health to halt in time. You trust the ubiquity of these behaviours to apply wherever you go in the city. In the same vein, when the ‘red man’ is present, the drivers continue on their way, trusting that you will not cross the road in that period of time. Why? We all believe in the invisible agents of trust — the drivers you’ve likely never met, the legislation you didn’t set, the vehicles you didn’t personally build and the places you’ve never been to. We simply trust the interconnectedness of a system that is built to ensure that this particular moment of truth (crossing the road) is enabled by those invisible agents of trust. Sure, there will be occasions where the norm falls apart — a reckless driver, a jaywalking pedestrian — but we don’t simply abandon an entire ecosystem of agents for these exceptions.

So it’s not the users or the technology, ‘stupid’. It’s about building an application / touchpoint / ecosystem with strong invisible agents of trust, such that users have faith in the simplicity, intent and effectiveness of your data policies, and are therefore willing to share their data without fear of the system collapsing on them.

And speaking of sharing data…

Selling Your Souls

We keep seeing the words ‘data privacy’ in the news, but it’s the topic of data portability that interests me more. One can always argue the semantics of the two, but the phrase ‘data privacy’ has been used so often and for such a variety of topics that it has become as meaningful as the words ‘technology’ and ‘big data’ — words/phrases that say a lot and nothing at all at the same time. A perhaps simplistic view, but issues of data security in the form of software bugs and loopholes (such as the ones at Capital One, Zoom and Marriott) are localised problems with localised solutions, something that can and will be corrected by a combination of technological improvements and market forces (users will demand more secure software and infrastructure).

In recent times, I have grown more obsessed with the topic of data portability than with the umbrella of data privacy.

So what exactly is data portability? As an individual, data portability gives me the ability to request, obtain and reuse my own data with another service provider, where previously my information would have resided only with my existing service provider, resulting in high barriers to information exchange, both technically and legally. In fact, the Personal Data Protection Commission in Singapore published a discussion paper earlier this year to highlight the merits and implications of the subject, which I found to be a good read for a better appreciation of the topic: https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/Data-Portability/PDPC-CCCS-Data-Portability-Discussion-Paper---250219.pdf.
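
To make this concrete, below is a minimal sketch in Python of what a portability export might look like. The record types and field names are entirely hypothetical (none of them come from the PDPC paper); the point is simply that my data leaves my current provider in a structured, machine-readable form that a new provider can ingest.

    import json
    from dataclasses import dataclass, asdict
    from typing import List

    # Hypothetical record types, stand-ins for whatever a real provider stores.
    @dataclass
    class Profile:
        user_id: str
        name: str
        email: str

    @dataclass
    class UsageRecord:
        month: str
        data_used_gb: float

    def export_portable_data(profile: Profile, usage: List[UsageRecord]) -> str:
        """Bundle a user's data into a machine-readable package that a
        receiving provider could ingest after a portability request."""
        package = {
            "schema_version": "1.0",  # an agreed-upon schema is what makes data portable
            "profile": asdict(profile),
            "usage_history": [asdict(u) for u in usage],
        }
        return json.dumps(package, indent=2)

    # A departing customer requests their data to take to a new provider.
    me = Profile(user_id="u-1001", name="Jane Tan", email="jane@example.com")
    history = [UsageRecord("2019-07", 12.4), UsageRecord("2019-08", 9.8)]
    print(export_portable_data(me, history))

The “schema_version” field hints at the real prerequisite: portability only works if providers agree on a common format, which ties into the interoperability benefits discussed below.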

Conceptually, the elimination of these barriers to sharing data has the potential to benefit multiple parties. First, the interoperability of data exchange makes for easy onboarding of users onto a new service or platform — think Singapore’s number portability scheme. Gone are the days when opting for a new telco meant having to change one’s mobile number — one can now retain his/her existing number regardless of telco network choice. Second, companies have to obtain explicit consent from their users prior to processing their data — giving users more awareness of how companies are managing it. More importantly, as the PDPC paper suggests, this initiative paves the way for new innovative business models to emerge (think Internet of Things).

At face value, the idea of data portability is a good one — but many questions come to mind, the first being: what are the boundaries of personal data? Your personally identifiable information is obvious, but what about more distant, yet possibly correlated data? According to GDPR, the answer is — it depends. (I could be corrected, but my inference is that a water consumption amount as a standalone value isn’t considered personal data, but would be classified as such if it were coupled with more specific data.) And if I drew an inference from your personal data to form, say, a credit score, that inferred / derived data is not considered personal data that you can request, obtain or reuse. The interpretation of the boundaries pertaining to personal data will be an interesting question facing companies in time to come.
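
To illustrate (and this is a toy sketch of my inference above, not a reading of the law), the classification could hinge on whether a value is coupled with fields that identify a person:

    # A toy, emphatically non-legal reading of the "it depends" rule:
    # a bare measurement is not personal data on its own, but the same
    # value coupled with identifying fields arguably becomes personal data.
    IDENTIFYING_FIELDS = {"name", "email", "address", "account_id"}  # invented list

    def couples_to_a_person(record: dict) -> bool:
        """True if the record ties its values to an identifiable individual."""
        return any(field in record for field in IDENTIFYING_FIELDS)

    print(couples_to_a_person({"water_consumption_litres": 320}))      # False
    print(couples_to_a_person({"water_consumption_litres": 320,
                               "address": "Blk 123, Anytown"}))        # True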

Second, a thing you learn from a young age: the more people you tell your secret to, the less of a secret it becomes. In the same vein, data portability provides a secure way to share information with more platforms, but it doesn’t improve the security of the data itself, potentially increasing the risk of data exposure as more platforms gain access to your information.

Lastly — and perhaps the most intriguing to me — the ease with which an end user can enable data portability diminishes the perceived significance of its implications. The technical and legal complexity of obtaining consent will now be multiplied by the spectrum of services that will request and/or provide information from a particular company — which will likely result in a blanket, highly trivialised treatment of obtaining consent: checkboxes and T&Cs. And as we all know, T&Cs are possibly the world’s most useless invention.

Like how 7,500 shoppers sold their souls.

Of course, if the processing of every single nugget of data required an explicit, custom consent, we’d probably be swarmed with emails every few minutes (remember the weeks of horror when almost every email began with “GDPR”?) and overlays upon overlays on each page of a website. But the typical business instinct when facing a complex problem is to solve it with a single, all-encompassing solution — hence the appearance of yet another checkbox-and-T&C combination. Yet this blanket mitigation oversimplifies the implications of providing consent, and a user will inadvertently click ‘Proceed’ without due consideration of the potential repercussions.
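
In code terms, the difference between a blanket checkbox and meaningful consent is roughly the difference between one boolean and a record per purpose. Here’s a rough sketch, with hypothetical names throughout:

    from dataclasses import dataclass
    from typing import List

    # Sketch: consent captured per purpose, instead of one blanket
    # "I agree to the T&Cs" checkbox. All names here are hypothetical.
    @dataclass
    class ConsentRecord:
        purpose: str   # e.g. "provide_core_service", "marketing_emails"
        granted: bool

    def may_process(consents: List[ConsentRecord], purpose: str) -> bool:
        """Allow processing only if the user explicitly granted consent
        for this specific purpose; there is no blanket fallback."""
        return any(c.purpose == purpose and c.granted for c in consents)

    choices = [
        ConsentRecord("provide_core_service", True),
        ConsentRecord("marketing_emails", False),
    ]
    print(may_process(choices, "provide_core_service"))  # True
    print(may_process(choices, "marketing_emails"))      # False

Granularity like this is costlier to build and to present, which is exactly why the single checkbox keeps winning.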

Don’t just do no harm. Do good.

As with every major technological conundrum, there’s no omnipotent panacea for the existing questions on data privacy and portability — but there is a growing trend of companies taking the easiest way out, either via a blanket solution that reduces transparency (to avoid generating the wrong kind of attention) or by simply relegating the issue to the lowest-priority item on the backlog. Each time a breach or vulnerability is uncovered, that approach erodes trust in the ecosystem a little further.

It is imperative for every organisation to care about how customers’ data are collected — all the way from data strategy to the design of each form. If you don’t care, your customers will eventually ensure that you do — and by then, it will probably be too late.

About the author:

Wong Qi Wen is a massive proponent of minimalist design, a self-professed tech gadget geek and a self-appointed Director of Getting-S***-Done. Qi Wen serves as a Senior Consultant at Capgemini Southeast Asia & Hong Kong, with a key focus on digital strategy and customer experience for regional B2B and B2C organisations. His work ranges from crafting CX strategies and service design to being a lead business analyst with a special focus on content, commerce and channels.

PS: Want to get in touch? Find me on LinkedIn.
