Play nicely: a newbie’s peek into the compatibility of AI and ethics

Alice Whale · Simpleweb · Aug 2, 2018

A few months ago, with everyone in the throes of the GDPR panic, it struck me just how little most people know about what really happens to their personal data. Or, more surprisingly, how little businesses know about the potential of this data and how their use of it can affect their customers.

In the wake of the Facebook scandal there was public outrage about the ways in which the company and its partners use people's data. Yet the same people slunk back to the network once the heat had died down. For them, it was still an essential link to their families and friends; it is a social network, after all.

GDPR is forcing companies to be more transparent and clear in their data policies but, really, most people still don't 'get' it. They aren't empowered to truly weigh the value they receive against the personal data they hand over in exchange.

To me this seems unfair, unethical even. You can have a data policy written as clear as day, but no one is going to read it. So people still won't really know when their data is being sold to companies they hate, or used to target them with unlimited advertising.

A balancing act

So what if there was a way to automatically score companies based on their data policies? That way, users could weigh the value of the service they wanted to use against how ethically the company was treating their data, at a glance.

It turns out there's already a really cool tool that does this, called 'Terms of Service; Didn't Read' (ToSDR). It crowdsources analysis of services' terms and conditions, breaking them down into smaller points that the site's contributors rate as 'good' or 'bad'. The result is an automatically calculated grade based on the overall ratings, from A (the best) to E (terms raising serious concerns).
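To make the mechanics concrete, here's a minimal sketch in Python of how that kind of grading could work. It's purely illustrative: the scoring and cut-offs are my own invention, not ToSDR's actual algorithm.

# Illustrative sketch of ToSDR-style grading. The scoring and thresholds
# here are invented for demonstration; ToSDR defines its own.

def grade_service(points):
    """Map a list of crowd-rated points ('good'/'bad') to a grade A-E."""
    if not points:
        return None  # no contributions yet, so no grade at all
    score = sum(1 if p == "good" else -1 for p in points) / len(points)
    if score > 0.6:
        return "A"  # the best
    if score > 0.2:
        return "B"
    if score > -0.2:
        return "C"
    if score > -0.6:
        return "D"
    return "E"  # terms raising serious concerns

print(grade_service(["good", "good", "bad"]))  # prints: B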

I loved that this existed and that other people cared enough to spend their time contributing to open source technology along the lines of what I hoped to investigate. But ToSDR's limitation is its reliance on gathering enough data from contributors to produce a reliable grade, leaving many sites with no grade at all. So what I really wanted to know was whether we could 'take out the middleman' and build a tool that used artificial intelligence to grade any site, service or policy at the click of a button.

With explicit rules about what must be included in policies now enforced by the EU, surely it would be simple to use this technology to extract clauses and automate the decision on how ethically a company was treating its customers' data? Or so I thought. (Hint: I'm not a developer.)
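For a flavour of what I naively had in mind, here's a toy Python sketch that checks a policy against a partial, paraphrased checklist of the kinds of disclosures the GDPR requires. The topics and keywords are invented for illustration; real clause extraction would need far more than keyword matching, which is exactly the catch I was about to run into.

# Naive sketch of automated policy checking. Topics loosely paraphrase a
# few GDPR transparency requirements; keywords are invented placeholders.

REQUIRED_TOPICS = {
    "who controls your data": ["data controller", "we are responsible"],
    "why it is collected": ["purpose", "we use your data to"],
    "how long it is kept": ["retain", "retention", "how long"],
    "your rights": ["right to access", "right to erasure", "withdraw consent"],
}

def audit_policy(policy_text):
    """Return which required topics the policy appears to cover."""
    lowered = policy_text.lower()
    return {
        topic: any(keyword in lowered for keyword in keywords)
        for topic, keywords in REQUIRED_TOPICS.items()
    }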

Robot lawyers

A few weeks later, I met with James Touzel, a Partner and Head of Digital at law firm TLT, who is heading up a project that uses AI to identify risk areas in legal contracts — a web-based solution called TLT LegalSifter. I was keen to find out if and how it was being done, and whether it could be applied to help people decipher data policies.

What was surprising was that this technology is still brand new and businesses haven't adopted it en masse for contract negotiations or any other uses just yet. When you hear about AI you're told 'the robots are coming, they're taking us over'. Of course, they're not. Not yet, anyway…

“We wanted to develop an AI solution that could review and advise on low-risk, low-value commercial contracts initially — nothing too complex — so things like NDAs or SaaS contracts or a consultancy agreement where it’s normally lower risk and lower value”, James explained.

What they could get the AI to do was identify a clause, or series of clauses, within a contract and serve up pre-written legal advice against it, such as the correct wording for a particular type of clause. A very clever and useful tool indeed, one that will almost certainly increase the speed and quality of contract negotiations and enable more junior in-house lawyers, or even procurement and commercial teams, to manage contract reviews.
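The pattern James describes, recognise a clause type and attach pre-written advice, is easy to caricature in code. In this toy Python sketch, simple keyword matching stands in for LegalSifter's actual trained models, and the clause types and advice text are invented:

# Toy version of "identify a clause, serve up pre-written advice".
# Keyword rules stand in for a trained classifier; clause types and
# advice text are invented for illustration.

CANNED_ADVICE = {
    "limitation of liability": "Check the liability cap against the deal value.",
    "confidentiality": "Make sure the confidentiality obligations are mutual.",
}

KEYWORDS = {
    "limitation of liability": ["liability", "liable for"],
    "confidentiality": ["confidential", "non-disclosure"],
}

def review(contract_text):
    """Return (clause_type, advice) pairs for anything recognised."""
    lowered = contract_text.lower()
    return [
        (clause_type, CANNED_ADVICE[clause_type])
        for clause_type, words in KEYWORDS.items()
        if any(word in lowered for word in words)
    ]

# Nothing here understands what the clause actually says; a human still
# has to check whether the advice applies.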

The only current limitation is that the AI can't understand what the clause actually says. So for personalised advice, which would depend on the AI recognising and understanding intricate differences between clauses or statements, we're not quite there yet.

“We’ve gone to market with a product which will identify the risk areas in certain types of contracts and serve up advice, but it still relies on the user to say ‘oh, that’s not what that says, I’m replacing it with that’”, James said.

“It puts the advice in one place, gives you an alternative clause… but it doesn’t do the last mile.”

Nuanced judgement

For a tool that rates how ethical a policy is, this means I have to decide up front what counts as ethical and what doesn't.

What most AI can do, at least on its own, is very black and white. If I tell it that any clause in a data policy saying a service will sell a user's data is 'bad', that clause will always be scored negatively, even if, in that particular case, for whatever reason, it's actually not bad at all.
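In code, that rigidity is easy to see. This hypothetical Python rule marks every data-sale clause as bad, with no room for context:

# Why rule-based scoring is black and white (a hypothetical rule).

def score_clause(clause):
    """Return -1 for a 'bad' clause, +1 for a 'good' one, 0 otherwise."""
    text = clause.lower()
    if "sell" in text and "data" in text:
        return -1  # always negative, whatever the context
    if "never share your data" in text:
        return 1
    return 0

# Both print -1, though a human might judge them very differently:
print(score_clause("We may sell anonymised, aggregated data to researchers."))
print(score_clause("We sell your personal data to any willing advertiser."))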

That doesn't make the tool impossible to build. But for it to be genuinely useful to the majority of the population, as many humans as possible would need to decide what they do and don't want to see in a data policy. And the real challenge would be making this rewarding enough to attract the level of human contribution needed to produce reliable data to work with.

Right now, AI isn't up to the job; we're not at the point where it can make complex decisions about ethics, at least not on its own. I've hit a wall in my beginner-level exploration of AI. But I have discovered that plenty of other people care about this issue, so I'm pretty sure it's not the end.

If you’d like to chat about your tech project, get in touch with Simpleweb today.
