Google’s Moral Obligation?

Google has continually resisted being labelled a media company. That resistance is a little misleading, because it is one, at least in the ways that matter.

It has been taken to task many times, made some changes and accepted some responsibility. But still, not a media company.

It seems strange that producing something, such as a search results page, could be considered a moral act. Or an immoral one. But what is produced is good or bad for someone. How long can Google resist the charge of responsibility for that?

A box of code is amoral, you could say. Just a machine: you feed it, it delivers.

Further changes mean that the experience of “search” becomes less abstract for us. Conversational search and personalisation remove parts of the interface. The knowledge flow is more organic, predictive even.

We risk not noticing this knowledge flow at all. A great victory, surely. But unthinking consumption is a risky proposition: bubbles form and our horizons contract.

Does this open us up to a risk of a partisan Google? Their defence here: “we give you content we think you’ll like”. So it’s all our own fault then.

What does a “moral” Google look like? If it is “moral”, whose morals?

‘Big Data’, beyond some well-known failures, has its uses. Identifying behaviours and trends and teaching a machine about them can be very powerful.

Imagine the traits of someone with depression: would there be latent evidence of them in their search history?

Could the “bigness” of data identify it? Could Google be taught to find it?
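
To make the “teaching a machine” idea concrete, here is a minimal, entirely hypothetical sketch in Python using scikit-learn: a toy classifier scoring search queries against a handful of invented examples. The queries, labels and thresholds are all made up for illustration, and none of this reflects how Google actually works; it only shows that the basic mechanics are not exotic.

```python
# Hypothetical illustration only: a toy classifier that scores search queries
# for signals associated with low mood. The queries and labels are invented;
# a real system would need clinical input, far more data and ethical review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = query associated with low mood, 0 = not.
queries = [
    "can't sleep again 3am",
    "why do i feel empty all the time",
    "best price baby formula",
    "cheap flights to barcelona",
    "how to stop feeling like this",
    "weaning schedule 6 months",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF features feeding a logistic regression: the simplest possible
# "teach a machine about a behaviour" setup.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(queries, labels)

# Score a new search. The output is a probability, not a diagnosis.
new_query = ["awake at 3am can't stop crying"]
print(model.predict_proba(new_query)[0][1])
```

Scale that idea up to billions of queries tied to signed-in accounts and the question stops being whether the signal could be found, and becomes what should be done with it.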

Google seems to be wary of this already.

What information does Google provide? In the UK, it points the searcher to the NHS (National Health Service).

This and other searches are a product of conventional wisdom. I don’t disagree with the information given, but whose truth is it?

How is this information then used? It’s hardly a secret that Google harvests everything it can from everyone.

When my daughter was born, evidence of 3am Googling and Mumsnet forums betrayed my most recent life event. I knew this information would have been valuable for an advertiser.

Morally speaking, Aptamil’s uncanny ability to sell me baby formula milk doesn’t keep me up at night. But do you ‘sell’ to someone who has depression, for example?

This is a moral question, and one which is, in a sense, already catered for: the answer so far is that you don’t.

But how moral is merely not exploiting someone, when you hold the information to do more?

If the next search betrays a darkening situation, which search results are returned? Does Google remain some compliant-yet-mute facilitator?

Or does it steer the searcher away from that particular disaster? Rather than educating a searcher on the best way to overdose, does it instead lead them to help?

Who (or what) is Google to make that decision? Google ‘knows’ a lot, more than we could ever know.

But will it ever know enough to save a person’s life? Would saving a person’s life leave them better off in the long term?

There’s a faint whiff of Asimov. If rule one is that the machine must do humans no harm, rule two posits that it cannot obey its master’s commands where obeying would lead to that harm.

So could or should Google refuse the searcher when the time comes?

Against this backdrop, the decision to be amoral is the easier one. It is Google wanting to wash its hands of whatever damage could come from search.

Yet when the time is right, it’ll clean up where possible.

That is having your cake and eating it too. It’s a position unlikely to endure.

If Google can’t adopt morals, how long can it continue to ignore them?
