Google’s Selfish Ledger and the Frail Human Will
Recently, The Verge obtained an internal Google video describing a concept called the Selfish Ledger. The nine-minute video is worth a watch if you haven’t seen it. It’s quite creepy.
The basic idea is that Google could use the immense amount of data the company collects to nudge users toward achieving their goals. The “ledger” of a user’s data would determine the best way to achieve a goal — say, lowering environmental impact — and prompt the user to make decisions in line with it. In the video we see suggestions to book an Uber Pool or buy local bananas. At one point, the AI decides it needs more data on the user, so it designs and 3D-prints a bathroom scale, which it offers to the user at a price it knows the user will accept. More data means the ledger has a better understanding of the impact of user behavior and can offer better suggestions.
Google theorizes that we may intentionally add more sources of data to create a richer ledger. In the same way that DNA has been sequenced to understand human biology, the video proposes that human behavioral data could be sequenced. Eventually the ledger could gain a “species-level understanding” of a complex problem like poverty. Beyond simply pursuing user-determined goals, the Selfish Ledger could take on this larger goal and modify the behavior of populations in order to accomplish it.
The Selfish Ledger video is unnerving to any reasonable person, but we need to acknowledge that the data collection behind the concept is already happening. Google, Facebook, and others gather a staggering amount of data on their users. Privacy is a concern, and companies like Apple are capitalizing on the current moment to market themselves as the privacy bulwark of Silicon Valley (I don’t doubt that Tim Cook’s concern for user privacy is authentic, but it is undoubtedly a quite marketable position, not unlike the release of a rainbow watch band during Pride Month).
Yet, GDPR aside, societies are not taking drastic measures to increase user privacy and limit the power of data-hungry tech giants. On the contrary, many of us continue to willingly offer our data. As far as I know, none of my friends deleted their Facebook accounts in the wake of the Cambridge Analytica scandal — and neither did I.
Perhaps the benefits of giving over our data outweigh privacy concerns.
It’s easy to come up with scenarios where the Selfish Ledger could be helpful. Imagine you have a meeting at Starbucks, but you forget which location you agreed on and you’re heading to the wrong side of town. Wouldn’t it be great to get a notification telling you to turn around before you arrive at the wrong location and wonder why your friend is late? What about health? When I’m on Yelp figuring out which fast food restaurants are open at midnight, my phone could remind me of the fresh peaches I bought earlier in the week.
These somewhat trivial examples illuminate the growing complexity of our relationship with data. The more data our digital tools have, the more helpful they can be.
And who wouldn’t want some help even in these trivial ways? All of us can identify in some way with these words from Paul, “I do not understand what I do. For what I want to do I do not do, but what I hate I do” (Romans 7:15). Just as we might use a hammer to do what our strength cannot, or a calculator to do what our intellect cannot, perhaps we should embrace the Selfish Ledger to do what our will cannot.
Becoming a person of virtue is difficult. Doing the right thing often requires sacrifice, or at least delayed gratification. The Selfish Ledger could speed up this process and make us all better people — as long as the goal of the ledger lines up with what you actually want to do. Then again, if the ledger has sequenced behavior across populations, maybe it knows what you “want” to do better than you do.
These are the questions of transhumanism and human enhancement technology that our generation will need to answer. The logic of transhumanism is that every bit of humanity is malleable and manipulable. But is that actually true? At what point does an enhancement technology become a denial of our reality as human creatures and a grasping for god-like power?
Silicon Valley needs more conversations about ethics because the moral quandaries will only grow. Perhaps Google and other large tech firms should employ Chief Ethics Officers to intentionally challenge their own work. Humans will be much better off if we answer these questions ahead of time rather than letting the pursuit of progress at any cost overtake us.