Finding Humanity in the Mess
2,000 words or less. Entries must follow this year’s prompt: What will it mean to be human in the age of machine learning and artificial intelligence? What will this mean to you in terms of human creativity, identity, love, communication, and community? Given that current AI-based algorithms are a big part of today’s fake news problem, your essay might also address human solutions to this pressing issue.
- Note… Despite not being eligible, exploring this year's theme wasn't a deterrent to submitting this on 12/16/16.
Cupid scored her a 70. There were numerous 80s and 90s gazing back. My first thoughts were something like, "Do I reply to her? Is a 70 worth my time and a coffee?" Then I started pondering what it meant to be the data in a science experiment. Was I willing to believe that an algorithm could optimize love? Can the most human of emotions be distilled into a mathematical formula? Paint me a skeptic. Beyond our quest for love, artificially intelligent machines are crawling through almost every crevice of our lives. Like Cupid's unseen arrow, there are equations calculating our work, our interests, our health, and public policy. The news that guides our thoughts, in addition to influencing our decisions, is smitten by these calculations too.
Being human in the age of machine learning and artificial intelligence connects to a world of mysteries and secrets. Some of what's happening we know; some of it we don't. We can muse about the Moai statues of Easter Island, the Antikythera Mechanism, Oak Island's Money Pit, or the disappearance of DB Cooper. However, there's a vast difference between unraveling mysteries like these and navigating the treacherous currents of a secret world.
Governments hide their secrets behind the mask of democracy. The Pentagon Papers, Watergate, WikiLeaks, and Edward Snowden have all pulled back a veil. The distance between the levers of power and the citizens' best interests is an ever-widening expanse. In theory, accountability is at the heart of democracy. In practice, however, accountability is much like transparency as a foundational quality of the powerful: theory and reality are rarely synonymous.
Corporate secrets occupy another universe altogether. Coca-Cola, Colonel Sanders, WD-40, and Google have an iconic status because of their secrets. Like the flaws in democracy, the marketplace is equally skewed toward favoring the few, the rich, and the most powerful. It's homo economicus dictating the script. The marketplace is full of stories that keep consumers fixated and in awe of corporate Moonshots. We're not supposed to comprehend them; we're only supposed to accept every feat of technological prowess as progress. Secrets are the currency of competitive advantage. What's valuable to the marketplace has nothing to do with keeping in step with individual values or best interests. However, it's worth considering the impact that corporate secrets can have. In this case, it can be measured by the magnitude of difference between not knowing why Coca-Cola tastes the way it does and not knowing why we each see different search results. I can live with the mystery of one, but I'm concerned about the secrets of the other.
Beyond asking for accountability from corporations and our governments, we have to hold ourselves accountable. Accountability is at the core of coming to terms with what it means to be human in the age of machine learning and artificial intelligence. Humanness, and humanity itself, will be defined by the relationship we choose to have with technology. It needs to be more than just owning and using a device.
Having a conversation about artificial intelligence and machine learning should always reflect on Norbert Wiener's work and words. As he suggested, "any useful logic must concern itself with Ideas with a fringe of vagueness and a Truth that is a matter of degree." There's no disputing how deeply enmeshed these technologies are in our lives. Just because they are no longer the stuff of science fiction is no reason to stop questioning why they exist and the influence they exert on society.
Think about two points on a line. We can label point A as good and point B as bad, and everything in between them is the mess. No matter how far we stretch that line, the mess is still there. With machine learning and artificial intelligence, we know there is the good and the bad. But we have to see how our humanity is getting tangled up in the mess. More importantly, we have to take action and start untangling it.
If someone's on a spending bender with your stolen credit card, we can probably agree the technology behind improved fraud detection is good. On the other hand, we know that machines are chewing on data like zip codes and someone's education, then spitting out inequitable credit terms or product pricing. That's either bad modeling or, even worse, predatory and malicious behaviour. If someone is socially disadvantaged, it's likely that today's artificially intelligent machines will ensure they remain that way. Machines might be blind and without human care, but that doesn't mean there's no bias at work.
Sports fans are happy when their team is Moneyballin' its way to more wins. But winning this way is about people choosing to model, process, decipher, and implement openly available data. It's only the win/loss column that suffers when outcomes don't measure up. However, it's unconscionable when someone is unwittingly placed on a no-fly list and has their civil liberties trampled because the government is Moneyballin' the apparatus of state surveillance. We all suffer when data is collected surreptitiously, modelled in secrecy, and processed in a way that casts a spectre of guilt over us all. This is stripping away our humanity.
Oncologists using Watson to make significant strides in cancer research and treatment is humane. There's nothing nefarious about companies optimizing delivery routes and helping reduce the amount of carbon polluting our atmosphere. These are good examples of how valuable this technology is. But we're denying our humanness if we simply accept technology that is persistent, pervasive, and pernicious in the name of wishing to optimize our way out of the chaos and uncertainty that makes life an existential challenge. What's worse, this attempt to free ourselves from life's messy quandaries is eroding our freedom of thought.
Artificially intelligent machines are neither accountable nor infallible. They process, they calculate, they do what they are programmed to do. Machines are incapable of exercising intellectual honesty. F. Scott Fitzgerald wrote, "the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function." If those who train the machine aren't up to performing Fitzgerald-like mental contortions themselves, then it's unlikely such ambiguity can be coded into the secrets of any algorithm.
To be human in the age of machine learning, it's fair to ask: who is teaching the machines to learn? What qualifications do these teachers have? What do they know about the human experience and condition? Are they prepared to be accountable for the models they create and the choices of data being fed into them? When significant decisions made from machine outputs can have dire consequences in people's lives, demanding more accountability can't be an act of heresy or treason.
Human learning is largely a social activity. Curiosity is also at the heart of learning. Proclaiming that machines are learning deflects from the fact that someone is training them. As machines don't muse over important human questions such as the meaning of life, our attention should be directed at the fact that they are simply tools. We can learn to think critically. We can also learn to be skeptical. These two distinctly human qualities can be our fulcrum for separating fact from fiction, because it's abundantly clear that some of the tools delivering our news are deeply flawed.
There's no Walter Cronkite bot because a machine doesn't care about earning our trust or communicating with the credibility he did. A bot can't reveal the humanness of a tragic moment like announcing the death of President Kennedy. Describing Neil Armstrong's first step on the Moon wasn't something mechanical; it was a seminal event. Revisit Cronkite's 1968 commentary on the Vietnam War and recognize that a bot can't communicate with the depth or gravity he delivered. Instead, today we get Facebook censoring one of that war's most important images because a machine sees it as child pornography.
In the race to optimize eyeballs and clicks for cash, what we’re getting in return is the erosion of authority. Losing touch with any authorial credibility is making news meaningless. Reducing news and information to a clickbait commodity means facts have absolutely no sublime value. Investigating the facts, and holding our leaders and institutions accountable becomes little more than a ridiculous act. The algorithms driving our news are machines of bias optimization. News feeds are an expansive chasm of polarity. With opinion masquerading as knowledge, and dogma being construed as truth, humanity is at risk.
Machines are voraciously consuming our data, and we're getting less value in return. The push to optimize and personalize our news is turning the world wide web into something more like a miniature world. What we see, what we think, and the language by which we define our very existence is dissolving around us. By giving our language over to algorithms, these non-human agents are pushing us toward the precipice of an Oceania-like abyss. In his novel 1984, Orwell writes, "the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thought-crime literally impossible, because there will be no words in which to express it."
My deep concern is that the mechanization of our language and streams of free flowing Newspeak will be the dissolution of dissent. We will be facing a collective social failure if we lose our ability to question and hold accountable those running the most powerful and important institutions. I approach each day with curiosity and great hope that we have the technologists, designers, and engineers who care about putting humanity first. Putting humanity into the heart of their machines will make it possible to imagine a more equitable world, and one where we can flourish in a richer web of human knowledge.
If we think about desire, erotic love, attraction, and affection, then Cupid is the mythical embodiment of what's messy about being human. Cupid's arrow is symbolic of our tools. Addressing the question of what it means to be human in the age of machine learning and artificial intelligence is about trusting ourselves to be human, getting over our flaws, getting comfortable with the unknown, and hosing ourselves off when it gets messy. I said screw the algorithm and trusted myself. I'm happy I did, because choosing to date a 70 has become a meaningful relationship. With matters of the heart, it's always good to remind ourselves that machines don't dance.
“In order to make progress, one must leave the door to the unknown ajar.” — Richard Feynman
From John’s pen (cofounder).
Please visit Mentionmapp and explore the Twitterverse soon!