The Search for Faith in Algorithms

In the now famous Game 2 of the 1997 rematch between chess grandmaster Garry Kasparov and Deep Blue, a machine that could evaluate 200 million chess positions per second, an exasperated Kasparov claimed the machine had cheated, believing that another human was playing on its behalf. The match would become the first in which a machine defeated a reigning world champion under standard tournament conditions. Kasparov had been caught off guard by a single move, one so simple in contrast to the machine’s complexity that it seemed un-machine-like, almost human. He couldn’t comprehend what this simple move meant. Could Deep Blue be this confident? Was Deep Blue able to predict his next move with such precision? Kasparov felt he was being tricked and continued to play with skepticism towards a machine that exhibited superior intelligence. After the match, Kasparov was quoted as saying, “I could feel — I could smell — a new kind of intelligence across the table” [1].
Twenty years later, that is the same skeptical relationship we have towards artificial intelligence. Understandably so. How can we trust a machine that thinks in ways far more advanced than any human? And how can we trust its findings when we can’t understand how it computes so much so fast? Although we have yet to interact with true AI, the algorithms found in search engines may already be our first point of contact with such machines. The future is inevitable, and it has the potential to be better than what we could imagine: likely far safer and more productive. But first this adaptiveness requires us to surrender human control and human judgment to an emotionless, valueless machine that runs on difficult-to-understand algorithms. And even though there are already instances where humans have learned to trust a machine, from simple ones such as ATMs to more complex ones such as GPS systems, the next evolutionary step in technology is leading us beyond minimal, task-driven machines towards machines that can weigh vast multitudes of probabilities, conclude the best possible outcomes, and from those outcomes learn and perform in ways far superior to what is feasible for a human mind. Instead of just performing surgery, deep-learning machines will be able to diagnose a patient more accurately than any doctor, by understanding the patient not just physically but perhaps also emotionally.
Soon we may see a search engine with the ability to learn from its users and understand their every need. Already, search engines generate answers not only to fundamental factual questions, such as how far we are from Mars, but to deeply complex ones, such as what values are, and to personal ones, such as whether I did the right thing by getting married too young. We then ask ourselves: how do I trust these answers? More importantly, how can I trust these results from a search engine? Why did it give me these links and not others? Are we better off in the hands of a deep-learning algorithm? And, to return to my original topic, how will the concept of “trust” begin to change as we release some of our control?
Algorithms embedded in search engines have begun to change how we experience the internet by tracking our online activities. The most popular search engine, Google, processes over 40,000 search queries every second, adding up to 3.5 billion searches per day and upwards of 1.2 trillion per year [2], all captured and retained in databases to make better predictions. This level of intrusiveness may understandably worry individuals: how personal data is being used, and by whom, is troubling. Online algorithms used by search engines and social networks have led to filter bubbles, the spread of fake news, and opportunistic marketing, among many other things. But alongside the faults there have also been benefits. Search engines have become much smarter at finding what we’re looking for, from the top websites to the most obscure, which makes web surfing incredibly rewarding. What we will soon begin to see is that the more a user uses a search engine, the more its algorithm learns about the user, eventually learning enough to conclude very specific things about the individual and, later, society at large; it is designed to draw conclusions from our tweets, Facebook posts, message boards, purchases, music choices, books, and every other inch of the web we touch and where we leave our prints.
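Those volume figures are at least internally consistent with one another, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the search-volume figures cited above [2].
QUERIES_PER_SECOND = 40_000
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

per_day = QUERIES_PER_SECOND * SECONDS_PER_DAY
per_year = per_day * 365

print(f"per day:  {per_day:,}")   # 3,456,000,000 -- roughly the 3.5 billion cited
print(f"per year: {per_year:,}")  # 1,261,440,000,000 -- roughly the 1.2 trillion cited
```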
An intelligent enough search engine, trained through unsupervised learning, will one day begin to assist users before they realize they need assistance. This level of dependency will lead us to trust these algorithms more and more. Whether this “trust” is genuine, coerced, or simply blind will be up for debate; nevertheless, online algorithms may very well be the first method of facilitating a relationship between humans and AI: an AI that monitors and understands us better than we understand ourselves, guiding us, making decisions for us, and defining our culture and way of living. If this all sounds like technological determinism, perhaps it is, but on steroids. Soon we may begin to think of the “internet” not so much as computer connectivity but as a synthetic, omnipresent being with its own intelligence, shaped by the data we entrust to it.
Yet in order to gather all this data, users need to submit it first. Even with its imperfections, big data has lately become a wild craze; could it be that we’re trusting it too much, too fast, without knowing enough about possible unintended consequences? Yuval Noah Harari, the celebrated author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, has coined the term “Dataism,” which he describes as a new shared religion that no longer worships gods or man but data-driven algorithms. As he recently stated in an essay for the Financial Times: “Just as divine authority was legitimized by religious mythologies, and human authority was legitimized by humanist ideologies, so high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimizes the authority of algorithms and Big Data. This novel creed may be called ‘Dataism’. In its extreme form, proponents of the Dataist worldview perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system — and then merge into it.” He adds, “Dataists further believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives” [3].
There is some truth to the idea that academics, scientists, and entrepreneurs are obsessed with data. Harari’s other concern, that humans will lose all authority and become obsolete, may sound extreme, but he may be right in believing that at some point we will have greater confidence in algorithms than in humans; at that point, our trust will lie not with a person of expertise but with a machine, forever making us question human decree while easily accepting the output of a non-living thing such as a search engine. Google is already using a deep-learning system designed to go far beyond simple algorithms restricted to a set of human-created limits. In the past few years Google has applied RankBrain, which generates remarkable results even out of ambiguous words and sentences, giving users the results they intended to search for but weren’t sure how to word properly. Using “word vectors,” RankBrain infers what the user probably meant and makes a much smarter guess than any human engineer ever could. The improvements led Sundar Pichai, Google’s CEO, who until then had been wary of applying these kinds of machines for fear of losing control, to say, “Machine learning is a core transformative way by which we are rethinking everything we are doing” [4]. This includes Google’s own personal assistant, appropriately named Assistant.
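RankBrain’s internals are proprietary, but the general word-vector idea is simple: queries are embedded as points in a vector space where similar meanings land close together, so an oddly worded query can be matched to the well-understood query it most resembles. Here is a minimal sketch in Python, with toy, hand-invented vectors standing in for learned embeddings:

```python
import numpy as np

# Toy three-dimensional "word vectors," invented purely for illustration;
# real systems learn hundreds of dimensions from billions of documents.
known_queries = {
    "ruler of germany":    np.array([0.90, 0.80, 0.10]),
    "german chancellor":   np.array([0.85, 0.82, 0.15]),
    "german food recipes": np.array([0.10, 0.70, 0.90]),
}

def cosine_similarity(a, b):
    """Angle-based similarity: close to 1.0 when vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query_vec, known):
    """Return the known query whose vector lies closest to the new one."""
    return max(known, key=lambda q: cosine_similarity(query_vec, known[q]))

# Imagine the awkward query "who's the boss of germany?" embedded here:
awkward = np.array([0.88, 0.79, 0.12])
print(best_match(awkward, known_queries))  # -> "ruler of germany"
```

The ambiguous phrasing never has to match any stored query word for word; proximity in the vector space does the work.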
At the same time, Facebook has developed DeepText, described by the company as “a deep learning-based text understanding engine that can understand with near-human accuracy the textual content of several thousand posts per second, spanning more than 20 languages.” It adds, “DeepText has the potential to further improve Facebook experiences by understanding posts better to extract intent, sentiment, and entities (e.g., people, places, events), using mixed content signals like text and images, and automating the removal of objectionable content like spam” [5]. This is the beginning of an algorithm that can read our meaning: instead of requiring us to be specific, it understands, rather than merely suggests, what we mean, even when we find it difficult to express our intentions. Many other examples of how social networks and search engines are experimenting with their algorithms can be found across the internet, including on sites devoted to academic research and other specialized topics.
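DeepText itself is a proprietary deep-learning system, but the underlying task, learning to extract intent from raw text, can be sketched with far simpler tools. The following minimal illustration uses scikit-learn, with a tiny invented training set standing in for the millions of posts a real system would learn from:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example posts and intent labels, for illustration only.
posts = [
    "I need a ride to the airport tonight",
    "anyone know a good taxi service around here?",
    "just finished an amazing dinner downtown",
    "best pasta I've had in years, highly recommend",
]
intents = ["wants_ride", "wants_ride", "shares_experience", "shares_experience"]

# Bag-of-words plus logistic regression: a crude stand-in for a deep model,
# but the same supervised idea of mapping text to an inferred intent.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, intents)

print(model.predict(["can someone drive me to the station?"]))  # likely 'wants_ride'
```

A deep model like DeepText replaces the bag-of-words features with learned neural representations, which is what lets it generalize across phrasings and languages.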
As I said earlier, the public, like Kasparov, is still apprehensive about deep-learning machines. The benefits of the technique are obvious, so perhaps something more than privacy, and the politics that come with it, explains why humans currently lack faith in such algorithms. There may simply be an innate bias in all of us against trusting an algorithm over human judgment. Researchers at the University of Pennsylvania released some interesting findings in a paper titled “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” in which they state, “People are more likely to abandon an algorithm than a human judge for making the same mistake. This is enormously problematic, as it is a barrier to adopting superior approaches to a wide range of important tasks. It means, for example, that people will more likely forgive an admissions committee than an admissions algorithm for making an error, even when, on average, the algorithm makes fewer such errors” [6]. The individuals tested knew that the algorithm outperformed humans, and they still did not trust it.
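The statistical point at the heart of that finding is easy to demonstrate: an algorithm can err often enough to be seen failing while still erring far less than a human judge. A minimal simulation, with error rates invented purely for illustration:

```python
import random

random.seed(0)  # reproducible illustration

TRIALS = 10_000
HUMAN_ERROR_RATE = 0.30      # invented rate for the human judge
ALGORITHM_ERROR_RATE = 0.20  # invented rate: better, yet visibly fallible

human_errors = sum(random.random() < HUMAN_ERROR_RATE for _ in range(TRIALS))
algo_errors = sum(random.random() < ALGORITHM_ERROR_RATE for _ in range(TRIALS))

print(f"human errors:     {human_errors} / {TRIALS}")  # roughly 3,000
print(f"algorithm errors: {algo_errors} / {TRIALS}")   # roughly 2,000: fewer overall,
# yet still thousands of visible mistakes for an observer to fixate on
```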
The best way forward is to recognize that these machines are not perfect. Yet ceding human judgment to a deep-learning machine that does not have the capacity to feel, and is itself imperfect, is challenging. Julie A. Shah, who leads the Interactive Robotics Group at the MIT Computer Science and Artificial Intelligence Laboratory, recently told MIT News, “Lack of trust is often cited as one of the critical barriers to more widespread use of AI and autonomous systems.” She adds, “These studies make it clear that increasing a user’s trust in the system is not the right goal. Instead we need new approaches that help people to appropriately calibrate their trust in the system, especially considering these systems will always be imperfect” [7]. She is referring to a two-year study between MIT and the Singapore University of Technology and Design (SUTD), which found that humans are willing to “forgive” an algorithm’s mistakes provided its overall performance is good and reliable.
But even once humans are willing to forgive, there is still a lingering philosophical problem: are we willing to allow search engines, and the deep neural networks that process our personal data, to know us better than we know ourselves? And if we do, will we ever be the same society again? What the internet is asking of us is to trust an irreversible way of living; to give in to a machine that will never deal with the consequences while we’re left to face them, if there are any. Will the divide between modern Western cultures and the rest of the world grow more extreme than ever, and what resentments will that create? Will these divides only be fixed by more technology? Will those without advanced technology trust those who have it?
Before the internet, we only trusted machines to work properly and process information, knowing they were limited and never expecting them to assist us at such an intimate level. Search engines may soon provide us not just with an answer to what we were asking but with a better question than the one we presented. How will this change the way we think of ourselves and for ourselves? If we do trust such sophisticated algorithms, will the trust we have in humans begin to rapidly diminish, and if so, to what extreme? Will the skepticism we now have towards machines one day be turned against other humans? If a search engine tells you your life partner may not be the best fit for you, will you trust it? After all, it knows all of your interests, attributes, preferences, friends, family, lovers, secrets, thought processes, taste in just about everything, and your general way of being. It will know more about you than any other person you’ve ever opened up to. And isn’t this what’s already in our search history? Fragmented pieces of ourselves waiting to be assembled.
The internet consumes us all. We are dutifully tied to it, which makes it different from any other technology, and this relationship may help us rethink what “trust” is and to whom, or what, we give it. Even if these AI-capable search engines fail us in some ways, our deep dependency on the internet may help us overlook their flaws. We have long trusted doing many things online: shopping for clothing, transferring money between bank accounts, submitting our income taxes, figuring out how to get somewhere, sending private emails, and surfing through websites, constantly looking for information while offering our own. And we have long trusted posting just about anything on our social networks, allowing others to see a curated version of ourselves, our better sides. But the algorithms that know us best sit underneath search engines. Holding the billions of things we have asked and the almost unlimited knowledge already gathered, these algorithms serve as the aggregated nucleus of the internet. Their dominance is unmatched by any other piece of the internet.
Until now, what the internet has provided above all is the ability to connect with and trust other people. Certainly, there are wrongdoers who take advantage of the gullible, but overwhelmingly we have trusted the internet so long as we understood another human was on the other side. As deep learning is introduced into search engines, we will slowly begin to trust it; that is, if, just like the chess-playing computer Deep Blue, it begins to act more like a human and less like a machine. The sheer vastness of human knowledge is what has made the internet what it is: a cesspool of human ideas, limited only by its unmanageable size, available for a deep-learning algorithm to build on and from which to gain human-like aptitudes. If we then start to think of deep learning less as Artificial Intelligence and more as Intelligence Amplification, sometimes referred to as Intelligence Augmentation, in which search engines become a technology that strengthens our own intelligence and capabilities, then the willingness to trust these algorithms may come more rapidly. We may begin to think of the deep-learning algorithms inside our search engines as a collaboration between human and machine, where search engines balance being obedient and autonomous at the same time, while our relationship with the algorithms balances trust with both learning and teaching.
In the end, trust is the amount of confidence we give to another human. We have now given it to algorithms, and later we will submit it to artificial neural networks modeled on us, in an effort to enhance our own intelligence. This trust carries a profound instrumental value: there is utility behind it, and whether machines merit it will depend on the level of competence they deliver. If we take search engines as the standard by which to judge advanced algorithms, and later deep learning, then possibly they have already warranted human trust. Search engines, although imperfect, have made us into superhumans with immediate, direct knowledge and with the ability to connect with others in ways we could never have imagined.
[1] Shicoff, Brian E. “Deep Blue’s Intelligence.” Columbia University. http://www.columbia.edu/cu/moment/v0/041796/deepblue.html
[2] “Google Search Statistics.” Internet Live Stats. http://www.internetlivestats.com/google-search-statistics/
[3] Harari, Yuval Noah. “Yuval Noah Harari on big data, Google and the end of free will.” Financial Times. https://www.ft.com/content/50bb4830-6a4c-11e6-ae5b-a7cc5dd5a28c
[4] Clark, Jack. “Google Turning Its Lucrative Web Search Over to AI Machines.” Bloomberg, 26 Oct. 2015. https://www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines
[5] “Introducing DeepText: Facebook’s text understanding engine.” Facebook Code. https://code.facebook.com/posts/181565595577955/introducing-deeptext-facebook-s-text-understanding-engine/
[6] Dietvorst, Berkeley J.; Simmons, Joseph P.; Massey, Cade. “Algorithm aversion: People erroneously avoid algorithms after seeing them err.” Journal of Experimental Psychology: General, Vol. 144(1), Feb. 2015, 114–126.
[7] DeLaughter, Jesse. “Building better trust between humans and machines.” MIT News, 21 June 2016. http://news.mit.edu/2016/building-better-trust-between-humans-and-machines-0621
