A Rich Conversation on the Complexities of Human Compatible A.I.

Carlos E. Perez
Published in Intuition Machine
Dec 14, 2017
Naples in Long Beach, December 5, 2017

Thanks to Capital One for sponsoring this post.

The day began with an insightful keynote by Kate Crawford, who spoke about the intrinsic biases we find in the data we use to train AI. Her talk can be found on YouTube.

This keynote served as the perfect setup for asking Deep Learning researchers about the role of ethics in their work. People who attend the annual Neural Information Processing Systems (NIPS) conference are typically researchers in the field of neural networks (i.e., Deep Learning). They are folks who work down in the trenches, exploring the theory and the “alchemy” of improving algorithms. I suspect that most researchers spend only a fraction of their cognitive capacity thinking about ethics.

Capital One took the initiative to invite several attendees, including top researchers and engineers from the world’s leading technology firms and innovation hubs — and myself — to a round table dinner to have a conversation about privacy, explainability, and ethics in AI. That is, to discuss the interplay between humans and the AI future we are all building.

This dinner was held at Michael’s on Naples. Naples is a curiously unique part of Long Beach (where the NIPS conference was being held). I arrived a bit early and had the opportunity to take some fantastic photographs while strolling through the neighborhood (see above photo). Honest, free-flowing ideas (and wine!) contributed to a rich discussion. The following are highlights from the main themes the group discussed over the course of the evening.

One of Fortune’s editors was in attendance to kick off the discussion. He described his own frustration interacting with the automation behind Facebook, which had tracked his relationship with his wife through its changes, from becoming engaged to becoming married. He noticed that through these life changes, Facebook’s advertising never changed for him. In stark contrast, the advertising served to his fiancée (and now wife) shifted from “dresses and venues and all this stuff” to baby-related products. His frustration was that although he felt equally invested in the relationship, Facebook did not adjust appropriately to his life changes. Why does this go wrong all the time?

The Cost of Exploration vs. Exploitation

Learning, in the most abstract sense, involves a trade-off between exploitation and exploration.

One guest, a machine learning researcher, remarked that optimization driven by A/B testing has a problem: it converges to local optima. Businesses will not roll out features that increase exploration at the expense of advertising revenue. Advertising is, in general, not good at exploration.

Another deep learning researcher expressed a similar sentiment. When a mistaken decision is expensive, a greedy option is required. A basic understanding of the world may be all that is needed to make money. However, exploration is needed in order to make better decisions in the long run.

In general, there will always be a cost trade-off between creating approximate (but good enough) models of the world and creating more accurate ones.
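To make the trade-off concrete, here is a minimal sketch of the classic epsilon-greedy strategy (my own illustration, not code anyone presented at the dinner): with probability epsilon the system tries a random option, otherwise it exploits whatever currently looks best. The ad variants and click-through rates below are hypothetical.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the best-looking arm."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore: pick a random arm
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

# Hypothetical setup: three ad variants with unknown click-through rates.
true_ctr = [0.03, 0.05, 0.04]
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for _ in range(10_000):
    arm = epsilon_greedy(estimates)
    reward = 1 if random.random() < true_ctr[arm] else 0
    counts[arm] += 1
    # Incremental mean: update the estimated click-through rate of this arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)
```

With epsilon set to zero, the loop becomes pure exploitation and can lock onto a local optimum; raising epsilon buys better long-run estimates at the cost of short-run revenue, which is exactly the trade-off the researchers described.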

The Danger of Misaligned Objectives

The objectives that AI systems are designed to pursue may not be aligned with our human needs.

One guest questioned whether engagement objectives are truly serving us. Cat videos drive up engagement, and “we love to rot our brains.” Have we ended up spending our time in ways we wish we hadn’t? Are we at the point where we should optimize for metrics with different objectives? That is, should we not take into account our long-term goals instead of our immediate desires?

Many people want to quit smoking but cannot figure out how. Similarly, should Facebook give users an option to ask for fewer cats?

A more explorative interaction with our AI may allow for a deeper conversation that reveals how we truly want to spend our time. The guest mentioned an interesting website (http://www.timewellspent.io) that explores how our society is being hijacked by technology.
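As a purely illustrative sketch of what “optimizing for metrics with different objectives” could look like, consider a composite reward that blends immediate engagement with a long-term well-being signal. Everything here, including the weight `lam`, is a hypothetical of mine, not a proposal from the dinner:

```python
def composite_reward(engagement, long_term_value, lam=0.7):
    """Blend short-term engagement with a long-term well-being signal.

    lam is a hypothetical trade-off weight: lam=1.0 recovers today's
    engagement-only objective, while lam=0.0 optimizes purely for the
    goals a user says they care about (e.g., "fewer cat videos").
    """
    return lam * engagement + (1 - lam) * long_term_value
```

The hard part, of course, is not the formula but defining and measuring `long_term_value` in the first place.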

The Consequences of AI Decision Making

The consequence of an AI-based recommendation may be nothing more than a decision that falls into a gray area.

One guest, a machine learning researcher focused on ethics and ML for good, remarked that it is unclear what the ideal outcome should be. Many males perhaps don’t care about the advertising served to them and defer to their partners on purchasing decisions. One’s optimal outcome may be very different from what the stereotype suggests. However, it would be wonderful if we could signal this difference to the AI. It is difficult to quantify when these value-driven metrics succeed or fail.

There are systems whose consequences may be much worse than not being served the right advertising. Take, for example, the “Trolley Problem,” which involves decisions of life and death.

Source: http://nymag.com/selectall/2016/08/trolley-problem-meme-tumblr-philosophy.html

Who Owns AI’s Objective Function?

One can use online ads to influence people to make decisions (decisions that may be against their own interests), remarked another seasoned researcher and engineer at the table. You can look at this as an optimization problem: measure people’s actions to predict their behavior, then keep tuning by observing that behavior. AI allows you to do this at scale. If your AI is smart enough, you can perhaps get anyone to believe anything. The information we consume shapes who we are and how we act, which gives Facebook and Twitter immense power over us. Perhaps the only reason this is acceptable today is that their AI may be very bad. Unfortunately, it is incrementally getting better.

Other guests pushed back, saying we should decouple this fear from advanced technology. Facebook allows advertisers to target people using basic information; we assume sophisticated technology is behind the malice, when in fact none may be required. The real problem is that we allow entities to target users without any proper oversight.

On another note, one of the guests asked whether the others had read the State Council of China’s guidelines for AI development. The plan is for China to catch up with world-leading AI by 2020 and to dominate all areas of research and commerce by 2030. This guest had read other governments’ plans and believed this to be “the best articulated” of them. The plan encourages the use of credit metrics as a proxy for predicting citizens’ thinking and behavior. This use of “social credit” is explored in greater detail in “China’s Social Credit System: AI-driven panopticon or fragmented foundation for a sincerity culture?”.

AI that Knows More than You Realize

Another researcher and author posed the question, “Who should have governance over our data?” The average person is unaware of this question, and of how decisions are made using their personal data.

It was added that there is information we constantly volunteer to internet services, and a lot of it becomes shadow information. It is not a single picture, but one that dynamically changes over time. Most people are unaware of how much more these services know about them than they think.

Facebook has become a massive distributed brain simulator. It is disconcerting to imagine what could be achieved if people’s entire psychological profiles were captured.

Predicting Failure Edge Cases

Another pivotal question that arose was, “Can you engineer such that there is no failure? Can you avoid failure? Can you engineer your way around failure points?” Air bags were invented to save lives. Unfortunately, because crash-test dummies were modeled on the male form, air bags were discovered to be harmful to women and children.

Most wondered whether this is a solvable problem at all. We simply cannot create the knowledge to enumerate all the failure cases in advance; failures are typically going to be edge cases.

One response was simpler: our goal should be to make our systems safe. These edge cases may be just talking points for scenarios that may never happen. The goal of development is to make systems as safe as possible. In short, we try as best we can, but we cannot prevent all real or imagined failures.

It was also observed that humans have an outsized fear of low-probability events. These new predictive technologies allow us to examine situations “devoid of the outsized bias” we find in humans. Furthermore, the edge cases that actually occur may be very different from the scenarios we fear most; humans fixate on failure modes that may never materialize at all.

For example, there is no ethics module in the design of airplane autopilots; it is all about safety. The field is highly regulated, and we can expect this kind of regulation to develop for self-driving cars as well.

AI as a Mature and Regulated Discipline

One of my fellow dinner guests thought that AI will eventually emerge as a more mature discipline. He gave the example of chemical engineering, which arose from scaling up chemistry: chemical engineering ensures that a 10-ton vat will not blow up. Engineering has devised safeguards and procedures that allow chemistry to be executed at industrial scale. In AI, we are still in the process of figuring this out; we do not yet have the engineering for AI. This goes beyond understanding the fundamentals and includes scaling as well as the human component. As some guests put it, we are not yet at the level of maturity reached by other engineering sciences.

It was also noted that banking is an example of an industry that is already strictly regulated. Because of that heavy regulation, banks already have processes in place to check that their algorithms’ predictions are unbiased.

One guest pointed out that mature engineering typically involves systems built for single tasks, whereas in AI we are dealing with the complexity of handling many different tasks. AI may eventually provide us with a framework for guaranteeing qualities that apply across many different tasks.

Human Compatible Objective Functions

How can we develop AIs that improve the human condition?

Some of our guests speculated about the idea of social networks being used to improve the governance of nation-states. Can governments define new kinds of metrics, perhaps ones that improve democracy? Can society drive a metric that helps everyone?

In democracies, casting a vote can be expensive. Could we build more liquid markets by providing citizens a way to delegate their votes? How can we get more involvement from the citizenry?

The book I had just finished writing, “The Deep Learning AI Playbook,” has a final chapter on “Human Compatible AI.” However, it does not capture as rich a conversation as I found at this dinner. The wisdom of collective intelligence transcends what a single mind is able to explore. Human compatibility with AI is a vast topic, and I am glad Capital One sponsored this dinner with the simple intention of having a conversation about a very important topic.

I must say, the sea bass goes quite well with white wine sprinkled with some random philosophical ideas.
