Thinking Differently About A.I.

The field of artificial intelligence (AI) has seen significant success at solving well-defined problems. Yet, so far, little progress seems to have been made toward creative problem-solving.

It is often said that when a problem resists solution, the reason may be that we are solving the wrong problem. If that is the case for AI, perhaps we should start by asking what the right question would be.

According to Ellen Ullman, the author of Life in Code, the concept of abstraction embedded in our notion of AI needs to be taken seriously, since AI and machine learning can be described as core techniques of abstraction. A model’s parameters are a particular way of representing complexity as rules for a machine to follow. While systems that achieve ever greater accuracy on tasks such as image recognition are highly praised, it is often forgotten that those same systems can harm human beings when they give wrong answers. Human beings err as well, but a machine that cannot express wonder, or admit to being clueless by saying “I don’t know”, can easily leave important things out.
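To make the point concrete, here is a minimal sketch in Python of a classifier that is allowed to say “I don’t know”. The labels, the logits, and the 0.75 confidence threshold are illustrative assumptions, not part of any particular system:

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores into probabilities."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def classify_or_abstain(logits, labels, threshold=0.75):
    """Return a label only when the model is confident enough;
    otherwise return "I don't know" instead of guessing."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["cat", "dog", "fox"]
print(classify_or_abstain([4.0, 0.5, 0.3], labels))  # confident -> "cat"
print(classify_or_abstain([1.1, 1.0, 0.9], labels))  # uncertain -> "I don't know"
```

The design choice is simply to treat low confidence as a valid answer rather than forcing the model to guess.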

According to David Chapman, one of the main issues with AI is our lack of understanding of how to evaluate the progress made so far. AI is an interdisciplinary field encompassing not only science and engineering, but also design, philosophy, and math. To these five fields, Chapman adds a sixth, ‘spectacle’, which refers to providing good demos. Science aims at developing predictive models, while engineering aims at turning those models into pragmatic applications.

So far, our abstractions of intelligence are far from functioning as scientific models or yielding useful engineering results. Lacking an exact definition of intelligence, we cannot fully evaluate these abstractions; we can only watch demos and judge whether something looks intelligent or not. That does not suffice for making progress.

From the data point of view, big data is made up of individuals, which is a warning against too much focus on abstraction. The history of a particular data set also reflects a history of discrimination and bias. Forgetting that data refers to individuals would be a grave mistake, because data, as a form of abstraction, provides the ground for decision-making. In the same vein, automated abstraction engines project these same biases onto the future. So, to build fair systems, those systems must be able to do more than simply accept or reject the way the data abstracts individuals.
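A toy sketch can show how this projection happens. Everything below is hypothetical: the population, the approval rates, and the “abstraction engine” (which just memorizes historical frequencies) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical loan decisions: group 1 was approved less often
# than group 0 at the SAME qualification level (past discrimination).
n = 10_000
group = rng.integers(0, 2, n)            # group membership: 0 or 1
qualified = rng.random(n) < 0.6          # same qualification rate in both groups
approve_rate = np.where(qualified, np.where(group == 0, 0.9, 0.6), 0.1)
approved = rng.random(n) < approve_rate  # biased historical labels

# A naive "abstraction engine": predict future approvals from the observed
# frequency in each (group, qualified) cell of the historical data.
for g in (0, 1):
    mask = (group == g) & qualified
    print(f"group {g}, qualified: learned approval rate = "
          f"{approved[mask].mean():.2f}")
```

Both groups are equally qualified by construction, yet the engine faithfully reproduces the historical gap, projecting yesterday’s bias onto tomorrow’s decisions.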

Our ability to sort things into categories is a form of abstraction the human mind is good at. On the other hand, we also accomplish things that AI abstractions cannot model. We sometimes get bored, and we sometimes find things interesting. Other times, we change our minds or make mistakes. All of these are part of being human. How would these other capacities be reflected in AI?

The real value of any trained system lies in its being allowed to make errors. A system that fits its training data perfectly is overfitting, which points to bad performance on real-world data. So would it make sense to expect flawless performance in the real world when we do not expect it even on the training data? Perhaps we need to reject all abstractions and start to think differently about AI.
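The overfitting point can be demonstrated in a few lines. This is a standard illustration, not anything specific to the sources above: a degree-9 polynomial can pass through all ten noisy training points exactly, while a straight line tolerates some error on them, and comparing held-out error shows which abstraction generalizes:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(42)

# Noisy samples of an underlying linear relationship y = 2x.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test  # noise-free ground truth for evaluation

for degree in (1, 9):
    # Degree 9 with 10 points can interpolate the noise exactly;
    # degree 1 can only follow the underlying trend.
    p = Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((p(x_train) - y_train) ** 2)
    test_err = np.mean((p(x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_err:.4f}, "
          f"test MSE = {test_err:.4f}")
```

The model with near-zero training error is typically the one that fails on held-out data, while the model that tolerates some error on its training data generalizes.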


Originally published at www.datadriveninvestor.com on January 15, 2019.