Kumiko Tanaka-Ishii by @SebastianNavasF

Kumiko Tanaka-Ishii — Computational Semiotics Pioneer

Alvaro Videla
A Computer of One’s Own
5 min read · Dec 20, 2018


Today’s pioneer is a Professor at the University of Tokyo who has devoted her career to the study of language and programming. In 2010 she published the award-winning book Semiotics of Programming, which brings a new perspective to the analysis of programs and how we write them.

One interesting aspect of her book is what she calls the ontologies of Being vs. Doing. Being denotes entities by what they are, while Doing denotes entities by what they do or what can be done to them. How is this related to programming?

Being vs. Doing, or Why Favor Composition Over Inheritance

Consider Object-Oriented Programming (OOP) with classes. In this case we work with objects whose internal structure comes from the class they instantiate. The object’s functionality is determined by the class, which means a programmer who wants to use it effectively needs to know about the class’s internal structure. For the programmer this implies the extra burden of knowing more than they should about implementation details.

Now let’s consider Interfaces, Abstract Data Types, or Haskell-style type classes. Here we have a set of declarations of functionality, a protocol, but not the actual implementation. In this case a programmer only needs to know what the object can do, or what can be done with it, information that is specified by the Interface. Consider a Haskell type for which it makes sense to test for equality, expressed by conforming to the Eq type class: you know you need to provide the functions == and /=, and that’s it.
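A minimal Haskell sketch makes the point; the Point type below is a made-up example, not one from the book. All a caller needs to know is the “doing” contract that Eq declares.

```haskell
-- A hypothetical Point type made comparable by conforming to Eq.
-- Callers rely only on the protocol (==, /=), not on how Point is built.
data Point = Point Int Int

instance Eq Point where
  Point x1 y1 == Point x2 y2 = x1 == x2 && y1 == y2
  -- (/=) falls back to its default definition in terms of (==)

main :: IO ()
main = print (Point 1 2 == Point 1 2)  -- prints True
```

Nothing about Point’s internal representation leaks into that contract; another type could satisfy Eq in a completely different way.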

Of course modern OOP languages mix these two approaches, but it’s interesting to note that without Interfaces we can only pretend to agree on what a class does or what could be done to it; that is, we assume that it implements certain methods. If the class conforms to an Interface, then we are sure that certain methods and return values will be provided by the object.

Finding a balance between class inheritance and working with interfaces presents an interesting problem, as Tanaka-Ishii remarks:

One common problem among programmers is how to integrate and balance the ‘being’ and ‘doing’ approaches to simultaneously maximize the advantages of code sharing and task sharing.

As software projects increase in complexity, Tanaka-Ishii says that we should move towards the Interface or the Abstract Data Type, since we want to be able to plug objects together without necessarily knowing how they are implemented.

Under the ‘being’ ontology a programmer has the total responsibility to know what is going on; today, under the ‘doing’ ontology, the programmer’s knowledge can be limited to the predefined communication protocol, or the interface.

If we bring this discussion to a bigger scale, we can consider how we orchestrate microservices today, where the API they offer is all we need to know about them, leaving implementation details to other developers, possibly belonging to different teams or even companies.

If you want to dig deeper into this problem, here’s her paper ‘Being’ and ‘Doing’ as Ontological Constructs in Object-Oriented Programming.

Now let’s take a look at her latest research project.

A new tool for assessing how AI adapts to human societies

In an interview for HITE, Tanaka-Ishii talks about her research group’s new project, which tries to grade the behavior of AIs and the impact they can have on markets.

This project follows from two fundamental questions asked by Tanaka-Ishii:

“How can we stop AI that is motivated purely by profit?”, and “How can we design evaluation methods to prevent such behavior?”

These kinds of questions seem more and more relevant these days, as people start talking about AIs as if they were black boxes that provide answers for almost everything, from facial recognition used for law enforcement to self-driving cars. When an error by the AI could cost a human life, the blame falls on the algorithm, as if that exempted us from all responsibility, whether for implementing it or for putting it into practice unrestrained.

Now let’s consider an AI that starts to behave recklessly in an investment market, up to the point where it produces a financial crisis. Could we tell in advance that an AI is misbehaving, and prevent it from wrecking the economy?

According to Tanaka-Ishii’s research, this should be possible by analyzing AI behavior with a power-law statistical model. How does this work?

She explains that many natural and social systems follow power laws. More specifically in language, her field of specialization, she notes that if we plot the frequencies of words in a language like English against their frequency ranks, a power law appears. By frequency of words she means that “the” is the most used word in English, followed by words like “of” and “and”. She says that if we analyze a large corpus of text, or even the language of a child, we will see such power laws appearing.
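As a toy illustration, and not her actual experimental setup, here is a small Haskell sketch that builds the rank-frequency table such a plot is based on:

```haskell
import Data.List (group, sort, sortBy)
import Data.Ord (Down (..), comparing)

-- Count each distinct word, then rank the counts in descending order,
-- producing (rank, word, count) triples.
rankFrequencies :: String -> [(Int, String, Int)]
rankFrequencies corpus =
  zipWith (\rank (w, n) -> (rank, w, n)) [1 ..]
    . sortBy (comparing (Down . snd))
    . map (\ws -> (head ws, length ws))
    . group
    . sort
    $ words corpus

main :: IO ()
main =
  mapM_ print . take 5 $
    rankFrequencies "the cat and the dog and the bird saw the cat"
```

For a large English corpus, plotting count against rank on log-log axes gives the roughly straight line characteristic of Zipf’s law, the power law she refers to.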

On the other hand, if we analyze the text produced by AIs, until recently they would almost never produce a power law, so there is something unnatural about them, for lack of a better word. Why is it so important that they behave this way?

It is still unknown why this kind of distribution happens in nature, but it has been observed to hold for earthquakes, market prices, language, and the population distribution of cities.

So how are Tanaka-Ishii and her group trying to assess AI systems?

We are considering how to create a system that detects any AI systems that do not produce power laws, so that for example we could exclude AI with the potential to behave recklessly in the field of investment. In stock investments, if an AI keeps on pursuing short-sighted maximum gains, it is likely that its behavior will destroy the power law, and this could risk the whole market. We believe it might be possible to detect these signs in advance by using methods that use power laws as the criteria for making judgments.
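To make the idea concrete, here is a minimal sketch, my own assumption rather than the group’s actual detector: fit the slope of log frequency against log rank and flag signals whose exponent strays far from the Zipf-like value of roughly -1.

```haskell
-- Least-squares slope of log(frequency) vs. log(rank).
-- A text-like, Zipf-distributed signal yields a slope near -1;
-- a flat or erratic slope suggests the output does not follow a power law.
slopeLogLog :: [(Double, Double)] -> Double   -- (rank, frequency) pairs
slopeLogLog points =
  let xs  = map (log . fst) points
      ys  = map (log . snd) points
      n   = fromIntegral (length points)
      mx  = sum xs / n
      my  = sum ys / n
      cov = sum (zipWith (\x y -> (x - mx) * (y - my)) xs ys)
      var = sum (map (\x -> (x - mx) ^ 2) xs)
  in cov / var

main :: IO ()
main = print (slopeLogLog [(1, 100), (2, 50), (3, 33), (4, 25)])
  -- roughly -1, consistent with a Zipf-like power law
```

A detector along these lines would watch for the moment that slope drifts away from its natural value, the kind of early warning the quote describes.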

This is what I find fascinating about Tanaka-Ishii’s work. By looking into semiotics and philosophy, she brings a powerful point of view that helps us reflect and ask the right questions, from the way we program to how we assess how well an AI behaves in society.

Advent Calendar — Help us make it a book!

From December 1st until December 24th we plan to release one article each day, highlighting the life of one of the many women who have made today’s computing industry as amazing as it is: from early compilers to computer games, from chip design to distributed systems, we will revisit the lives of these pioneers.

Each article will come with an amazing illustration by @SebastianNavasF

If you want to see this series become a book with expanded articles and even more illustrations by Sebastián, then subscribe to our newsletter below.

