Ernst Ludwig Kirchner: Szene im Café.

The Robot That Did Not Get Away

Joop Ringelberg
May 17, 2019

Or: Why I Am Done With Sharing

In the 1960s and ’70s, researchers tried to put AI on a sure footing with mathematical logic. They would, for example, equip a robot with logical truths that would help it to start a car¹, like: “if the ignition key is turned right, the motor runs”, and: “when the clutch is up, the car moves forward”. This works, except, of course, when the electrical system is broken. Our robot would not know it and would simply let up the clutch and carry on with everything its logic prescribed, never getting anywhere.

Under such particular circumstances, its logical sentences are not true. So they were qualified: “if the ignition key is turned right, and the electrical system is not broken, the motor runs”. And now, when the motor does not run, the robot has something to act on, because it can deduce that there is a problem with the electrical system.

But it is still stuck when the petrol tank is empty. Or when the tyres are flat. Or when a Martian anthropologist has just zapped away the entire motor block to take it home for analysis. There is simply no end to the list of necessary qualifications: in the real world, just about anything can interfere.
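
To see how quickly this gets out of hand, here is a toy sketch in Python. It is my own illustration, not the original researchers’ formalism, and every predicate name is made up; the only point is that the list of exceptions never closes.

```python
# Toy illustration of the qualification problem (hypothetical predicates).
# The robot can only conclude anything after ruling out every exception,
# and the list of exceptions never ends.

def motor_runs(world):
    return (
        world["ignition_key_turned_right"]
        and not world["electrical_system_broken"]
        and not world["petrol_tank_empty"]
        and not world["motor_block_zapped_by_martian"]
        # ... and so on: just about anything can interfere
    )

def car_moves_forward(world):
    return (
        motor_runs(world)
        and world["clutch_up"]
        and not world["tyres_flat"]
        # ... again, no end to the qualifications
    )
```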

And what’s even worse: to establish that the motor will run, so that it can let up the clutch, the robot would need to check all of those qualifications beforehand.

Obviously, humans deal with this qualification problem in a different way. They reason differently: they operate on defaults, for example, assuming all is well until it isn’t. And so the field of common sense reasoning was established in AI. Because mathematical logic wasn’t up to it, researchers tried to mend it, creating new forms of logic that would adequately capture human reasoning.
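
A default rule, by contrast, is cheap to use: conclude that all is well unless something you actually know contradicts it. A minimal sketch, again with made-up names:

```python
# Minimal sketch of default-style reasoning (hypothetical names).
# Instead of verifying every qualification up front, the robot assumes
# the motor runs and retracts that conclusion only when it learns otherwise.

def motor_runs_by_default(known_problems):
    return "motor failure" not in known_problems

assert motor_runs_by_default(known_problems=set())                   # all is well, so act
assert not motor_runs_by_default(known_problems={"motor failure"})   # revise after observation
```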

Today, we live in a world that operates on data: facts that we treat as logical truths about ourselves, like place of birth or gender. Such data is extremely useful and therefore has great value. Consequently, big companies gather as much of it as they can. We’ve become painfully aware of this in recent years, because it has upset the power balance between citizens and organisations, to the point of threatening democratic values.

Luckily, efforts are being made to let citizens regain control over their data. Berners-Lee’s Solid project and, closer to home, the DECODE project instigated by the Next Generation Internet initiative of the EU are good examples. Instead of leaking data all over cyberspace, citizens would put their data into vaults and then decide whom to share it with. DECODE, for example, will have people establish smart rules to express who is qualified to see what data. In this explainer, the authors (Theo Bass, Paulus Meessen) imagine scenarios like “Share with local government only”, “Share with IP addresses in my city only”, or “Share for 15 minutes”.
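
To make the parallel with the robot explicit, here is a hypothetical sketch of what such a vault rule might look like once those scenarios are strung together. This is not DECODE’s actual rule language, just an illustration; all names are invented.

```python
# Hypothetical vault access rule in the spirit of the DECODE scenarios
# (invented names, not DECODE's actual smart-rule language).
from datetime import timedelta

def may_see_my_bank_account(requester, now):
    return (
        requester["is_local_government"]
        and requester["ip_address_in_my_city"]
        and now < requester["grant_issued_at"] + timedelta(minutes=15)
        # ... and whatever qualification tomorrow turns out to require
    )
```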

Sound familiar? It should, as it is the qualification problem all over again. And again, there is no end to the list of possible qualifications that specify with whom I should share, say, my bank account.

But wait a second. If it’s the same problem, let’s apply the same solution! So let us turn to the results obtained by AI in the ’80s to specify what to share with whom. Yes, but … except that AI never delivered those results. The problem turned out to be ill-defined, many-headed, and computationally intractable to boot. The field split into numerous factions. No really practical results were obtained, and the AI winter ensued².

The real problem is that we simply still do not know how to capture human reasoning in more or less logical terms. So, researchers in the vault-and-access-rules field, beware! Really clever people spent a decade on it, with no results to take home.

A pragmatic mind might think: but do we really need to? For all practical purposes, won’t DECODE rules simply do the job? Never mind the logical finesse: many situations are not so open-ended at all. No Martians seen since the ’80s, for example.

And they would have a point. The crucial observation is that we often have a pretty good sense of the context in which a rule holds. Mathematical logic is about universal truths: sentences that are true anywhere, always. While such truths would certainly be useful, we can make do with lesser truths that work in specific circumstances.

This plays nicely with something else we all have at the back of our minds: information needs context, too. Data has no intrinsic meaning; it is people who interpret it and give meaning to it by taking action. Information is just physical objects that people pass to others to convey messages. Think letters, clay tablets, sound waves, hard disks³.

So we need context to make sense of information. But what is context? Is it a place? A group of people? Context is a rather vague, catch-all concept when you look hard at it. We need more focus.

Stepping back allows us to see that information exchanged on the internet is instrumental to co-operation. Co-operation involves a circle of participants that act together. It is what they do, rather than who they are, that dictates what they need to know. So sharing is inherently dependent on action in co-operation.

Consider how far we’ve come. Apparently, we need to work out who does what in a co-operation to be able to specify who should have access to what information. But by fully modelling the co-operation, aren’t we putting the cart before the horse? Our work on Perspectives shows that, from such a model, working software can be generated that supports the co-operation. We’re no longer specifying access rules for an application; we’re creating the application itself.
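
As a rough impression of what that shift means, here is a toy data structure. It is emphatically not the actual Perspectives modelling language, and the example is invented; it only hints at the idea that what a role may see follows from the part it plays, rather than being listed as separate access rules.

```python
# Toy model of a co-operation (invented example, not the Perspectives language).
# What a role may see is derived from the part it plays, not declared separately.

cooperation = {
    "context": "SellMyCar",
    "roles": {
        "Seller": {"actions": ["offer car", "accept bid"],
                   "perspective_on": ["Buyer's bid"]},
        "Buyer":  {"actions": ["place bid"],
                   "perspective_on": ["Seller's asking price"]},
    },
}

def visible_to(role):
    # A role gets access to exactly the information its actions require.
    return cooperation["roles"][role]["perspective_on"]
```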

Put that way, all of a sudden it seems rather silly to try to describe, once and for all, who should be allowed access to a particular piece of data. Can I really think, in advance, of all the circumstances in which I will work with someone who needs to know my age? Or whom I am married to?

As long as we begin our thinking with ‘data’ and where it may go, we won’t get very far — just like the robot, sitting in its car, checking in advance all that might be wrong. Data is the wrong end of the equation. We need to start with what we want to do, with whom, and how to act in specific situations. Information flow follows. It does not lead!

This is the sixth column in a series. The previous one was: Perspectives Beyond Blockchain. Here is the series introduction.

¹ Well, that is a thought experiment, really; more realistically, they would have a robot try to navigate a room.

² This is not to say that no great work was done. The Semantic Web OWL languages might be considered a spin-off, profiting as they do from fundamental work on the relation between expressiveness and computability.

³ Some people argue I confuse data and information, seeing the former as the embodiment and information as more abstract. However, I disagree. I’ve elaborated this point to some extent in Stop Talking About Data!
