GIF from Tech Noir of Dune (1984)

Data Science, AI and Dune

The eerily familiar problem of the Mentats and the nature of adversarial attacks in machine learning

Alex Moltzau
Published in DataSeries
5 min read · Sep 17, 2020

Why talk of Data Science, AI and Dune in the same breath?

By the time you have read this article, I hope you will know better.

If you have not seen the trailer for the upcoming release of Dune in December 2020, I suggest you do so:

Even more so, I would recommend that you read the first book in the series by Frank Herbert.

A hot tip: you can do so for free at this online archive.

I will not cover the book in detail. Rather, I will highlight a few momentous aspects of this book, first published in 1965.

Namely, I will discuss the Butlerian Jihad and the Mentat.

These may seem like strange terms, but stay with me.

You will come to understand, if you are not already familiar with these two concepts, that they are linked together.

Dune is set in a fictional universe, and within it the Butlerian Jihad, although not immediately named, quickly becomes apparent:

Butlerian Jihad is a conflict taking place over 11,000 years in the future (and over 10,000 years before the events of Dune) which results in the total destruction of virtually all forms of “computers, thinking machines, and conscious robots”

That is, one of the first imaginable things happened: man fighting robot.

Powerful artificial general intelligence (AGI) was made and a fight ensued.

Yet, the god of machine-logic was overthrown by the masses and a new concept was raised.

“Man may not be replaced.”

At the beginning of the first book, you can see a question raised by Paul (the protagonist) as he is being tested for his humanity.

“Why do you test for humans?” he asked.

“To set you free.”

Not long after this, a quote proclaimed as historic is repeated in the book.

Screenshot of a Dune eBook taken by the author on the 17th of September.

Anti-AI laws had been put into effect; the punishment for owning an AI device of any kind was immediate death.

“Thou shalt not make a machine

in the likeness of a human mind”

Thus, when AI is banned in this fictional universe, humans are instead trained to computer-like capabilities of computation.

The humans trained in this manner are called Mentats.

A Mentat is a fictional type of human.

They are trained to mimic the cognitive and analytical ability of computers.

However, they are no simple calculators. Mentats have memory and perception that enable them to process large amounts of data.

Through this they devise concise analyses.

In this manner they assess both people and situations, interpreting minor changes in body language or intonation.

Already in the first book the limitations of this are presented.

The quote is from Baron Vladimir Harkonnen, the main antagonist of the first book.

Here he speaks to his guard commander, instructing him how to control a Mentat.

Why do I find this interesting?

Well, providing false information is nothing new, yet the combination of this cognitive ability and computing is an interesting aspect.

In August 2019 I wrote an article about adversarial machine learning and poisoning attacks.

In the article I wrote about recent research at IBM on the new kinds of security threats we were seeing.

One of these was called a poisoning attack:

“Poisoning attacks: machine learning algorithms are often re-trained on data collected during operation to adapt to changes in the underlying data distribution. For instance, intrusion detection systems (IDSs) are often re-trained on a set of samples collected during network operation. Within this scenario, an attacker may poison the training data by injecting carefully designed samples to eventually compromise the whole learning process. Poisoning may thus be regarded as an adversarial contamination of the training data.”

In this manner, by feeding false data, the overall purpose of the learning process can be distorted.
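To make the mechanics concrete, here is a minimal sketch of a label-flipping poisoning attack on a toy classifier. It assumes scikit-learn and NumPy; the synthetic dataset, the logistic regression model, and the 30% flip rate are illustrative choices of mine, not taken from the IBM research.

```python
# Minimal sketch of a label-flipping poisoning attack.
# Assumes scikit-learn and NumPy; all choices below are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic binary task standing in for, say, intrusion detection data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean test accuracy:", clean_model.score(X_test, y_test))

# Attack: flip the labels of 30% of the training samples, simulating
# carefully injected false data picked up during re-training.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

# The poisoned model sees the same inputs but corrupted labels.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude contamination typically drags the test accuracy well below the clean baseline: false information in, distorted behaviour out.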

One prominent example, repeated so often that it has become a cliché, is the chatbot Tay.

In 2016 Microsoft unleashed Tay, the teen-talking AI chatbot built to mimic and converse with users in real-time.

Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.

It went very wrong.

It is possible to distort the input of algorithms.

Highly possible.

Sadly, this seems to be the case.

The way to control and direct machine learning is through its information input. False information — false results.

How do we control for error?

With billions of parameters, can we control for error responsibly?

The Mentat machine learning problem is unlikely to be overcome.

Then again, we as humans do often receive false information.

It is not like humans are not susceptible to manipulation.

Simply put, we should not expect machines to be beyond manipulation.

Especially concerning machine behaviour based on large datasets.

Do you see how a science fiction novel from the '60s can remind us both that we need to remember to be human, and that even extraordinarily calculated decisions can be horribly wrong?

A fair warning for data scientists and those working in the field of artificial intelligence.

This is #500daysofAI and you are reading article 471. I am writing one new article about or related to artificial intelligence every day for 500 days.

