Is Artificial Intelligence a Threat?

François De Serres
Published in PALO IT
Mar 19, 2018 · 5 min read

I had been eagerly looking forward to the first Clojure/SYNC in New Orleans. Well, my expectations weren’t even close to the reality: it was a mind-blowing experience, tackling questions such as:

  • Which criteria need to be met for “safe” autonomous agents?
  • How hard is it to write programs that write programs that write…?
  • What is the path to becoming a successful free software contributor?

This conference easily qualifies as the most impactful hackers’ event I’ve ever attended, and it will certainly leave a long-lasting impression. It is plainly impossible to address every topic here and do justice to every contributor. However, to give you a good sense of the level and quality of this event, allow me to focus on the outstanding talk given by Gerald Jay Sussman.

AI by the AI-ist

“Should we be afraid of Artificial Intelligence? No. But beware of intelligence!” When the claim comes from one of the fathers of the discipline, you’d better pay attention. Luckily, paying attention to Prof. Sussman’s deep, inspired, and always fun presentation is anything but hard. He is certainly the best speaker I’ve ever seen. Pause here and hop over to Wikipedia at https://en.wikipedia.org/wiki/Gerald_Jay_Sussman to get an idea of his background.

Yes, AI (as we know it) is a threat: to our privacy, civil rights, and socio-economic systems, primarily because of the full automation brought about by intelligent machines. “Predicting is always hard, especially predicting the future.” Nevertheless, Sussman asserts that the End Of Work is close. He is lucid about this upcoming crisis, and very honest about his inability to quantify its form and proportions. Right after this rather pessimistic statement, “Jay” (as his wonderful spouse and collaborator Laura calls him) receives a standing ovation when he asserts that “it is in fact very wrong that in the 21st century we ought to work in order to exist”. There will be several more ovations…

The End Of Work

AI, as most of us know it, is machine learning, which is quite “dumb” (his word, not mine!). To make his point, he displays two images side by side: one of a man, the other of a car. Oooops! They have the very same bitmap signature; modern classification algorithms fail to “see” the difference, which can be quite annoying as we enter the driverless-car era… Then comes an enlightening passage about the importance of forward mental imagery, a.k.a. hallucination(!), an indispensable ingredient of any real intelligence. Briefly: we don’t make decisions based only on past events (as an ML system would); we constantly build and evaluate a multitude of imaginary situations to support our decisions.

Back to AI-enabled cars. Prof. Sussman demonstrates how they basically work today, versus how they should work: autonomous systems ought to be accountable for their decisions. Bummer, this is not baked into deep learning at all! As I recounted after attending Strange Loop a few months ago, the fact that there is no logical explanation for the decision process of ML-powered automated systems might be a serious blocker. This problem now attracts a lot of attention and energy (funding!). Indeed, it is unacceptable that critical decisions could be taken by an autonomous agent outside of any logical, “reasonable” process.

Nevertheless, tracing a decision backwards through the universal optimisation functions of machine learning is not (yet?) in the realm of the possible.

Conditions for “safe” intelligent machines

However, Professor Jay has a trick up his sleeve, namely Constraint Propagators. I won’t describe it here; suffice it to say that this is a fairly novel system architecture on which he and his peers have been working for nearly ten years (http://dspace.mit.edu/handle/1721.1/49525), with source code here: http://groups.csail.mit.edu/mac/users/gjs/propagators/propagator.tar. This programming paradigm does provide “accountability” for logical computations, among many other interesting properties (go on, the paper is deep, but readable by non-specialists).
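The propagator model itself is best studied in the paper and code linked above, but to give a flavour of what “accountability” can mean, here is a minimal Python sketch of my own (the names and structure are mine, a drastic simplification, not the MIT implementation): each cell remembers not only its value but the reason it was computed.

```python
class Cell:
    """Holds a value plus the reason (provenance) it was computed."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self.reason = None
        self.neighbors = []  # propagators to re-run when this cell changes

    def add_content(self, value, reason):
        if self.value is None:
            self.value = value
            self.reason = reason
            for run in self.neighbors:
                run()
        elif self.value != value:
            raise ValueError(f"contradiction in {self.name}: {self.value} vs {value}")


def propagator(inputs, output, fn, label):
    """Wire a function between cells; record *why* the output was set."""
    def run():
        if all(c.value is not None for c in inputs):
            result = fn(*(c.value for c in inputs))
            reason = f"{label}({', '.join(f'{c.name}={c.value}' for c in inputs)})"
            output.add_content(result, reason)
    for c in inputs:
        c.neighbors.append(run)
    run()


# Example: a temperature conversion that carries its own audit trail.
f = Cell("fahrenheit")
c = Cell("celsius")
propagator([f], c, lambda x: (x - 32) * 5 / 9, "f-to-c")
f.add_content(212, "user input")
print(c.value, "because", c.reason)  # 100.0 because f-to-c(fahrenheit=212)
```

In the real system, cells can hold partial information and merge contributions from several propagators; the point of the sketch is only that every derived value can answer “why do you hold this?” — exactly the property deep-learning pipelines lack.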

Given that the autonomous car can now explain how and why it chose to drift off the road, it satisfies the first of Sussman’s laws for intelligent machines: “explain actions and decisions”. There follows a staged minute of the car’s trial (questions are addressed to the car, not to a third party), ending with “What could you (the car) have done better?” and a sensible, valid plea from the system! The second of the “laws for safe intelligent machines” is thereby fulfilled: in case of failure, the system itself can be challenged, not merely its makers. That leaves the third law: an autonomous system ought to enable its users to correct it, or even disable it, themselves. Does this sound familiar? Indeed, the conclusion is extremely explicit, and comes as a definitive warning:

“TRUSTED AUTONOMOUS AGENTS MUST BE BUILT WITH FREE SOFTWARE”

As a founding member of the Free Software Foundation, he is obviously not talking about “free” as in “free beer”, but free as in “free speech”! The problem is, this is not at all the direction the industry is taking at the moment… Let’s hope for better, and act every day by refusing to allow closed-source intelligent systems to impact our lives. Otherwise, AI will be a very serious threat. To make his point, Prof. Sussman reminds us of a recent, shocking public-health affair, where thousands of lives would have been spared if only the above principle had been applied: the cheating software in diesel vehicles.

Autonomous agents must be open.

All things considered, this wonderful talk resonated deeply with PALO IT’s vision: “harnessing the power of technology for the greater good”. How hard is it for each of us to educate ourselves in order to prepare for a better future? Or a future at all, even? Follow Jay!

To conclude this part, let me say again that anyone serious about programming should read SICP (available online: https://sicpebook.wordpress.com). The lazy folks who already know some Lisp can jump straight to chapter 4 to admire a beautiful metacircular interpreter implementation. The laziest may even enjoy the SICP lectures on the youtubes, by Dr Abelson: https://youtu.be/2Op3QLzMgSY. The description of computer science in the first few minutes is such a pearl!
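For readers who don’t have a Scheme handy, here is a toy evaluator in Python written in the spirit of SICP chapter 4 (my own simplification for illustration, not the book’s code): expressions are nested tuples, and the evaluator dispatches on their shape, handling self-evaluating values, variables, `if`, `lambda`, and application.

```python
import operator

def evaluate(expr, env):
    if isinstance(expr, (int, float)):        # self-evaluating
        return expr
    if isinstance(expr, str):                 # variable lookup
        return env[expr]
    op, *args = expr
    if op == "if":                            # special form: (if test conseq alt)
        test, conseq, alt = args
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == "lambda":                        # special form: (lambda (params) body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                    # application: evaluate operator, then operands
    return fn(*(evaluate(a, env) for a in args))

global_env = {"+": operator.add, "*": operator.mul, "<": operator.lt}

# ((lambda (x) (* x x)) 7)  =>  49
square_of_7 = evaluate((("lambda", ("x",), ("*", "x", "x")), 7), global_env)
print(square_of_7)  # 49
```

Twenty lines are of course no substitute for the real thing — the book’s evaluator adds environments as chained frames, `define`, sequencing, and more — but the circular trick is already visible: the evaluator for the language is just another program in it (or, here, in its host).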

There are just too many things to share in one post, and I wouldn’t want to hold you back any further. In my next post, I will share the conversations I had with David Nolen and Will Byrd, and my most exciting takeaway from the conference. Stay tuned!

François De Serres is the Head of Digital Technology at PALO IT Singapore. Passionate about new technologies, and about people, François has recruited and guided large development teams, and has set up and led complex IT programs using both waterfall and agile approaches. He has been helping stakeholders achieve their objectives for more than 20 years.
