I just came back from the AI-2015 conference. In this post I will briefly talk about the conference workshops and presentations and highlight some topics I found especially interesting.
The conference took place for the 35th time and was held at the historic Peterhouse College, founded in 1284, in the city center of Cambridge.
The principal aims of the conference are to review recent technical advances in AI technologies and to show how these technologies have been applied to real-world problems. Accordingly, the papers were presented in two streams: the technical stream and the application stream.
The conference lasted three days. On the first day several workshops took place about case-based reasoning, visual and cognitive analytics, and the Meta-Morphogenesis project. I will describe the latter two workshops briefly in this post.
My overall impression of the conference was very positive, as the presentations and workshops covered a wide range of topics while also going into the required depth. Everybody attending was really nice and approachable. There was a good mix of academics and industry practitioners, and of graduate and undergraduate students, with a bias towards full-time researchers.
Visual and Cognitive Analytics
The workshops in this stream examined the hypothesis that people and machines in combination are better at tasks like data analytics than either is on its own. Machines can process huge amounts of data in a short time and automatically search for meaningful signals across all possible combinations. However, they cannot recognize patterns as well as humans can.
Research in this area is focused on how to form a symbiosis of people and machines that could possibly solve complicated analytics problems more efficiently and more effectively than individually.
A very interesting research project was presented by Advait Sarkar from the Cambridge Computer Lab. He and his colleagues are evaluating how non-experts can be enabled to apply machine learning techniques in a visual manner via a familiar spreadsheet interface. At the moment the tool supports only the k-NN classifier, but they plan to implement more techniques. I am curious about further progress, but at the same time a little concerned about two issues. What happens if machine learning techniques are applied incorrectly, i.e. key assumptions and prerequisites of the statistical models are not taken into consideration? At best, the model performs poorly. In the worst case, the model looks valid at first sight and is subsequently rolled out. Also, it is not clear to me whether the approach scales to larger data sets.
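To make the underlying technique concrete, here is a minimal sketch of k-NN classification, the method the spreadsheet tool currently supports (the code is illustrative only and does not reproduce their interface; all names are my own):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances to x
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)  # count their labels
    return votes.most_common(1)[0][0]             # majority label

# Tiny toy data set: two labelled clusters.
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9]])
y_train = ["small", "small", "large", "large"]

print(knn_predict(X_train, y_train, np.array([1.1, 0.9])))  # → small
print(knn_predict(X_train, y_train, np.array([5.0, 5.1])))  # → large
```

The appeal for spreadsheet users is that each prediction is easy to explain: it is simply the majority label among the most similar rows, so no opaque training step is needed.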
Evolved construction-kits for minds
This workshop was an interactive introduction to some important themes of the Turing-inspired Meta-Morphogenesis project by Prof. Aaron Sloman from the University of Birmingham. Prof. Sloman is, in his own words, a philosopher who does science informed by philosophy and engineering. The goal of the Meta-Morphogenesis project is to investigate the major transitions in information processing since the earliest life forms on Earth. It is assumed that there are some kind of universal construction kits, provided by physics and chemistry, that are not acknowledged in modern AI research. A lot of AI work on language acquisition assumes that language is learned from experts, which is a form of supervised learning. However, there is the example of deaf children in Nicaragua who invented their own language without expert supervision.
The same applies to the earliest mathematicians, who had no mathematics teachers and yet made fundamental advances in their field. Another example mentioned were the experiments in problem-solving by crows conducted by Dr. Alex Taylor: crows were able to solve complex multi-step puzzles on their own, without learning from experts.
Is AI an existential threat to humanity? If so, in what way and what can be done about it?
Recently, there have been extensive and high-profile discussions about whether AI poses an existential threat to humanity. No less a figure than Stephen Hawking warns that AI could end mankind:
“The development of full artificial intelligence could spell the end of the human race.”
The likes of Bill Gates and Elon Musk have joined Stephen Hawking in his concerns. Much of this debate gained momentum after the release of Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies. The panel members were Daniel Bennett (Editor BBC Focus Magazine), Prof. Max Bramer (VP International Federation for Information Processing), Prof. Susan Craw (Robert Gordon University Aberdeen), Dr. Tim Glover (BT), and Prof. Lars Nolle (JADE University of Applied Sciences, Germany). A real consensus was not reached. Some members of the panel and the audience were convinced that we are far, really far, really really far away from anything near Artificial Super Intelligence. However, there were also opinions that there might be such a thing as Artificial Super Intelligence somewhere around 2050 or 2100. Interestingly, much more consensus was reached once the question was changed to whether AI can pose a threat to humans as individuals instead of to mankind as a whole. Here, most people had a common answer: yes!
I presented a paper called Fast Handwritten Digit Recognition with Multilayer Ensemble Extreme Learning Machine for which I received the best student paper award in the technical stream.
Briefly summarized, we evaluated a new training algorithm for artificial neural networks that determines the weights by solving a linear system, which is significantly faster than traditional backpropagation. We applied it to the task of handwritten digit recognition on the MNIST data set. We also incorporated some concepts from convolutional neural networks and made the final prediction with an ensemble of models in order to avoid overfitting. We concluded that the new training algorithm has potential in situations where models need to be retrained very frequently and training time is a crucial success factor. The paper is available on Springer.
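The core idea of solving a linear system instead of backpropagating can be sketched in a few lines. The following is a minimal, generic Extreme Learning Machine (not the multilayer ensemble variant from the paper): the hidden-layer weights are random and fixed, and only the output weights are fitted, via least squares. The toy data and all names are my own for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Fit an ELM: random hidden layer, output weights via least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # solve H @ beta ≈ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: two well-separated Gaussian blobs, one-hot targets.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
T = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])

W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
accuracy = (pred == T.argmax(axis=1)).mean()
```

Because the only "training" is one linear least-squares solve, retraining is essentially a single matrix factorization, which is why this family of methods is attractive when training time dominates.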
For me the highlights of the conference were in no particular order:
The best application paper: really great to see how AI-inspired research can save lives!
The venue: one can literally smell the academic heritage of Cambridge ;-)
Meta-Morphogenesis: although I haven’t fully understood all the details of this project yet, it got me thinking a lot about how machine learning and AI are done right now. Could it be that we are stuck in a local optimum? Could the ideas of this project bring us closer to a global optimum? Consider the toddler syndrome: small toddlers continuously interact with their environment at random. This has no immediate reward, but it helps them develop expertise and patterns which are then applied to new, unseen situations. For me the question came up what would happen if grown-up adults took the time to regularly interact with their environment at random. Would we increase any kind of skills and brain capacities? Or would it yield no benefit at all? This kind of behaviour is not rewarded in most of our societies, where being focused and not ‘wasting’ time on seemingly irrelevant tasks is what counts. Would machine learning algorithms achieve better results if we allowed them to explore randomly for a while? Or even to interact with other models and affect each other’s training?
Peterhouse College ghost: There are regular sightings of a ghost who is believed to be that of a former Peterhouse bursar who hanged himself. Shockingly, I can confirm that I saw the ghost as well during my stay ;-)