My takeaways from Devoxx UK 2018

George Leivadas
Travelex Tech Blog
Aug 14, 2018

In the summer, I was lucky enough to be sponsored by Travelex to attend Devoxx UK, a community conference where developers from all around Europe share experiences and learn about the latest technologies, trends and other advancements, knowledge that would otherwise be very hard to acquire in such a short amount of time.

Speakers from some of the biggest companies put in the effort to produce high-quality content that aims to educate attendees, or at least lay the groundwork that will act as a stimulus for anyone interested in a particular topic or seeking further personal development. It was a great opportunity to learn new things, socialise with like-minded people and, of course, have fun.

I attended many very interesting presentations on a variety of topics: security (common vulnerabilities found in applications), databases (handling data in distributed systems and building a distributed data store), a marvellous introduction to Kotlin, and many more. One of my favourites was about deep learning, its practical applications and the research topics that are active today.

Although I am by no means an expert in this subject, I was intrigued to learn about and understand the AI world a little better. So I will try to explain in a few paragraphs what deep learning is all about and what has been achieved so far in this field.

First of all, we very often hear buzzwords like artificial intelligence, neural networks, machine learning and deep learning without truly understanding the differences among them (unless we happen to be experts on the subject). Even worse, some people use these terms interchangeably, which blurs the distinctions even further. To clear things up, let me try to provide a brief explanation of each.

Undoubtedly, artificial intelligence is a very broad term that is used to represent any type of machine behavior that exhibits some form of intelligence.

However, not all forms of artificial intelligence have the same level of complexity or capability. Based on the level of cognition each one has, there are four types of AI: reactive machines, limited memory, theory of mind and self-awareness.

These four types fit into two main categories of AI: General vs. Narrow (let's not consider superintelligence as a category yet!)

Most (if not all) of the AI techniques we are developing today fall into the category of narrow AI. In practice, this means that for every problem a very specific algorithm is devised which solves that particular problem well. The capabilities of such an AI system are pretty much determined by its design: to make the system better, external intervention is needed (whether from a developer or from a new set of data).

On the other hand, general AI is much more powerful: it describes a system's ability to become proficient not just at a single task but to excel across many different tasks, much like humans do. A paramount step in this direction is to create an AI that can reuse accumulated knowledge across different problems.

AI can be realised through many different techniques. One of the most prominent is machine learning. Machine learning refers to algorithms that learn how to identify patterns and make decisions based solely on raw data. This enables an algorithm to identify the correlation between a trait and its representation in the data automatically, without human intervention. It is also possible to identify which traits are important and which are not.
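To make "learning patterns from raw data" concrete, here is a minimal sketch of one of the oldest machine learning algorithms, a perceptron, learning the logical AND function purely from labelled examples. All the names, the learning rate and the epoch count below are illustrative choices of mine, not anything from the talk:

```python
# A minimal sketch of learning from raw data: a perceptron that
# infers the logical AND function from labelled examples alone.
# Learning rate and epoch count are illustrative, not tuned.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn two weights and a bias from (input, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    def predict(x1, x2):
        return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

    return predict

# The algorithm is never told what AND means; it discovers the
# pattern purely from the four labelled data points.
predict = train_perceptron(
    samples=[(0, 0), (0, 1), (1, 0), (1, 1)],
    labels=[0, 0, 0, 1],
)
```

The point is that nothing in the code encodes the AND rule itself; the rule emerges from the data, which is exactly the shift machine learning makes relative to hand-written logic.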

The importance of automatic feature detection is twofold.

First, it is not always possible (or easy) to 'hardcode' the mapping between a trait and its representation. Imagine an image-recognition algorithm that identifies whether a real car is present or not. How do we formulate that by hand? The car can be visible from any angle, have some portion of it hidden by another object, or just appear in a photograph.

The second is that the trait representation matters for the actual success of the algorithm. For example, suppose we try to predict whether a car is going to break down in the next 12 months. A detailed report of the car's condition created by a human will produce better predictions than a picture of the car, since the pixels alone don't capture the necessary information.

The algorithms that are able not only to discover the mappings between traits and their representation but also discover the best representation itself belong to a category called representation learning. One of the most difficult aspects of representation learning is the ability to identify different sources of influence on the data and discard those that are less relevant. The position of the car in an image is not that useful for car identification.
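As a toy illustration of discarding an irrelevant factor of variation, here is a sketch of max-pooling, a common trick in such systems, applied to a one-dimensional "image": the pooled feature comes out the same wherever the pattern sits, so position is factored out. The detector kernel and the images are made-up values of mine for illustration:

```python
# Discarding an irrelevant factor (position) via max-pooling.
# Kernel and "images" are illustrative values, not learned ones.

def detector_response(image, kernel=(1.0, -1.0)):
    """Slide a tiny two-pixel edge detector across the image."""
    return [image[i] * kernel[0] + image[i + 1] * kernel[1]
            for i in range(len(image) - 1)]

def max_pool(responses):
    """Keep only the strongest response, discarding where it occurred."""
    return max(responses)

# The same bright-spot pattern at two different positions...
left = [1.0, 0.0, 0.0, 0.0, 0.0]
right = [0.0, 0.0, 0.0, 1.0, 0.0]

# ...yields identical pooled features: position has been discarded.
pooled_left = max_pool(detector_response(left))
pooled_right = max_pool(detector_response(right))
```

Here `pooled_left` and `pooled_right` are equal even though the raw inputs differ, which is the essence of learning a representation that keeps what matters (the pattern) and drops what does not (its position).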

Deep learning algorithms approach this problem by building a layered architecture that is able to identify complex features based on simpler ones.

Figure 1: A network with 3 hidden layers. Source: Michael A. Nielsen, ‘’Neural Networks and Deep Learning’’, Determination Press, 2015

Each layer extracts increasingly abstract features. The first layer identifies edges based on differences in brightness among neighbouring pixels. The second layer builds on top of that by extracting facial features such as eyes or noses. Finally, the third layer combines the previous layer's object parts into higher-level features.

Figure 2: Features extracted by each layer of a Deep Belief Network. Source: Lee et al., 2009
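The layered architecture described above can be sketched as a sequence of simple transformations, where each layer's output becomes the next layer's input. The weights, sizes and the "edges/parts/object" labels below are entirely illustrative assumptions on my part; in a real network the weights are learned from data rather than written by hand:

```python
# A toy forward pass through a small layered network, showing how
# each layer computes new features from the previous layer's output.
# All weights and layer sizes here are made-up illustrative values.
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # squash into (0, 1)
    return outputs

# Raw input (think: pixel intensities) flows upward through layers,
# each building more abstract features from the one below it.
pixels = [0.2, 0.8, 0.5]
layer1 = dense_layer(pixels, [[0.5, -0.3, 0.8], [0.1, 0.9, -0.4]], [0.0, 0.1])  # "edges"
layer2 = dense_layer(layer1, [[1.2, -0.7], [0.3, 0.5]], [-0.2, 0.0])            # "parts"
score = dense_layer(layer2, [[0.9, -1.1]], [0.05])[0]                           # "object"
```

The composition is the key idea: `layer2` never sees the pixels directly, only `layer1`'s features, which is how complex features get built from simpler ones.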

Deep Learning Applications

Deep learning has been applied to a plethora of problems over the years. As research progresses, more and more spectacular use cases emerge.

1. Speech re-enactment

Researchers from the University of Washington used deep learning to synthesize video of Obama from audio samples. The artificial video looks very realistic and is quite impressive.

2. Music composition

In 2016, Aiva Technologies, a startup specialising in AI music composition, developed an AI named AIVA (Artificial Intelligence Virtual Artist) which is able to compose world-class music. AIVA is the first AI ever to officially acquire the worldwide status of composer.

3. Image colorization in old b&w photos

Another interesting usage of deep learning is to automatically colorize gray scale images.

4. Image description generator

Researchers from Stanford University were able to develop a deep learning network that can describe a given photo in plain English. They also created this page where you can see a live demo.

In conclusion, deep learning is becoming part of our everyday lives as more and more applications are discovered. We are still at the beginning of a new era, in which the abundance of available data, paired with the constant progression of computing power, forms a unique combination that will help us break algorithmic barriers our human brains alone have so far been unable to overcome.
