How to use deep learning for a marketplace?

leboncoin tech Blog
Dec 12, 2018

by Julien Plu (Lab Developer)

The age of modern artificial intelligence started in the middle of the 1940s. In 1950, Alan Turing presented the earliest artificial intelligence problem oriented towards natural language processing: the Turing test. The goal of this test, as stated by Turing, can be seen as a game where a human talks to two different interlocutors through a computer and has to determine which one is human and which one is artificial. If the human cannot tell the difference, then we can assume that a machine can behave like a human.

Nevertheless, at that time we seriously lacked the data needed to create an artificial intelligence capable of anything close to this goal. Nowadays, the amount of data available on the Web through Internet traffic is so huge that the artificial intelligences we are able to create can beat humans at several complex tasks, such as question answering [1], speech recognition, image recognition, or the game of Go.

Source: https://www.statista.com/statistics/499431/global-ip-data-traffic-forecast

These artificial intelligences are built with what we call deep learning, which essentially means deep neural networks. Deep learning aims to emulate the human brain and, to approach its efficiency, its algorithms need a tremendous amount of data.

Source: https://www.slideshare.net/ExtractConf

What are the existing neural networks?

There are many problems that you can tackle by using a deep learning approach, and each problem has its own kind of neural network:

Feed-forward Neural Network (FFNN)

A feed-forward neural network acts as a classical classification or regression algorithm. It consists of a number of simple processing units called neurons, organized in layers.

Every neuron in a layer is connected with all the neurons in the previous layer. These connections are not all equal: each connection may have a different strength or weight.

The weights on these connections encode the knowledge of a network. Often, the neurons in a neural network are also called nodes. Data enters at the inputs and passes through the network, layer by layer, until it arrives at the outputs.

The name feed-forward comes from the fact that there is no feedback between the layers. In a traditional feed-forward neural network, a hidden neuron is one whose output is connected to the inputs of other neurons and is not visible as a network output.

Architecturally, a deep feed-forward network has an input layer, an output layer and one or more hidden layers connecting them. A special type of deep feed-forward network is the autoencoder, which aims to learn a representation (encoding) for a set of data, typically to reduce its dimensionality.

Examples of such neural networks are price prediction and ad recommendations.
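As a minimal sketch of the idea (not leboncoin's actual models), a forward pass through a small feed-forward network can be written with plain NumPy; the layer sizes and random weights here are purely illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def feed_forward(x, weights, biases):
    """Pass input x through each layer: a linear map followed by ReLU,
    except the last layer, which stays linear (e.g. for regression)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = relu(x)
    return x

rng = np.random.default_rng(0)
# A 4-input, 8-hidden-unit, 1-output network with random (untrained) weights.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(1)]
y = feed_forward(rng.normal(size=(2, 4)), weights, biases)
print(y.shape)  # (2, 1): one output value per input example
```

Training would then adjust those weights so that the outputs match known targets, which is exactly where the "knowledge encoded in the connections" mentioned above comes from.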

Recurrent Neural Network (RNN)

A recurrent neural network, contrary to the deep feed-forward neural network, takes as input not just the current example it sees, but also what it has perceived previously in time.

The decision a recurrent neural network reached at time step t-1 affects the decision it will reach one moment later at time step t.

So recurrent neural networks have two sources of input, the present and the recent past, which combine to determine how they respond to new data, much as we do in life.

Recurrent neural networks are distinguished from feed-forward networks by that feedback loop connected to their past decisions, ingesting their own outputs moment after moment as input.

It is often said that recurrent networks have memory. Adding memory to neural networks has a purpose: there is information in the sequence itself, and recurrent neural networks use it to perform tasks that feed-forward networks cannot.

That sequential information is preserved in the recurrent network hidden state, which manages to span many time steps as it cascades forward to affect the processing of each new example.

Recurrent networks thus find correlations between events separated by many moments, called long-term dependencies, because an event downstream in time depends upon, and is a function of, one or more events that came before.

Examples of such neural networks are text and voice generation.
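The feedback loop described above can be sketched as a single vanilla RNN layer in NumPy; the weight shapes and the input sequence are illustrative stand-ins:

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence: the hidden state at step t
    mixes the current input with the state carried over from step t-1."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return np.array(states)

rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 3))          # 5 time steps, 3 features each
W_xh = rng.normal(size=(3, 4)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden: the feedback loop
b_h = np.zeros(4)
states = rnn_forward(seq, W_xh, W_hh, b_h)
print(states.shape)  # (5, 4): one hidden state per time step
```

The `h @ W_hh` term is the "memory": each step's state is a function of every step that came before it.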

Convolutional Neural Network (CNN)

A convolutional neural network is a type of deep feed-forward neural network that contains one or more convolutional layers, each followed by a subsampling step, and then one or more fully connected layers, the usual layers of a deep feed-forward neural network.

The architecture of a convolutional neural network is designed to take advantage of a 2D structure. This is achieved with local connections and linked weights followed by subsampling steps.

Another benefit of these neural networks is that they are easier to train and have fewer parameters than a feed-forward neural network with the same number of hidden units.

There are two main subsampling steps in a convolutional neural network: max pooling and average pooling.

Max pooling aims to down-sample an input representation (image, hidden-layer output matrix, etc.) by reducing its dimensionality and allowing for assumptions to be made about features contained in the sub-regions.

This is done by applying a max filter over the non-overlapping sub-regions of the initial representation.

Average pooling is an alternative, where instead of doing a max, we do an average over the sub-regions.

These pooling strategies mostly help against overfitting by providing an abstracted form of the representation and reduce the computational cost by reducing the number of parameters to learn.
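Both pooling strategies can be illustrated with a few lines of NumPy; `pool2d` here is a toy helper, not a library function:

```python
import numpy as np

def pool2d(image, size=2, mode="max"):
    """Down-sample a 2D array by taking the max or average over
    non-overlapping size x size sub-regions."""
    h, w = image.shape
    # Reshape so each sub-region becomes its own pair of axes,
    # dropping any rows/columns that do not fill a whole region.
    blocks = image[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size)
    op = np.max if mode == "max" else np.mean
    return op(blocks, axis=(1, 3))

img = np.array([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [9, 8, 3, 2],
                [7, 6, 1, 0]], dtype=float)
print(pool2d(img, mode="max"))   # [[4. 8.] [9. 3.]]
print(pool2d(img, mode="avg"))   # [[2.5 6.5] [7.5 1.5]]
```

Each 2x2 sub-region of the 4x4 input collapses to a single value, quartering the number of activations the next layer has to process.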

Examples of such neural networks are image processing and visual search.

Of course, one can also mix these kinds of neural networks to create a hybrid neural network for more complex tasks, such as AlphaGo, but these approaches are out of the scope of this article.

How might we use deep learning at leboncoin?

At leboncoin, I currently work as an applied scientist. As you may know, leboncoin is a marketplace where people can, among other possibilities, resell their goods or find a job or an apartment. Thus, our main role is to bring a seller and a buyer together in a commercial relationship and to make that relationship go well. In order to carry out this important task, we might use deep learning to create or improve some of our features:

  1. Seller/buyer verification: Trust is the most important factor in a commercial relationship. How do we know that we are talking to a trustworthy interlocutor? To tackle this problem, we might take advantage of deep learning algorithms that take user reviews, previous sales or purchases, and listed prices (by verifying whether a price is fair or not) as input data.
  2. Visual search: It may be useful to take a photo of a product with one’s mobile phone and search similar products on the website. Every ad has photos of the product which we can use to feed some deep learning algorithms to find similar images and, in turn, ads.
  3. Product recommendations: It is always useful to propose products related to what the user has previously seen or is currently viewing. Even better, we can imagine predictive recommendations: if a user is viewing a mobile phone ad, we can propose a compatible protective cover. To realize this recommendation task, we can use deep learning algorithms that model an ad with its description, price, title, location and images.
  4. Advertising recommendations: Based on what visitors are viewing, we can try to propose the most appropriate advertisement. To realize this recommendation task, we can use deep learning algorithms that detect how relevant an advertisement is for a given product.
  5. Image quality assessment
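As a toy illustration of how the visual search idea could work, assuming a CNN has already mapped each ad photo to an embedding vector (the embeddings below are random stand-ins, and every name is hypothetical), similar ads can be retrieved with cosine similarity:

```python
import numpy as np

def cosine_similarity(query, matrix):
    """Cosine similarity between a query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query

rng = np.random.default_rng(0)
# Hypothetical embeddings: in practice a CNN would map each ad photo
# to a vector; here we fake them with random data.
ad_embeddings = rng.normal(size=(100, 64))              # 100 ads, 64-dim each
query = ad_embeddings[42] + 0.01 * rng.normal(size=64)  # a photo close to ad 42

scores = cosine_similarity(query, ad_embeddings)
best = np.argsort(scores)[::-1][:3]   # top-3 most similar ads
print(best[0])  # 42
```

In a real system the embeddings would come from a trained CNN and the nearest-neighbour search would use an approximate index rather than a full scan, but the retrieval principle is the same.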

This list is not exhaustive, just an illustration: one can imagine many more uses, and it does not mean that we will necessarily work on everything listed here. Our role is to find the best approach to satisfy as many of our users as possible, and deep learning could be a great direction.

[1]: see dataset in version 1.1
