Can Deep Learning Change the Game for Time Series Forecasting? (Part II)

Comparing MQ Forecasters & Transformers to ML to Find Out

Lina Faik
data from the trenches
14 min read · Nov 18, 2021


The encoder-decoder framework is undoubtedly one of the most popular concepts in deep learning. Widely used to solve sophisticated tasks such as machine translation, image captioning, and text summarization, it has led to great breakthroughs.
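To make the idea concrete, here is a minimal sketch of an encoder-decoder (sequence-to-sequence) network for forecasting, written in PyTorch. The class name `Seq2Seq`, the layer sizes, and the choice to seed the decoder with the last observed value are illustrative assumptions, not the exact design of any model discussed below.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder: the encoder compresses the input
    sequence into a hidden state; the decoder unrolls one prediction
    per step of the forecast horizon from that state."""

    def __init__(self, n_features: int, hidden_size: int, horizon: int):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(1, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)
        self.horizon = horizon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_length, n_features)
        _, state = self.encoder(x)            # summarize the observed history
        step = x[:, -1:, :1]                  # seed with the last observed value
        outputs = []
        for _ in range(self.horizon):         # autoregressive decoding
            out, state = self.decoder(step, state)
            step = self.head(out)             # next-step prediction
            outputs.append(step)
        return torch.cat(outputs, dim=1)      # (batch, horizon, 1)
```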

However, when it comes to time series forecasting, the encoder-decoder framework has attracted far less attention. And yet, recently emerged models built on this architecture have produced more accurate forecasts than classic approaches, and they have also proven to be a more scalable solution from a business perspective.

Among them, two have stood out for their performance and scalability: the Multi-Quantile Recurrent Forecaster, which comes in two flavors (MQ-RNN and MQ-CNN), and Transformers. Although both share the same overall encoder-decoder architecture, they rely on completely different approaches. Let's dive deep into their mechanisms!
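The "Multi-Quantile" in MQ forecasters refers to training with the quantile (pinball) loss, so the network predicts several quantiles of the future distribution rather than a single point estimate. Below is a minimal NumPy sketch of that loss; the function name `quantile_loss` and the toy arrays are illustrative assumptions.

```python
import numpy as np

def quantile_loss(y_true: np.ndarray, y_pred: np.ndarray, q: float) -> float:
    """Pinball loss for quantile q: under-predictions are weighted by q,
    over-predictions by (1 - q), so minimizing it yields the q-th quantile."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# Illustrative values: targeting the 0.9 quantile penalizes
# under-forecasting nine times more than over-forecasting.
y_true = np.array([10.0, 12.0, 11.0])
y_pred = np.array([9.0, 13.0, 11.5])
print(quantile_loss(y_true, y_pred, q=0.9))
```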

Objective

The goal of this article is to present the key principles behind these models.

It is the second installment of a two-part series (here's part one) that aims to provide a comprehensive overview of state-of-the-art deep learning models…

