Fast Fractional Differencing on GPUs using Numba and RAPIDS (Part 1)

Oct 8, 2019

By: Yi Dong and Mark J. Bennett

Background

Fractional differencing is a signal-processing technique that removes nonstationarity from a time series while preserving as much of the series’ memory as possible. Prices in stock markets and other capital markets are often nonstationary, which makes them hard to predict; removing the nonstationarity makes the series easier to model. Econometricians have studied such time-series transformations for decades, and fractional differencing is widely used in the financial services industry today to prepare training data for machine learning algorithms that generate stock-trading signals [1][2].
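Concretely, fractional differencing replaces the ordinary one-period difference with a weighted sum over past observations, where the weights come from the binomial expansion of (1 − B)^d for a fractional order d (introduced below). The following is a minimal NumPy sketch of those fixed-window weights; the function name frac_diff_weights and the window truncation are illustrative only, not the implementation developed later in this post.

```python
import numpy as np

def frac_diff_weights(d, window):
    """Truncated weights from the binomial expansion of (1 - B)**d:
    w_0 = 1 and w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, window):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

# d = 1 recovers the ordinary one-period difference; a fractional d such as
# 0.5 produces slowly decaying weights, which is how memory is retained.
print(frac_diff_weights(1.0, 5))  # [1, -1, 0, 0, 0]
print(frac_diff_weights(0.5, 5))  # [1, -0.5, -0.125, -0.0625, -0.0390625]
```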

Fractional differencing models such as the Auto-Regressive Fractionally Integrated Moving Average (ARFIMA) model build on the more familiar ARIMA(p, d, q) model by allowing the differencing component d to be a fraction rather than the usual 1 or 0 (differencing over one period vs. no differencing), hence the extra ‘F’ in the acronym. Our goal is to hardware-accelerate the process on GPUs using two approaches: (1) RAPIDS cuDF, an open-source, GPU-accelerated DataFrame library, and (2) Numba, a high-performance Python compiler that uses NVIDIA CUDA primitives for GPU acceleration.
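As a rough illustration of the Numba route only (a sketch assuming a CUDA-capable GPU; the kernel name, window length, and launch configuration are illustrative and not the benchmarked implementation in this series), the truncated weights can be applied in parallel with one GPU thread per output element:

```python
import numpy as np
from numba import cuda

@cuda.jit
def frac_diff_kernel(prices, weights, out):
    # One thread per output element; each thread forms the weighted
    # sum over its trailing fixed-length window of past prices.
    i = cuda.grid(1)
    window = weights.shape[0]
    if i >= window - 1 and i < prices.shape[0]:
        acc = 0.0
        for k in range(window):
            acc += weights[k] * prices[i - k]
        out[i] = acc

# Truncated binomial weights for d = 0.5 (same recursion as the sketch above).
d, window = 0.5, 128
weights = np.zeros(window)
weights[0] = 1.0
for k in range(1, window):
    weights[k] = -weights[k - 1] * (d - k + 1) / k

prices = np.cumsum(np.random.randn(1_000_000))  # toy random-walk "price" series
out = np.zeros_like(prices)

threads_per_block = 256
blocks = (prices.size + threads_per_block - 1) // threads_per_block
frac_diff_kernel[blocks, threads_per_block](prices, weights, out)
```

Each output value depends only on its own trailing window, so the computation maps naturally onto independent GPU threads.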
