Fast Fractional Differencing on GPUs using Numba and RAPIDS (Part 1)

Published in RAPIDS AI · 4 min read · Oct 8, 2019

By: Yi Dong and Mark J. Bennett

Background

Fractional differencing is a signal processing technique used to remove nonstationarity from a time series while preserving as much of the series' memory as possible. Nonstationary prices are common in stock markets and other capital markets, and they make the series hard to predict; removing the nonstationarity makes prediction easier. Econometric time series techniques of this kind have been studied for years, and fractional differencing is widely used in the financial services industry today to prepare training data for machine learning algorithms that generate stock trading signals [1][2].

Fractional differencing models such as the Auto-Regressive Fractionally Integrated Moving Average (ARFIMA) model are based on the more familiar ARIMA(p, d, q) model, except that the differencing order d can be a fraction instead of the usual 0 or 1 (no differencing vs. differencing by one period), hence the extra ‘F’ in the acronym. Our goal is to hardware-accelerate this computation on GPUs using two approaches: (1) the RAPIDS cuDF open-source, GPU-accelerated DataFrame library, and (2) Numba, a high-performance Python compiler that targets NVIDIA CUDA primitives for GPU acceleration.
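To make the fractional order d concrete, here is a minimal CPU sketch of fixed-window fractional differencing (the function names and the window-truncation choice are ours for illustration, not code from either project): the weights come from the binomial expansion of (1 − B)^d, and each output point is a weighted sum over the trailing window of observations.

```python
import numpy as np

def fracdiff_weights(d, window):
    # Binomial-expansion weights of (1 - B)^d:
    # w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k
    w = [1.0]
    for k in range(1, window):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def fracdiff_cpu(series, d=0.5, window=100):
    # Reference CPU loop: each output is the dot product of the weights
    # with the most recent `window` observations (newest first).
    w = fracdiff_weights(d, window)
    out = np.full(series.shape, np.nan)
    for t in range(window - 1, len(series)):
        out[t] = np.dot(w, series[t - window + 1:t + 1][::-1])
    return out

# A d between 0 and 1 removes most of the trend while keeping more memory
# than an ordinary first difference (d = 1).
prices = np.cumsum(np.random.randn(1_000)) + 100.0
fd_series = fracdiff_cpu(prices, d=0.5, window=100)
```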

In this open-source project by Ensemble Capital, the fractional differencing computation is accelerated on the GPU via the cudf.apply_chunks() method. As explained in their blog, this GPU approach achieves a speedup of over 100x compared with a CPU implementation.

Using the apply_rows() and apply_chunks() methods from RAPIDS cuDF is the easiest way to customize GPU computations. In a previous blog, we covered in detail how to take advantage of these two methods for customized computations. Keep in mind that, while the cuDF approach is easier to use, it trades some peak performance for convenience. The Numba compiler approach has a steeper learning curve, but it lets us squeeze more GPU performance out of a Python program.
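As a quick refresher on that earlier post, a minimal apply_rows() example looks roughly like the following (the column names and kernel are ours, and the call signature reflects the cuDF releases available around the time of writing, so it may differ in later versions):

```python
import numpy as np
import cudf

df = cudf.DataFrame({'in1': np.arange(10, dtype=np.float64),
                     'in2': np.arange(10, dtype=np.float64)})

def add_kernel(in1, in2, out):
    # cuDF compiles this function with Numba and runs it on the GPU;
    # each thread handles a slice of the rows.
    for i, (x, y) in enumerate(zip(in1, in2)):
        out[i] = x + y

df = df.apply_rows(add_kernel,
                   incols=['in1', 'in2'],
                   outcols={'out': np.float64},
                   kwargs={})
print(df.head())
```

apply_chunks() follows the same pattern but lets the kernel see a whole chunk of rows at once, which is what a windowed computation like fractional differencing needs.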

In this blog, we are going to show how to use Numba to compute fractional differencing efficiently. In the second part of the blog, we will show how easy it is for data scientists to…
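As a preview of the Numba route, here is a simplified, self-contained sketch of the kind of CUDA kernel we will build on: one thread per output element, each computing a weighted sum over its trailing window. The kernel and helper names are ours, and the real implementation discussed in this series adds further optimizations that this sketch omits.

```python
import numpy as np
from numba import cuda

@cuda.jit
def fracdiff_kernel(prices, weights, out):
    # One thread per time step t: out[t] = sum_k weights[k] * prices[t - k]
    t = cuda.grid(1)
    window = weights.shape[0]
    if t < prices.shape[0]:
        if t < window - 1:
            out[t] = np.nan          # not enough history for a full window
        else:
            acc = 0.0
            for k in range(window):
                acc += weights[k] * prices[t - k]
            out[t] = acc

# Host-side setup: weights from the recurrence shown earlier.
d, window = 0.5, 100
weights = [1.0]
for k in range(1, window):
    weights.append(-weights[-1] * (d - k + 1) / k)
weights = np.array(weights)

prices = np.cumsum(np.random.randn(1_000_000)) + 100.0

d_prices = cuda.to_device(prices)
d_weights = cuda.to_device(weights)
d_out = cuda.device_array_like(d_prices)

threads_per_block = 128
blocks = (prices.size + threads_per_block - 1) // threads_per_block
fracdiff_kernel[blocks, threads_per_block](d_prices, d_weights, d_out)

fd_series = d_out.copy_to_host()
```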
