gQuant — GPU-Accelerated Examples for Quantitative Analyst Tasks

Published in RAPIDS AI · Jul 16, 2019

by Yi Dong and Alex Volkov

gQuant Background:

Our prior blog gave a high-level overview of the examples in the gQuant repository, which use GPU-accelerated Python. Here we dive more deeply into the technical details. The examples in gQuant are built on top of NVIDIA’s RAPIDS framework: they get fast data access from cuDF dataframes residing in high-bandwidth GPU memory and benefit from the vast compute capabilities of modern GPUs. We demonstrate a task-centric workflow that models dependencies as a directed acyclic graph (DAG), using the idea of a “dataframe-flow”: nodes are dataframe processors, and edges carry the resulting dataframes from one node to the next. This shows how workflows that manipulate both data and compute can be developed at the graph level. Organizing the quant’s workflow at this higher level addresses the challenge of managing complicated workflows. By switching from cuDF to Dask-cuDF dataframes, the same computation automatically scales across multiple nodes and multiple GPUs for distributed execution.
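
To make the “dataframe-flow” idea concrete, the sketch below wires up a tiny three-node DAG by hand: a source node that creates a cuDF dataframe, a processor node that derives returns, and a sink node that reduces them. This is only a conceptual illustration under our own naming; the node functions here are not gQuant’s API, which instead assembles such nodes from a task-graph description.

# A minimal, conceptual sketch of a "dataframe-flow" DAG.
# Each node is a dataframe processor; each edge passes the resulting
# dataframe downstream. (Illustrative only; not the gQuant API.)
import cudf


def load_prices():
    # Source node: build a small cuDF dataframe in GPU memory.
    return cudf.DataFrame({"close": [100.0, 101.5, 99.8, 102.3, 103.1]})


def add_returns(df):
    # Processor node: derive simple returns from the close prices.
    out = df.copy()
    out["return"] = out["close"].diff() / out["close"].shift(1)
    return out


def summarize(df):
    # Sink node: reduce the returns column to a single statistic.
    return df["return"].mean()


# Edges of the DAG: each node consumes the dataframe produced upstream.
prices = load_prices()
with_returns = add_returns(prices)
print(summarize(with_returns))

As noted above, swapping the cuDF dataframes for Dask-cuDF dataframes is what lets gQuant run the same graph distributed across multiple GPUs and nodes.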

In this blog, we present a simple example that shows how easy it is to accelerate a quant workflow on the GPU and visualize the data flow.

A Toy Example:
