Using PlaidML for deep learning on a MacBook Pro GPU

littlereddotdata
Dec 31, 2019 · 4 min read

I remember the first time I ran a deep learning model on a powerful GPU (an NVIDIA GTX 1080). The model zipped through each training epoch so fast, I felt like I had just switched from driving a sedan to riding in a sports car. 🚙

The training speed was exhilarating; experimenting with different models went a lot faster than usual. But since that project, accelerated deep learning has been a rare luxury. Compute time on a good GPU is expensive, and my datasets and models are usually small enough that, while training can be slow, it’s not so slow that it justifies the cost of cloud compute or building a custom machine.

So I’ve been grinding along as is, paying as I go for Paperspace compute time when GPU acceleration is really needed, while hoarding cloud credits for some point in the distant future when I can splurge them all on a P100.

Until this!

[Image: Keras creator Francois Chollet’s reaction to the latest release of PlaidML]

Essentially, PlaidML makes it faster to run deep learning on a laptop / embedded device / other computing hardware that has traditionally not been well suited to deep learning workloads.

In plain English, if you have a Mac / Windows / Linux laptop, or even a Raspberry Pi, you can install PlaidML and train a deep learning model using your device’s GPU.

When I came across this tweet, it sounded amazing, so I decided to research and write about PlaidML — what it is, how it works, and how to get started, using my 2017 Macbook Pro as an example.

What is PlaidML?

PlaidML is an open-source tensor compiler that can accelerate the process of training deep learning models and getting predictions from those models.

But what’s a tensor compiler?

Compilers are computer programs that convert higher-level instructions into lower-level machine code so that a computer can execute them.

Within this context, tensor compilers bridge the gap between the tensor operations used in deep learning (convolutions, etc.) and the platform- and chip-specific code needed to perform those operations with good performance.
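To make that gap concrete, here’s a deliberately naive sketch of one such tensor operation, a 2-D convolution, written as plain Python/NumPy loops (the function is hypothetical and has nothing PlaidML-specific in it). The loop nest states the mathematical spec; a tensor compiler’s job is to emit OpenCL/CUDA kernels that compute the same result quickly on a particular chip:

import numpy as np

def conv2d_naive(image, kernel):
    # A "valid" 2-D convolution as plain nested loops. This states *what*
    # to compute; a tensor compiler decides *how* to compute it fast on
    # a given device.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

print(conv2d_naive(np.random.rand(28, 28), np.random.rand(3, 3)).shape)  # (26, 26)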

How does PlaidML work?

To perform this translation from high-level tensor operations to low-level machine code, PlaidML uses its Tile language to “generate precisely tailored OpenCL, OpenGL, LLVM or CUDA code on the fly”, so the result can run on any OpenCL / OpenGL / LLVM / CUDA-compatible device. Intel AI, who released PlaidML, have written a blog post that explains in more detail how all this works.

Getting started with PlaidML

(Most of what follows is taken from the Quick Start section of PlaidML’s GitHub page and has been adapted for a 2017 MacBook Pro.)

Step 1: Check which graphics card your computer has


My 2017 MacBook Pro (Apple menu → About This Mac) has an Intel HD Graphics 630 and a Radeon Pro 560. Both support OpenCL, so they’re compatible with PlaidML. (Apple maintains a full list of OpenCL-compatible Mac computers, and PlaidML’s documentation explains how to get started on other operating systems like Windows and Linux.)

[Image: Apple’s list of Mac computers that support OpenCL. Source: https://support.apple.com/en-us/HT202823]

Step 2: Install PlaidML (with judicious use of virtual environments)

# create and activate an isolated environment
virtualenv plaidml
source plaidml/bin/activate

# plaidml-keras provides the Keras backend; plaidbench is the benchmarking tool
pip install plaidml-keras plaidbench

Step 3: Set up PlaidML

plaidml-setup
[Image: Terminal output. Both graphics cards on the MacBook Pro have been detected. 🙌]

Step 4: Choose your accelerator

Here I’m going to go with the Intel graphics card.

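If I’m reading PlaidML’s configuration options correctly, you can also switch devices for a single run via environment variables instead of re-running plaidml-setup. A hedged sketch (the device ID string below is a made-up placeholder; copy the real one from the list plaidml-setup printed):

import os

# PLAIDML_DEVICE_IDS overrides the saved device choice. The ID here is a
# placeholder; use one from your own plaidml-setup listing.
os.environ["PLAIDML_DEVICE_IDS"] = "opencl_intel_hd_graphics_630.0"

import plaidml.keras
plaidml.keras.install_backend()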

Step 5: Whew, that worked! Now to save settings

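With the settings saved, Keras can be routed through PlaidML. The plaidml.keras.install_backend() call below is the documented way to do this; the toy model and random data are just a minimal sketch of mine to confirm that training actually runs on the GPU:

# Install the PlaidML backend *before* importing keras itself.
import plaidml.keras
plaidml.keras.install_backend()

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# A throwaway model on random data, purely to watch the GPU do some work.
model = Sequential([
    Dense(64, activation="relu", input_shape=(32,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(1024, 32).astype("float32")
y = (np.random.rand(1024, 1) > 0.5).astype("float32")
model.fit(x, y, epochs=2, batch_size=64)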

Step 6: Run benchmarks

PlaidML comes with a command-line tool, plaidbench, for benchmarking the performance of different cards across different frameworks.

Here we can run a MobileNet inference benchmark with a single line on both the Radeon and the Intel graphics cards and compare their performance.

plaidbench keras mobilenet

[Image: Running a MobileNet inference benchmark on the Intel HD Graphics 630]
[Image: Running a MobileNet inference benchmark on the Radeon Pro 560]

Looks like both cards deliver the same inference performance. Good to know!
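If you’d rather sanity-check plaidbench’s numbers yourself, a rough timing loop through plain Keras looks something like this (my own sketch: random weights and inputs, and the batch size and iteration count are arbitrary):

import time

import plaidml.keras
plaidml.keras.install_backend()

import numpy as np
from keras.applications.mobilenet import MobileNet

model = MobileNet(weights=None)  # random weights; we only care about speed
batch = np.random.rand(8, 224, 224, 3).astype("float32")

model.predict(batch)  # warm-up run triggers PlaidML's kernel compilation
start = time.time()
runs = 32
for _ in range(runs):
    model.predict(batch)
print("%.1f ms per batch" % ((time.time() - start) / runs * 1000))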
