How to enable GPU support for TensorFlow or PyTorch on macOS

Michael Hannecke
Published in Bluetuple.ai
4 min read · Oct 6, 2023

Train your ML models faster with GPU support on macOS

Is your machine learning model taking too long to train? Do you wish you could speed things up? Well, you’re in luck! In this blog post, we’ll show you how to enable GPU support in PyTorch and TensorFlow on macOS.

GPUs, or graphics processing units, are specialized processors that can be used to accelerate machine learning workloads. By using a GPU, you can train your models much faster than you could on a CPU alone.

If you’re using a MacBook Pro with an M1 or M2 chip, you’re in for a special treat. These chips have built-in GPUs with unified memory that handle machine learning workloads very well. This means that you can get even more speedup by enabling GPU support in PyTorch and TensorFlow.

So, let’s get started!

Most Machine Learning frameworks use NVIDIA CUDA, short for “Compute Unified Device Architecture,” which is NVIDIA’s parallel computing platform and API that allows developers to harness the immense parallel processing capabilities of NVIDIA GPUs.

Apple uses a custom-designed GPU architecture for its M1 and M2 chips. This architecture is based on the same principles as traditional GPUs, but it is optimized for Apple’s specific needs. ‘Older’ Apple computers with dedicated GPUs use AMD chips, which are not directly compatible with NVIDIA’s CUDA framework.

But help is near: with its Metal framework, Apple provides low-level APIs that allow frameworks like TensorFlow, PyTorch, and JAX to use Apple’s GPU chips just as they would an NVIDIA GPU.

Let’s walk through the steps required to enable GPU support on macOS for TensorFlow and PyTorch.

Requirements

  • Mac computers with Apple silicon or AMD GPUs
  • macOS 12.0 or later
  • Python 3.8 or later
  • Xcode command-line tools: xcode-select --install
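
To quickly confirm that a machine meets these requirements, a small check along the following lines can help (a convenience sketch only, not part of Apple’s official instructions):

import platform
import sys

# Confirm macOS version, CPU architecture, and Python version.
print("macOS:   ", platform.mac_ver()[0])
print("CPU arch:", platform.machine())  # 'arm64' on Apple silicon, 'x86_64' on Intel Macs
print("Python:  ", sys.version.split()[0])

assert sys.version_info >= (3, 8), "Python 3.8 or later is required"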

TensorFlow

First we have to create a virtual environment. We’re going with venv this time, but Anaconda would do just as well.

python -m venv ~/venv-tfmetal
source ~/venv-tfmetal/bin/activate
python -m pip install -U pip

Next we have to install the TensorFlow base package. For TensorFlow version 2.13 or later:

python -m pip install tensorflow

For TensorFlow version 2.12 or earlier:

python -m pip install tensorflow-macos
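
Either way, it is worth confirming which TensorFlow version actually ended up in the environment before adding the Metal plugin; a quick check could look like this:

import tensorflow as tf

print(tf.__version__)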

Now we install the Apple Metal plugin for TensorFlow:

python -m pip install tensorflow-metal

You can verify that TensorFlow will utilize the GPU using a simple script:

import tensorflow as tf

devices = tf.config.list_physical_devices()
print("\nDevices: ", devices)

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    details = tf.config.experimental.get_device_details(gpus[0])
    print("GPU details: ", details)

You can test the performance gain with the following script. Run it once with GPU (Metal) support enabled and once in a virtual environment without tensorflow-metal installed. The difference is remarkable!

import tensorflow as tf

# Train ResNet50 from scratch on CIFAR-100 for a few epochs.
cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()

model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, batch_size=64)
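
If you want to double-check that training really runs on the GPU rather than just trusting the speedup, TensorFlow can log device placement. Enabling it at the top of the script, before the model is built, is optional and produces verbose output, but it makes the placements explicit:

import tensorflow as tf

# Log the device each operation is placed on. With tensorflow-metal
# active you should see placements on '/device:GPU:0'.
tf.debugging.set_log_device_placement(True)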

PyTorch

Again, start with a virtual environment; we’re going with venv once more:

python -m venv ~/venv-ptmetal
source ~/venv-ptmetal/bin/activate
python -m pip install -U pip

Next, install the PyTorch framework as follows. Be careful: depending on your environment, this may not work from within a Jupyter notebook, so please run it in a terminal:

pip install --pre torch torchvision torchaudio \
--extra-index-url https://download.pytorch.org/whl/nightly/cpu
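
MPS support has been included in stable PyTorch releases since version 1.12, so a regular install may work just as well as the nightly build. Either way, a quick sanity check is to print the installed version and whether the build was compiled with MPS (Metal Performance Shaders) support; is_built() reports on the binary itself, while is_available(), used below, also checks that the running macOS supports it:

import torch

print("PyTorch version:", torch.__version__)
print("MPS built:      ", torch.backends.mps.is_built())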

You can verify that PyTorch will utilize the GPU (if present) as follows:

import torch

# Check for the Apple GPU (MPS backend)
if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    x = torch.ones(1, device=mps_device)
    print(x)
else:
    print("MPS device not found.")

Run the next script once in a virtual environment with GPU support and once without to measure the performance difference:

import time

import torch

# GPU (MPS) timing
start_time = time.time()

a = torch.ones(4000, 4000, device="mps")
for _ in range(200):
    a += a

# Synchronize with the CPU before stopping the clock; MPS operations run
# asynchronously, so otherwise only the time for offloading the work to
# the GPU would be measured.
torch.mps.synchronize()

elapsed_time = time.time() - start_time
print("GPU Time: ", elapsed_time)
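
Alternatively, for a quick like-for-like comparison within a single environment, you can time the equivalent loop on the CPU; this is simply the snippet above with the device set to "cpu":

import time

import torch

# CPU baseline: the same workload as above, but on the CPU
start_time = time.time()

a = torch.ones(4000, 4000, device="cpu")
for _ in range(200):
    a += a

elapsed_time = time.time() - start_time
print("CPU Time: ", elapsed_time)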

Conclusion

There you have it! You now know how to enable GPU support in PyTorch and TensorFlow on macOS. Go forth and train your models faster than ever before!

And remember, if you run into any problems, don’t be afraid to ask for help. There’s a large and supportive community of machine learning practitioners who are always happy to lend a hand.

Now, go forth and train some amazing machine learning models!

P.S.

If you find that your GPU is still not working after following these steps, don’t worry. You’re not alone. Sometimes, things just don’t work out as planned. In that case, you can always try using a cloud-based GPU service. There are many different services available, so you’re sure to find one that fits your needs.

If you have read to this point, thank you! You are a hero (and a Nerd ❤)! I try to keep my readers up to date with “interesting happenings in the AI world,” so please clap and follow.
