Installing TensorFlow 1.2 / 1.3 / 1.6 / 1.7 from source with GPU support on macOS

Mattias Arro
3 min read · Aug 4, 2017


Sadly, from version 1.2 onwards, TensorFlow has stopped producing pip packages with GPU support for macOS. This is apparently because the NVIDIA drivers on macOS don’t work reliably enough and caused some test failures; however, the problems are unlikely to affect the most common code paths and might not be an issue in day-to-day training. For those of us who would still like to rapidly prototype models locally on a GPU with TensorFlow > 1.1, this potential unreliability is a reasonable trade-off.

This tutorial explains a small workaround needed to compile TensorFlow 1.2 / 1.3 on macOS, as well as potential issues you might encounter.

Update: For instructions on how to install TensorFlow 1.6 with (e)GPU support (without disabling SIP), have a look at this gist. It also contains links to pre-built wheels for Python 2.7 and 3.6. A comment by KazW confirms that compiling TensorFlow 1.7 with similar instructions also works.

  1. If you haven’t used a TensorFlow-GPU set-up before, I suggest first setting everything up with TensorFlow 1.0 or 1.1, where you can still do pip install tensorflow-gpu. Once you get that working, the same CUDA set-up will also work when you compile your own TensorFlow package. If you have an external GPU, my stackexchange answer might help you get things set up.
  2. Follow the official tutorial “Installing TensorFlow from Sources”, but obviously substitute git checkout r1.0 with git checkout r1.2 or git checkout r1.3.
  3. When doing ./configure, pay attention to the Python library path: it sometimes suggests an incorrect one. I chose the default options in most cases, except for the Python library path, CUDA support and compute capability. Don’t use Clang as the CUDA compiler: this leads to the error “Inconsistent crosstool configuration; no toolchain corresponding to 'local_darwin' found for cpu 'darwin'.”. Using /usr/bin/gcc as your compiler will actually use the Clang that comes with macOS / Xcode. Here’s my full configuration.
  4. TensorFlow 1.2 / 1.3 expects the OpenMP runtime library (libgomp), which the current Apple Clang doesn’t ship. OpenMP should speed up multithreaded TensorFlow on multi-core machines, but TensorFlow will also compile without it. We could try to build TensorFlow with GCC 4 (which I didn’t manage to do), or simply remove the line that links OpenMP from the build file. In my case I commented out line 98 of tensorflow/third_party/gpus/cuda/BUILD.tpl, which contained linkopts = [“-lgomp”] (though the location of the line might obviously change). Some people had issues with zmuldefs, but I assume that was with earlier versions; thanks to udnaan for pointing out that it’s OK to comment out these lines.
  5. I had some problems building with the latest Bazel (0.5.3), so I reverted to 0.4.5, which I already had installed. Some discussion in a GitHub issue mentioned that Bazel 0.5.2 also didn’t have the problem.
  6. Now build with bazel and finish the installation as instructed by the official install guide. On my 3.2 GHz iMac this took about 37 minutes.
  7. I posted the answer to StackOverflow; that SO thread might contain additional useful discussion.
  8. A note about macOS updates. A given system update (e.g. from 10.12.5 to 10.12.6) might break your set-up. For one, the old NVIDIA drivers might be incompatible with the new release. NVIDIA usually releases new drivers within days of a macOS release, so it’s best to wait a few days until they’re out. But even after updating the NVIDIA and CUDA drivers, my OS was unable to find my external GPU; running the automate-eGPU script again solved the issue.
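Put together, steps 2–6 above amount to roughly the following shell session. This is only a sketch: the checkout branch, the exact contents of the BUILD.tpl line, and the wheel filename are assumptions that may differ on your machine.

```shell
# Sketch of the build, following the official "Installing TensorFlow
# from Sources" guide; versions and paths are assumptions.
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r1.3                # or r1.2

# Answer the prompts: correct Python library path, CUDA support enabled,
# your GPU's compute capability, and /usr/bin/gcc (not Clang) as the
# CUDA compiler.
./configure

# Step 4: comment out the OpenMP linker flag
# (the exact line and its location may have moved).
sed -i.bak 's/linkopts = \["-lgomp"\]/# linkopts = ["-lgomp"]/' \
    third_party/gpus/cuda/BUILD.tpl

# Build the pip package and install it (takes ~30-40 minutes).
bazel build --config=opt --config=cuda \
    //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```

The sed invocation writes a .bak backup, so you can diff against the original BUILD.tpl if the build later complains.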

This is hacky, but as a (hopefully temporary) fix for the development environment, it’s acceptable. Hopefully NVIDIA and Apple will sort out the driver issues, so that TensorFlow can again officially support GPUs on macOS.
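Once the wheel is installed, one quick way to confirm the GPU build actually worked is to list the devices TensorFlow can see. This is a hypothetical smoke test assuming a TF 1.x install; a working GPU set-up should show a /gpu:0 (or /device:GPU:0) entry alongside /cpu:0.

```shell
# List the devices visible to the freshly built TensorFlow.
python -c "from tensorflow.python.client import device_lib; \
print([d.name for d in device_lib.list_local_devices()])"
```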
