AMD ROCm GPU support for TensorFlow
Guest post by Mayank Daga, Director, Deep Learning Software, AMD
We are excited to announce the release of TensorFlow v1.8 for ROCm-enabled GPUs, including the Radeon Instinct MI25. This is a major milestone in AMD’s ongoing work to accelerate deep learning. ROCm, the Radeon Open Ecosystem, is our open-source software foundation for GPU computing on Linux. Our TensorFlow implementation leverages MIOpen, a library of highly optimized GPU routines for deep learning.
AMD provides a pre-built Python wheel (whl) package, so installation is as simple as installing generic TensorFlow for Linux. We’ve published installation instructions, as well as a pre-built Docker image.
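As a quick sanity check after installing, a short TensorFlow 1.x script can confirm that the ROCm device is visible and that ops are placed on it. This is a minimal sketch, not part of AMD’s published instructions; the `tensorflow-rocm` package name in the comment is an assumption, so refer to the installation instructions linked above for the authoritative steps.

```python
# Minimal sanity check for a ROCm-enabled TensorFlow 1.8 install.
# Assumes the wheel was installed per AMD's instructions, e.g.:
#   pip install tensorflow-rocm   # package name is an assumption; see AMD's docs
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices TensorFlow can see; a ROCm GPU should appear alongside the CPU.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)

# Run a small op with device placement logging to confirm it lands on the GPU.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    print(sess.run(a + b))
```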
In addition to supporting TensorFlow v1.8, we are working towards upstreaming all the ROCm-specific enhancements to the TensorFlow master repository. Some of these patches are already merged upstream, while several more are actively under review. While we work towards fully upstreaming our enhancements, we will be releasing and maintaining future ROCm-enabled TensorFlow versions, such as v1.10.
We believe the future of deep learning optimization, portability, and scalability has its roots in domain-specific compilers. We are encouraged by the early results of XLA, and are also working towards enabling and optimizing XLA for AMD GPUs.
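For context, XLA just-in-time compilation in TensorFlow 1.x is opted into through the session configuration. The sketch below shows that standard mechanism; whether a particular ROCm build honors it depends on the state of the XLA enablement work described above, so treat this as illustrative rather than a supported configuration.

```python
# Standard TensorFlow 1.x mechanism for enabling XLA JIT compilation.
# Whether XLA is functional on a ROCm build depends on the ongoing work above.
import tensorflow as tf

config = tf.ConfigProto()
# Turn on XLA JIT compilation for the whole session.
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    x = tf.placeholder(tf.float32, shape=[None, 4])
    y = tf.nn.relu(tf.matmul(x, tf.ones([4, 2])))
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))
```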
For more information on AMD’s work in this area, see www.amd.com/deeplearning.