Comparing Google’s TPUv2 against Nvidia’s V100 on ResNet-50 (RiseML Blog, Apr 26, 2018)
Google recently added the Tensor Processing Unit v2 (TPUv2), a custom-developed microchip to accelerate deep learning, to its cloud…
Training ImageNet on a TPU in 12.5 hours with GKE and RiseML (RiseML Blog, Apr 17, 2018)
Google’s Tensor Processing Unit (TPU), a custom-developed accelerator for deep learning, offers a fast and cost-efficient alternative to…
Benchmarking Google’s new TPUv2 (RiseML Blog, Feb 23, 2018)
NOTE: We published a follow-up article with more up-to-date benchmark results here.
Accelerating I/O bound deep learning on shared storage (RiseML Blog, Feb 15, 2018)
When training a neural network, one typically strives to make the GPU the bottleneck. All data should be read from disk, pre-processed, and…