RiseML Blog — article archive

- Comparing Google’s TPUv2 against Nvidia’s V100 on ResNet-50 (Apr 26, 2018 · 8 min read)
  Google recently added the Tensor Processing Unit v2 (TPUv2), a custom-developed microchip to accelerate deep learning, to its cloud…

- Training ImageNet on a TPU in 12.5 hours with GKE and RiseML (Apr 17, 2018 · 3 min read)
  Google’s Tensor Processing Unit (TPU), a custom-developed accelerator for deep learning, offers a fast and cost-efficient alternative to…

- Benchmarking Google’s new TPUv2 (Feb 23, 2018 · 6 min read)
  NOTE: We published a follow-up article with more up-to-date benchmark results here.

- Accelerating I/O bound deep learning on shared storage (Feb 15, 2018 · 6 min read)
  When training a neural network, one typically strives to make the GPU the bottleneck. All data should be read from disk, pre-processed, and…