Cloud TPU Beta released, Google makes it available to outside developers…
What if you suddenly learned that your favorite product had become available in the market?
The little things we rely on mean a great deal to us, and we realize it only in their absence. The same holds for the huge brands, which need everything from tiny chips to bulky hardware components to keep making progress.
Ever wondered how techies get on with AI and machine learning? Here’s something for everyone who wants hands-on experience in deep learning, now or sometime in the future. Google had earlier kept its finest processors for its own use, releasing them only in a limited set of countries.
What’s New with Google’s Processing
Indian companies, too, can now get access to the TPUs unveiled by Google and explore the more sophisticated training phase of deep learning.
According to the latest reports, the company announced last month that it will provide limited sets of TPUs to groups planning machine learning setups on the Google Cloud platform.
These processing units are designed mainly for Artificial Intelligence. The company itself has benefited from TPUs in multiple ways.
-> Google now owns its hardware as well. Its dependency on hardware-producing brands is reduced, and it is more flexible to try out new inventions whenever its developers want to.
-> The revenue collected through the business apps linked to Google has risen by a significant amount.
The real need for a processor built to deal with tensors was felt as CPUs and GPUs proved only partly efficient. Though GPUs could handle basic TensorFlow operations, they still had drawbacks of their own.
GPU vs CPU
This may be a little difficult for a beginner, so a few graphical comparisons should make the basics of these processors more familiar.
Here’s a statistical representation comparing any two of the CPU, GPU and TPU on a relative performance-per-watt scale.
Let’s have a look at how CPUs and GPUs work, and what made TPUs capable of living up to the expectations of a typical cloud user involved in AI.
Time to Learn…
The CPU, or Central Processing Unit, is built for general-purpose arithmetic and offers low latency: a single arithmetic operation finishes at lightning speed. But the CPU comes with limitations that, in turn, paved the way for the invention of the GPU. To perform a series of arithmetic operations, the CPU’s internal working follows a sequential principle, which consumes more time. This is what makes the CPU inefficient at times, as it results in low throughput during continuous calculations. So the CPU is the best choice for executing a simple addition such as 2 + 3. On the other hand, working with tensors of numbers, like [1,2,3] + [7,8,9], is a tedious job for the CPU: it follows a sequential pattern of computation, adding the respective elements one after the other.
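The sequential pattern described above can be sketched in a few lines of Python. The loop is the point: each element pair is handled in its own step, one after the other, which is exactly why throughput suffers on long tensors.

```python
# CPU-style sequential addition: one element pair per step.
a = [1, 2, 3]
b = [7, 8, 9]

result = []
for x, y in zip(a, b):      # three separate steps: 1+7, then 2+8, then 3+9
    result.append(x + y)

print(result)               # [8, 10, 12]
```

For three elements this is harmless; for the millions of elements in a neural-network layer, the one-step-at-a-time pattern becomes the bottleneck.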
So how does a GPU overcome this?
Coming to the GPU, or Graphics Processing Unit, it offers specifications that give it an edge over the traditional CPU. It has higher latency than a CPU, so it may not be as fast on a single arithmetic operation, but it delivers a much higher throughput.
Having said that, it can now be understood that a GPU performs the above additions simultaneously: while 1 and 7 are being added, it can execute (2+8) and (3+9) as well. So, assuming a CPU takes 4 ns per computation, the total time elapsed for the job above is 3 × 4 ns = 12 ns. Now suppose a GPU consumes 6 ns per computation; it still delivers the entire output after just 6 ns, because although the GPU has higher latency, it processes in parallel. It would hence not be wrong to consider GPUs the favourable choice for deep learning.
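The parallel style maps directly onto vectorized array code. The NumPy sketch below runs on a CPU, but the programming model is the same one GPUs exploit: the whole tensor addition is issued as a single operation rather than a loop of scalar additions, so the hardware is free to process the element pairs at once.

```python
import numpy as np

# GPU-style (vectorized) addition: the tensor sum is one operation,
# not three sequential scalar additions.
a = np.array([1, 2, 3])
b = np.array([7, 8, 9])

result = a + b              # all element pairs added together
print(result.tolist())      # [8, 10, 12]
```

Same answer as the sequential loop, but expressed as one bulk operation — which is why, in the timing example above, the GPU finishes in a single 6 ns step instead of three 4 ns steps.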
TPU, the newest processor…
Now, the Google team has most recently come up with the Tensor Processing Unit (TPU), special-purpose hardware built to perform TensorFlow operations. Its efficiency exceeds that of both the CPU and the GPU because TPUs utilize quantized values. According to the team behind the technology at Google,
“On production AI workloads that utilize neural network inference, the TPU is 15 times to 30 times faster than contemporary GPUs and CPUs.”
A graphical comparison between TPU, CPU and GPU is shown below.
What remains to be seen, however, is whether the company can supply TPUs in the required bulk, and how TPUs perform when used in huge quantities at the same time. Even these may turn out to have some slight limitations, and that question can be answered only once TPUs come into wide use at companies around the world.
We’ll be watching intently, that’s for sure.
To know more, you can check out the home page of the Cloud TPU Beta here.