Google’s Second Generation Tensor Processing Units Are Capable of Delivering 180 Teraflops of Computing Power

Omar Sohail
2nd gen Tensor Processing Unit

At the company’s I/O 2017 developer’s conference, Google unveiled the second-generation Tensor Processing Unit and it is capable of delivering a lot of computing power.

Second-Generation Tensor Processing Units Will Be Used for Faster Machine Learning Purposes Ranging from Google Translate, Google Photos, and More

Though machine learning is normally carried out on GPUs made by NVIDIA, Google has decided to build some of its own hardware and optimize it to work well with its software.


“Research and engineering teams at Google and elsewhere have made great progress scaling machine learning training using readily-available hardware. However, this wasn’t enough to meet our machine learning needs, so we designed an entirely new machine learning system to eliminate bottlenecks and maximize overall performance. At the heart of this system is the second-generation TPU we're announcing today, which can both train and run machine learning models.”

The company claims that the second version of its TPU system is now fully operational and is being deployed across the Google Compute Engine. Google has also shared some additional details regarding its Tensor Processing Units, which have been detailed below.

“Each of these new TPU devices delivers up to 180 teraflops of floating-point performance. As powerful as these TPUs are on their own, though, we designed them to work even better together. Each TPU includes a custom high-speed network that allows us to build machine learning supercomputers we call “TPU pods.” A TPU pod contains 64 second-generation TPUs and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model. That’s a lot of computation!

Using these TPU pods, we've already seen dramatic improvements in training times. One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod.”
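The pod figures in the quote above check out with some back-of-the-envelope arithmetic. The 180-teraflop per-chip and 64-chip per-pod numbers come from Google's announcement; the rest is simple multiplication:

```python
# Per-chip peak performance quoted by Google: 180 teraflops.
TFLOPS_PER_TPU = 180

# A TPU pod contains 64 second-generation TPUs.
TPUS_PER_POD = 64

# Peak pod performance in petaflops (1 petaflop = 1,000 teraflops).
pod_petaflops = TPUS_PER_POD * TFLOPS_PER_TPU / 1000
print(pod_petaflops)  # 11.52, matching the "up to 11.5 petaflops" claim

# The translation example trained on one-eighth of a pod, i.e. 8 TPUs.
eighth_pod_petaflops = pod_petaflops / 8
print(eighth_pod_petaflops)  # 1.44 petaflops of peak compute
```

So even the one-eighth pod used for the translation model offers several times the raw throughput of the 32-GPU setup it replaced, which is consistent with the day-to-afternoon training time Google describes.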

These newer processing units are capable of both inference and training, and researchers can deploy more versatile AI experiments at a faster rate than before, as long as the software is built using TensorFlow.

Google has not shared power consumption figures for its Tensor Processing Units, but since the chips are purpose-built for machine learning workloads, they may well prove more power-efficient than NVIDIA’s general-purpose graphics processors. What impression do you have of these chips? Tell us your thoughts down in the comments.
