Google CEO Sundar Pichai announced this morning at Google I/O that the company has been working, in stealth, on a hardware chip optimised for machine learning. The so-called TPU (Tensor Processing Unit) is an ASIC* that has been in the making for several years. Google have been deploying TPUs for the past year, including in the system that beat Lee Sedol at Go a couple of months ago. The name reflects the fact that the custom ASIC is tailored to TensorFlow, the machine learning framework that Google open-sourced in November last year.
While it does not presently appear that Google will share the designs for the TPU, outsiders can still make use of Google’s machine learning hardware and software via its cloud services platform. In their press release, Google say they have found the TPUs to deliver an order of magnitude better performance per watt for machine learning. “TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGA,” said the Google CEO. That is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law)!