32829 - Moore’s law, AI accelerator and Tensor Processing Unit

N. Lygeros

The slowing of Moore’s law observed in 2016 can be an invitation for the industry to refocus its efforts on application-led silicon design. The increasing power of general-purpose chips has been very useful for many applications through software; in this context, the diversification of dedicated Artificial Intelligence accelerators makes sense. So the idea is not to upgrade the CPU and the GPU but to make the difference in a different way. This emerging class of microprocessors, designed to accelerate artificial neural networks and machine learning algorithms for robotics, needs to be powerful. Many of them are manycore designs. In fact, they try to mirror the nature of biological neural networks, which are massively parallel.

In this framework, a Tensor Processing Unit (TPU), which is an application-specific integrated circuit for machine learning, can be a powerful solution. The TPU is designed for a higher volume of reduced-precision computation than a GPU. In this way, we concentrate all our efforts on volume rather than precision. The notion of error at this level of precision is not so important, because we need the order more than the exact value. The first-generation TPUs were limited to integer arithmetic, but with the second-generation TPUs we can calculate in floating point. With this crucial modification, TPUs are useful for both training and inference of machine learning models. One proof of the power of TPUs is the existence of AlphaGo and its results against the masters of the game of Go.
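As a minimal sketch of this trade-off (written in JAX for illustration, not actual TPU hardware code; the symmetric int8 quantization scheme here is a toy assumption, and bfloat16 is the 16-bit floating-point format of the second-generation TPU), the following compares a full-precision matrix multiply against an 8-bit integer version in the spirit of the first generation and a bfloat16 version in the spirit of the second:

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
a = jax.random.normal(k1, (256, 512), dtype=jnp.float32)  # activations
w = jax.random.normal(k2, (512, 128), dtype=jnp.float32)  # weights

# Full-precision reference result.
exact = a @ w

# First-generation style: 8-bit integer arithmetic. We scale each tensor
# into the int8 range, multiply, then undo the scaling (a toy symmetric
# quantization scheme, assumed here for illustration).
def quantize(x):
    scale = jnp.max(jnp.abs(x)) / 127.0
    return jnp.round(x / scale).astype(jnp.int8), scale

qa, sa = quantize(a)
qw, sw = quantize(w)
int8_result = (qa.astype(jnp.int32) @ qw.astype(jnp.int32)).astype(jnp.float32) * sa * sw

# Second-generation style: bfloat16 floating point, usable for training too.
bf16_result = (a.astype(jnp.bfloat16) @ w.astype(jnp.bfloat16)).astype(jnp.float32)

# The exact values drift, but the ordering of the outputs is largely kept:
# we need the order more than the exact value.
for name, approx in [("int8", int8_result), ("bfloat16", bf16_result)]:
    err = jnp.max(jnp.abs(exact - approx))
    agree = jnp.mean(jnp.argmax(exact, axis=1) == jnp.argmax(approx, axis=1))
    print(f"{name:9s} max error {err:8.4f}   argmax agreement {agree:.1%}")
```

Running this shows a measurable absolute error in both reduced-precision results, while the argmax of each output row, the "order" that classification and move-selection actually depend on, agrees with the full-precision result in the vast majority of cases.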