Intel Demos Lake Crest ‘Nervana’ Neural Network Accelerator At NIPS 2017 – Complete Board Pictured

What appear to be the first pictures of the complete Nervana accelerator have been spotted on Twitter. The Nervana processor is part of Intel’s Lake Crest platform, which covers deep learning accelerators built around a massively parallel approach to computing. These are essentially GPGPUs in all but name, designed to brute-force DNN training scenarios and give Intel a piece of what is shaping up to be the market of the future.

Intel ‘Nervana’ Neural Network accelerators pictured, will be powered by 2x 8-pin connectors

The Nervana ‘Neural Network Processor’ uses a parallel, clustered computing approach and is built much like a conventional GPU. It carries 32 GB of HBM2 memory in four discrete 8 GB stacks, all connected to 12 processing clusters that in turn contain further cores (the exact count is unknown at this point). Total memory access speeds combine to a whopping 8 terabits per second. An interposer has been used to full effect, and Intel’s homegrown interconnect seals the deal.
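For a sense of scale, those quoted figures are easy to sanity-check. The short Python snippet below does the arithmetic; the even split of bandwidth across the four stacks is our assumption, not a published Intel figure.

```python
# Back-of-envelope check of the quoted Nervana memory figures.
# Assumption (ours): the 8 Tb/s aggregate is split evenly across
# the four HBM2 stacks.
TOTAL_TBPS = 8        # quoted aggregate memory access speed, terabits/s
STACKS = 4            # four HBM2 stacks
GB_PER_STACK = 8      # 8 GB each -> 32 GB total

per_stack_tbps = TOTAL_TBPS / STACKS       # 2 Tb/s per stack
aggregate_gbps = TOTAL_TBPS * 1000 / 8     # bits -> bytes: ~1,000 GB/s

print(f"Capacity: {STACKS * GB_PER_STACK} GB")
print(f"Per-stack bandwidth: {per_stack_tbps:.0f} Tb/s")
print(f"Aggregate bandwidth: ~{aggregate_gbps:.0f} GB/s (about 1 TB/s)")
```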

Unfortunately, the only hard information we have (apart from the pictures of the complete board) is that the card will be powered by 2x 8-pin connectors, as can be seen in the picture. Benchmark numbers and specification details are still sparse, and I assume those who know are under NDA. The HBM2 stacks and the die itself are readily visible, however, as are the interconnect ports and the PCIe finger.

The Lake Crest chip is entirely different from Xeon Phi hardware. It is designed specifically to accelerate AI workloads at an unprecedented pace. Intel is using a new arithmetic format called “Flexpoint” inside the arithmetic nodes of the Lake Crest chip, which it claims will increase the parallelism of arithmetic operations by a factor of 10. The chip will also feature an MCM (Multi-Chip Module) design.
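The core idea behind Flexpoint is to store a tensor as integer mantissas that all share a single exponent, so the hardware can run cheap fixed-point arithmetic while still covering a float-like dynamic range. The sketch below is a minimal illustration of that idea; the 16-bit mantissa width, exponent selection, and rounding scheme are simplifying assumptions on our part, not Intel’s published algorithm.

```python
import numpy as np

def flexpoint_encode(tensor, mantissa_bits=16):
    """Toy Flexpoint-style encoding: every element of the tensor shares
    one exponent, so arithmetic on the mantissas can be plain integer
    math. Illustrative only, not Intel's implementation."""
    max_int = 2 ** (mantissa_bits - 1) - 1       # signed mantissa range
    max_abs = np.max(np.abs(tensor))
    # Shared exponent chosen so the largest value just fits the mantissa.
    exponent = int(np.ceil(np.log2(max_abs / max_int))) if max_abs > 0 else 0
    scale = 2.0 ** exponent
    mantissas = np.round(tensor / scale).astype(np.int16)
    return mantissas, exponent

def flexpoint_decode(mantissas, exponent):
    """Recover approximate float values from mantissas + shared exponent."""
    return mantissas.astype(np.float32) * (2.0 ** exponent)

weights = np.random.randn(4, 4).astype(np.float32)
m, e = flexpoint_encode(weights)
print("shared exponent:", e)
print("max abs error:", np.max(np.abs(weights - flexpoint_decode(m, e))))
```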

Intel is claiming big numbers for the Nervana AI chip and has also revealed that it will be highly scalable, something its CEO, Brian Krzanich, has already described as the path forward for AI learning. The chip will feature 12 bidirectional high-bandwidth links and seamless data transfer via the interconnects. These proprietary inter-chip links will provide bandwidth up to 20 times that of PCI Express links.
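To put that interconnect claim in rough perspective: the article does not say which PCI Express configuration the 20x figure is measured against, nor whether it applies per link or in aggregate. The snippet below assumes a PCIe 3.0 x16 baseline (~15.75 GB/s per direction) and treats 20x as the aggregate across all 12 links.

```python
# Rough scale of the "up to 20x PCIe" interconnect claim.
# Assumptions (ours): baseline is PCIe 3.0 x16 (~15.75 GB/s per
# direction) and the 20x factor describes the aggregate of 12 links.
PCIE3_X16_GBPS = 15.75    # usable PCIe 3.0 x16 bandwidth per direction
SPEEDUP = 20              # Intel's claimed factor
LINKS = 12                # bidirectional inter-chip links

aggregate = PCIE3_X16_GBPS * SPEEDUP    # ~315 GB/s per direction
per_link = aggregate / LINKS            # ~26 GB/s per direction per link

print(f"Aggregate: ~{aggregate:.0f} GB/s per direction")
print(f"Per link:  ~{per_link:.0f} GB/s per direction")
```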

Nervana itself may or may not make it big in the autonomous driving segment given the head start NVIDIA has, but Intel has made sure it has a fighter in the AI department and a foothold in the DNN training industry. We cannot, however, speculate on the exact impact this chip will make, since 1) Intel has not disclosed the full specifications of the Nervana chip and 2) no benchmarks have been revealed so far to compare it against a conventional GPGPU cluster running cuDNN.
