NVIDIA, Intel & ARM Bet Their AI Future on FP8, Whitepaper For 8-Bit FP Published

Three major tech and AI companies, Arm, Intel, and NVIDIA, have joined hands to standardize the brand new FP8, or 8-bit floating point, standard. The companies have published a whitepaper describing an 8-bit floating point specification and its variations, called FP8 with the variants E5M2 and E4M3, to provide a common interchange format that works for both artificial intelligence (AI) inference and training.
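Per the whitepaper, the two variants trade exponent bits for mantissa bits: E4M3 (4 exponent bits, 3 mantissa bits, bias 7) favors precision for training activations and weights, while E5M2 (5 exponent bits, 2 mantissa bits, bias 15) favors dynamic range for gradients. The sketch below is an illustrative decoder written for this article, not the companies' reference code, showing how an 8-bit pattern maps to a real value under each layout:

```python
def decode_fp8(byte, fmt="E4M3"):
    """Decode an 8-bit FP8 value to a Python float (illustrative sketch).

    Bit layouts per the FP8 whitepaper:
      E4M3: 1 sign, 4 exponent (bias 7), 3 mantissa; no infinities,
            NaN is only the all-ones pattern S.1111.111.
      E5M2: 1 sign, 5 exponent (bias 15), 2 mantissa; IEEE-style
            infinities and NaNs.
    """
    if fmt == "E4M3":
        exp_bits, man_bits, bias = 4, 3, 7
    elif fmt == "E5M2":
        exp_bits, man_bits, bias = 5, 2, 15
    else:
        raise ValueError(f"unknown format: {fmt}")

    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    max_exp = (1 << exp_bits) - 1

    # E5M2 keeps the IEEE convention: top exponent row is inf/NaN.
    if fmt == "E5M2" and exp == max_exp:
        return sign * float("inf") if man == 0 else float("nan")
    # E4M3 reclaims most of the top row for normal values; only the
    # all-ones mantissa encodes NaN, extending max normal to +/-448.
    if fmt == "E4M3" and exp == max_exp and man == (1 << man_bits) - 1:
        return float("nan")

    if exp == 0:  # subnormal range
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)
```

Decoding the largest normal patterns illustrates the trade-off: E4M3 tops out at 448, while E5M2 reaches 57344 but with one fewer bit of mantissa precision.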

NVIDIA, ARM & Intel Set Eyes On FP8 "8-Bit Floating Point" For Their Future AI Endeavors

In theory, this cross-industry spec alignment between the three tech giants will allow AI models to run interchangeably across hardware platforms, speeding the development of AI software.


Artificial intelligence increasingly demands innovation across both software and hardware to deliver the computational throughput the technology needs to advance. The requirements for AI computation have grown over the last few years, and especially over the past year. One area of AI research gaining importance in closing this compute gap is reducing numeric-precision requirements in deep learning, which improves both memory and computational efficiency.

Image source: "FP8 Formats For Deep Learning," via NVIDIA, Arm, and Intel.

Intel intends to support the AI format across its roadmap, which covers processors, graphics cards, and numerous AI accelerators, including its Habana Gaudi deep learning accelerator. Reduced-precision methods promise to exploit the inherent noise resilience of deep neural networks to improve compute efficiency.

Image source: "FP8 Formats For Deep Learning," via NVIDIA, Arm, and Intel.

The new FP8 specification minimizes deviations from existing IEEE 754 floating point formats, striking a comfortable balance between software and hardware so it can leverage existing AI implementations, speed up adoption, and improve developer productivity.
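One concrete payoff of staying close to IEEE 754: E5M2 uses the same sign and exponent layout as IEEE binary16 (FP16), so an FP16 value can be narrowed to E5M2 simply by dropping the low 8 mantissa bits. The sketch below is illustrative only (real hardware rounds rather than truncates) and relies on Python's `struct` half-precision `"e"` format:

```python
import struct

def fp16_bits(x):
    """Return the IEEE binary16 bit pattern of a Python float."""
    return struct.unpack("<H", struct.pack("<e", x))[0]

def fp16_to_e5m2_truncate(x):
    """Narrow an FP16 value to an E5M2 byte by truncating the mantissa.

    E5M2 shares binary16's 1-bit sign and 5-bit exponent (bias 15),
    so keeping the top 8 bits of the 16-bit pattern yields a valid
    E5M2 encoding. Production converters use round-to-nearest-even
    instead of plain truncation.
    """
    return (fp16_bits(x) >> 8) & 0xFF
```

For example, 1.0 in FP16 is 0x3C00, and its top byte 0x3C is exactly 1.0 in E5M2. E4M3 has no such shortcut, since its exponent width and bias differ from any IEEE format.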


The paper upholds the principle of leveraging algorithms, concepts, and conventions built on IEEE 754 standardization across Intel, Arm, and NVIDIA. A more consistent standard among all companies will grant the greatest latitude for future AI innovation while maintaining current industry conventions.

News Sources: Arm, FP8 specification
