NVIDIA Hopper H100 GPU Goes Into Full Production, Ada Lovelace Comes To L40 Server GPU, Grace CPU Superchip Further Detailed

NVIDIA's GTC 2022 keynote was overshadowed by the gaming announcements made earlier today, which are definitely worth checking out, but during the main keynote, CEO Jensen Huang revealed several brand-new products, including the Ada Lovelace L40 GPU and the OVX and IGX systems, and confirmed that Hopper H100 GPUs are now in full production.

NVIDIA Highlights Hopper H100 Availability, Ada Lovelace L40 GPU, IGX/OVX Systems & Grace CPU Superchips at GTC 2022

Starting with the flagship Hopper chip, NVIDIA confirmed that the H100 GPU is now in full production and that its partners will roll out the first wave of products in October this year. The company also confirmed that the global rollout of Hopper will come in three phases: the first consists of pre-orders for NVIDIA DGX H100 systems and free hands-on labs offered to customers directly by NVIDIA, with systems such as Dell PowerEdge servers now available on NVIDIA LaunchPad.

NVIDIA Hopper in Full Production

In the second phase, leading OEM partners will begin shipping in the coming weeks, with over 50 server models available on the market by the end of the year. Lastly, the company expects dozens more to enter the market in the first half of 2023.

Global Rollout of Hopper

For customers who want to immediately try the new technology, NVIDIA announced that H100 on Dell PowerEdge servers is now available on NVIDIA LaunchPad, which provides free hands-on labs, giving companies access to the latest hardware and NVIDIA AI software.

Customers can also begin ordering NVIDIA DGX H100 systems, which include eight H100 GPUs and deliver 32 petaflops of performance at FP8 precision. NVIDIA Base Command and NVIDIA AI Enterprise software power every DGX system, enabling deployments from a single node to an NVIDIA DGX SuperPOD supporting advanced AI development of large language models and other massive workloads.
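The quoted 32-petaflop system figure follows directly from the eight-GPU configuration. A quick sanity check, assuming roughly 4 PFLOPS of FP8 throughput per H100 (NVIDIA's datasheet figure with sparsity enabled):

```python
# Sanity check of the DGX H100 FP8 figure.
# Assumption: ~4 PFLOPS FP8 per H100 with sparsity (per NVIDIA's H100 specs).
GPUS_PER_DGX = 8
FP8_PFLOPS_PER_H100 = 4  # approximate, with sparsity

total_pflops = GPUS_PER_DGX * FP8_PFLOPS_PER_H100
print(total_pflops)  # 32, matching the quoted system figure
```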

H100-powered systems from the world’s leading computer makers are expected to ship in the coming weeks, with over 50 server models in the market by the end of the year and dozens more in the first half of 2023. Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.

Additionally, some of the world’s leading higher education and research institutions will be using H100 to power their next-generation supercomputers. Among them are the Barcelona Supercomputing Center, Los Alamos National Lab, Swiss National Supercomputing Centre (CSCS), Texas Advanced Computing Center and the University of Tsukuba.


The NVIDIA L40, powered by the Ada Lovelace architecture

The second major announcement concerns the L40 GPU, a product aimed at the data center segment and built on the newly announced Ada Lovelace GPU architecture. The L40's full specifications have not been disclosed yet, but it comes with 48 GB of GDDR6 memory (ECC), four DP 1.4a display outputs, a maximum power consumption of 300W, and a dual-slot passive cooler measuring 4.4" x 10.5". The card is powered by a single 16-pin CEM5 connector.

The NVIDIA L40 GPU supports all major vGPU software, such as NVIDIA vPC/vApps and NVIDIA RTX Virtual Workstation (vWS), and comes with Level 3 NEBS compliance plus secure boot (root of trust) support. The most notable aspect of this product is that it features three AV1 encode and three decode units. That is a bump over the RTX 6000 and other GeForce RTX 40 graphics cards, which feature dual AV1 engines.

GPU Architecture: NVIDIA Ada Lovelace
GPU Memory: 48 GB GDDR6 with ECC
Display Connectors: 4x DP 1.4a
Max Power Consumption: 300W
Form Factor: 4.4" (H) x 10.5" (L), dual slot
Thermal: Passive
vGPU Software Support: NVIDIA vPC/vApps, NVIDIA RTX Virtual Workstation (vWS)
NVENC | NVDEC: 3x | 3x (includes AV1 encode & decode)
Secure Boot with Root of Trust: Yes
NEBS Ready: Yes / Level 3
Power Connector: 1x PCIe CEM5 16-pin

Grace Hopper Superchip Is Ideal for Next-Gen Recommender Systems

NVIDIA has also further detailed its Grace Hopper Superchip which it claims is ideal for recommender systems.

NVLink Accelerates Grace Hopper

Grace Hopper achieves this because it’s a superchip — two chips in one unit, sharing a superfast chip-to-chip interconnect. It’s an Arm-based NVIDIA Grace CPU and a Hopper GPU that communicate over NVIDIA NVLink-C2C. What’s more, NVLink also connects many superchips into a super system, a computing cluster built to run terabyte-class recommender systems.

NVLink carries data at a whopping 900 gigabytes per second, 7x the bandwidth of PCIe Gen 5, the interconnect most upcoming leading-edge systems will use. That means Grace Hopper can feed recommenders 7x more of the embeddings — data tables packed with context — that they need to personalize results for users.
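The 7x claim can be roughly verified with back-of-the-envelope math, assuming the comparison is against a PCIe Gen 5 x16 link at 32 GT/s per lane, counted bidirectionally and ignoring encoding overhead:

```python
# Rough check of the quoted 7x bandwidth claim.
# Assumption: PCIe Gen 5 x16 raw bandwidth, bidirectional, no encoding overhead.
NVLINK_C2C_GBPS = 900
PCIE5_X16_GBPS = 32 * 16 * 2 / 8  # 32 GT/s x 16 lanes x 2 directions / 8 bits = 128 GB/s

ratio = NVLINK_C2C_GBPS / PCIE5_X16_GBPS
print(round(ratio, 1))  # ~7.0, in line with NVIDIA's figure
```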

More Memory, Greater Efficiency

The Grace CPU uses LPDDR5X, a type of memory that strikes an optimal balance of bandwidth, energy efficiency, capacity, and cost for recommender systems and other demanding workloads. It provides 50% more bandwidth while using an eighth of the power per gigabyte compared with traditional DDR5 memory subsystems.

Any Hopper GPU in a cluster can access Grace’s memory over NVLink. It’s a feature of Grace Hopper that provides the largest pools of GPU memory ever. In addition, NVLink-C2C requires just 1.3 picojoules per bit transferred, giving it more than 5x the energy efficiency of PCIe Gen 5.
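From those figures, the implied PCIe Gen 5 energy cost can be back-calculated; a sketch taking the "more than 5x" claim at its lower bound:

```python
# Implied PCIe Gen 5 energy per bit from NVIDIA's quoted figures.
# If NVLink-C2C moves a bit for 1.3 pJ and is "more than 5x" as efficient,
# PCIe Gen 5 must cost more than ~6.5 pJ per bit under the same comparison.
NVLINK_C2C_PJ_PER_BIT = 1.3
EFFICIENCY_FACTOR = 5  # lower bound of the claim

implied_pcie_pj = NVLINK_C2C_PJ_PER_BIT * EFFICIENCY_FACTOR
print(implied_pcie_pj)  # 6.5 pJ/bit, a lower bound for PCIe Gen 5
```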

The overall result is that recommenders get up to 4x more performance and greater efficiency using Grace Hopper than using Hopper with traditional CPUs (see chart below).


NVIDIA Announces OVX Computing Systems

NVIDIA has also revealed its brand-new OVX system, which combines up to eight of the L40 Ada Lovelace GPUs mentioned above with enhanced networking technology to deliver groundbreaking real-time graphics, AI, and digital twin simulation capabilities. OVX systems with L40 GPUs are expected to hit the market in early 2023 through leading partners such as Inspur, Lenovo, and Supermicro.

NVIDIA also introduced its IGX system mainboard, an edge AI platform purpose-built for industrial and medical environments.

Powering the new OVX systems is the NVIDIA L40 GPU, also based on the Ada Lovelace GPU architecture, which brings the highest levels of power and performance for building complex industrial digital twins.

The L40 GPU’s third-generation RT Cores and fourth-generation Tensor Cores will deliver powerful capabilities to Omniverse workloads running on OVX, including accelerated ray-traced and path-traced rendering of materials, physically accurate simulations, and photorealistic 3D synthetic data generation. The L40 will also be available in NVIDIA-Certified Systems servers from major OEM vendors to power RTX workloads from the data center.

In addition to the L40 GPU, the new NVIDIA OVX includes the ConnectX-7 SmartNIC, providing enhanced network and storage performance and the precision timing synchronization required for true-to-life digital twins. ConnectX-7 includes support for 200G networking on each port and fast in-line data encryption to speed up data movement and increase security for digital twins.
