Gigabyte Unveils HPC Systems Powered by NVIDIA’s Tesla A100 Ampere GPUs

May 16, 2020

GIGABYTE has announced four NVIDIA Tesla A100 Ampere GPU-powered systems in its HPC lineup: the G492-ZD0, G492-ID0, G262-ZR0, and G262-IR0. The products are split first by processor, with each server's chassis housing either 2nd Gen AMD EPYC or 3rd Gen Intel Xeon Scalable CPUs.

Gigabyte Unveils NVIDIA Tesla A100 Ampere GPU Powered HPC Systems in AMD EPYC & Intel Xeon Flavors

The other distinguishing factor is GPU capacity: the G492 series holds a total of eight GPUs, while the G262 series holds a total of four. These servers are designed for data centers, where scientists, researchers, and engineers will run GPU-accelerated HPC and AI (artificial intelligence) applications to further their work.


                              NVIDIA HGX A100 8-GPU    NVIDIA HGX A100 4-GPU
2nd Gen AMD EPYC              G492-ZD0                 G262-ZR0
3rd Gen Intel Xeon Scalable   G492-ID0                 G262-IR0

These products combine NVIDIA NVSwitch, NVIDIA NVLink, and NVIDIA A100 GPUs, allowing them to scale to the user's needs. The NVIDIA accelerated data center platform also features NVIDIA Mellanox HDR InfiniBand high-speed networking as well as NVIDIA Magnum IO software.

The Magnum IO software supports both GPUDirect RDMA and GPUDirect Storage; with this combination, a single HGX A100 platform can be expanded from four GPUs to eight, or beyond if the situation calls for that power. The A100 Tensor Core GPU was built to accelerate major deep learning frameworks and more than 700 HPC applications. These products also support the NGC catalog of container software, which allows developers to get programs up and running easily.
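As a rough illustration of the NGC workflow, a developer on one of these servers would typically pull a prebuilt container from the NGC registry and launch it with GPU access. This is a minimal sketch assuming Docker and the NVIDIA Container Toolkit are already installed; the image tag shown is illustrative, and current tags should be checked on the NGC catalog itself.

```shell
# Pull a GPU-accelerated framework image from the NGC catalog
# (tag is an example; browse ngc.nvidia.com for current versions).
docker pull nvcr.io/nvidia/tensorflow:20.03-tf2-py3

# Launch the container with access to all GPUs in the system
# (requires the NVIDIA Container Toolkit on the host).
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:20.03-tf2-py3
```

On an eight-GPU G492 system, `--gpus all` exposes every A100 to the containerized framework, which is what lets the same container image scale from a four-GPU to an eight-GPU platform without modification.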

Gigabyte's platforms support a large number of GPUs in either 2U or 4U of rack space. The chassis design separates the GPU and CPU components, using a barrier to form a large air tunnel and prevent heat conduction between the two zones. Both product lines are built with 80 PLUS certified high-efficiency power supplies and implement N+1 redundancy to ensure no data loss during a power surge or power outage.
