Before NVIDIA acquired it, Mellanox had developed a data processing unit called "BlueField." The product drew little attention, especially when details were first revealed six years ago.
NVIDIA Accelerates DPU Adoption via Linux Foundation Project
The DPU gives accelerators immediate access to the network without routing traffic through the standard x86 architecture. Since CPUs are better suited to running applications than managing PCIe traffic lanes, BlueField offloads that work and takes the pressure off other components. Today only a small handful of companies use DPUs in the workplace. NVIDIA plans to change this by driving DPU support through a Linux Foundation project.
The benefits of DPU technology are substantial, so it makes sense that NVIDIA wants to capitalize on it. Compared with the standard Ethernet NICs in wide use, a lesser-known DPU carries far more processing power on the card itself, making it resemble a microcomputer rather than a mere "data movement vehicle," reports website StorageReview.
The primary use of a DPU is moving data directly. A JBOF ("just a bunch of flash") enclosure accomplishes the same thing without the x86 architecture, which makes the approach more attractive. A JBOF system employs two PCIe Gen3 expansion cables to attach the storage array to one or more servers, allowing PCIe connections that would normally terminate at the CPU to transfer directly to the system's NVMe drives.
VAST Data uses NVIDIA's current DPU, based on the BlueField design, in highly dense 1U boxes sharing a staggering 675 TB of raw flash. Fungible, however, has created its own DPU design aimed at disaggregation; StorageReview has access to its array, and Fungible recently announced a GPU of its own design.
Why are DPUs not more widespread in data center management circles? It all boils down to ease of use. BlueField requires a heavy software stack to work with existing systems. Because of this, the product is not simple to install, which makes adoption much harder to accomplish. On top of that, established storage companies would rather take a faster path to designing and manufacturing their products than absorb the new coding strategy that DPUs require.
NVIDIA plans to make the technology more accessible in the marketplace by becoming a founding member of the Linux Foundation's Open Programmable Infrastructure project, also known as OPI. The move should make integrating DPUs into more systems easier and faster. NVIDIA has also recently opened up more of its DOCA APIs as open source, so quicker adoption could be around the corner.
According to a blog post from NVIDIA, the "OPI project aims to create a community-driven, standards-based, open ecosystem for accelerating networking and other data center infrastructure tasks using DPUs."
DOCA includes drivers, libraries, services, documentation, sample applications, and management tools to speed up and simplify application development and performance. It allows flexibility and portability for BlueField applications written using accelerated drivers or low-level libraries such as DPDK, SPDK, Open vSwitch, or OpenSSL. NVIDIA says it plans to continue this support, and that as part of OPI, developers will be able to create a common programming layer supporting many of these open drivers and libraries with DPU acceleration.
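To make the "common programming layer" idea concrete, here is a minimal sketch of the pattern it describes: application code targets one interface, and the layer selects a DPU-accelerated backend when one is present, falling back to the host CPU otherwise. All names here (`PacketBackend`, `DpuBackend`, and so on) are hypothetical illustrations, not part of DOCA, OPI, or any real SDK.

```python
from abc import ABC, abstractmethod


class PacketBackend(ABC):
    """Hypothetical interface a common programming layer could target."""

    @abstractmethod
    def process(self, packets: list[str]) -> list[str]: ...


class HostBackend(PacketBackend):
    """Fallback: do the packet work on the host CPU."""

    def process(self, packets: list[str]) -> list[str]:
        # Stand-in for real packet processing (parsing, filtering, etc.)
        return [p.upper() for p in packets]


class DpuBackend(PacketBackend):
    """Stand-in for a DPU-accelerated driver (DPDK/DOCA would sit underneath)."""

    def __init__(self, available: bool):
        self.available = available  # whether a DPU was detected

    def process(self, packets: list[str]) -> list[str]:
        # Same result as the host path; only the execution target differs.
        return [p.upper() for p in packets]


def select_backend(dpu: DpuBackend) -> PacketBackend:
    """The layer picks the implementation; the application never changes."""
    return dpu if dpu.available else HostBackend()


backend = select_backend(DpuBackend(available=False))
print(backend.process(["syn", "ack"]))  # → ['SYN', 'ACK']
```

The point of the pattern is that the application is written once against the shared interface, and acceleration becomes a deployment detail rather than a rewrite, which is the adoption problem OPI is meant to solve.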
DPUs offer real prospects for making infrastructure faster, safer, and more efficient. With more data centers coming online and efficient, green technology growing in importance, DPUs are well positioned to find a warmer welcome in infrastructures looking for new ways to drive efficiency.