NVIDIA DLSS Source Code Might Have Been Leaked in the Cyberattack

Alessio Palumbo
NVIDIA DLAA is the visual fidelity counterpart to NVIDIA DLSS.

The source code of NVIDIA DLSS (Deep Learning Super Sampling) might have been leaked in the aftermath of the recent cyberattack that hit the company.

TechPowerUp just posted a screenshot (received through an anonymous tip) showing plenty of source files and even a so-called NVIDIA DLSS Programming Guide document. For the record, NVIDIA confirmed today that proprietary information was stolen in the aforementioned cyberattack.


Needless to say, those in possession of the source code could analyze it for all sorts of purposes. It is even conceivable that AMD or Intel could look into the source code, if it were made publicly available, to get ideas on how to improve their respective FSR and XeSS technologies (though they would have to do so while steering clear of lifting anything as-is, for legal reasons).

NVIDIA DLSS debuted in 2018 with the Turing architecture, which included RT cores for real-time hardware ray tracing support and Tensor Cores for AI-based applications such as DLSS or DLDSR.

The first generation of Deep Learning Super Sampling, while innovative, suffered from a variety of issues. First, it was complex for game developers to implement, as it required per-game training of the neural network; second, it often produced blurry images and/or artifacts, mostly due to its single-frame approach.

NVIDIA DLSS 2.0, which launched in 2020, resolved both of these problems. The latest version of Deep Learning Super Sampling uses a multi-frame approach based on Temporal Antialiasing Upsampling. Data from previous frames (including the raw low-resolution input, motion vectors, depth buffers, and exposure/brightness) feeds into the image reconstruction process: the neural network combines samples from past frames with the current frame to reduce aliasing and preserve, or even restore, finer detail. Additionally, NVIDIA developed a generalized neural network model that can be applied to any game, eliminating the need for per-game training.
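To give a rough intuition for the temporal accumulation idea described above, here is a minimal, purely illustrative sketch in Python/NumPy. This is not NVIDIA's code (DLSS replaces the hand-tuned blend below with a trained neural network and runs on Tensor Cores); the function name and simple exponential blend are assumptions made for demonstration only.

```python
import numpy as np

def temporal_upsample(history, current_lr, motion, alpha=0.1):
    """Illustrative temporal accumulation sketch (NOT NVIDIA's DLSS code).

    history:    (H, W) previously accumulated high-res frame
    current_lr: (h, w) new low-resolution frame
    motion:     (H, W, 2) per-pixel motion vectors, in high-res pixels
    alpha:      blend weight given to the new sample
    """
    H, W = history.shape
    h, w = current_lr.shape

    # Naive nearest-neighbour upscale of the new low-res sample.
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    upscaled = current_lr[np.ix_(ys, xs)]

    # Reproject the history buffer along the motion vectors so that
    # moving objects line up with the current frame.
    yy, xx = np.mgrid[0:H, 0:W]
    src_y = np.clip((yy - motion[..., 1]).round().astype(int), 0, H - 1)
    src_x = np.clip((xx - motion[..., 0]).round().astype(int), 0, W - 1)
    reprojected = history[src_y, src_x]

    # Exponential blend: detail accumulates over many frames while the
    # result stays responsive to new samples.
    return (1.0 - alpha) * reprojected + alpha * upscaled
```

In a real reconstruction pipeline the blend weight would vary per pixel (informed by depth, disocclusion, and exposure data), which is exactly the kind of decision DLSS 2.0 delegates to its learned model.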

So far, DLSS offers the best combination of quality and performance. That may change soon with the launch of Intel's XeSS, however; unlike FSR, and similarly to NVIDIA DLSS 2.0, XeSS is based on neural networks and won't need to be trained per game. Furthermore, XeSS won't require vendor-specific hardware such as the Tensor Cores found in GeForce RTX graphics cards: it accelerates inference on Intel's own XMX units where available, with a DP4a fallback path for other GPUs.

The first game due to support XeSS is the PC version of Death Stranding Director's Cut, scheduled to launch on March 30th.
