ML upscaling in FSR Redstone marks the point at which AMD is essentially pursuing the same principle on which NVIDIA’s DLSS has been based for years, but with its own technical and strategic twist. While AMD deliberately relied on purely shader-based, classic algorithms for FSR 1 to 3 in order to address the broadest possible hardware base, DLSS was designed for dedicated AI hardware and neural networks from the outset. With Redstone, AMD now makes the transition from analytical methods to a trained model that, as with DLSS, reconstructs a high-resolution result from low-resolution images, depth buffers and motion information. The goal is an output that comes as close as possible to native resolution.
Two opposing philosophies are thus converging. NVIDIA focused on maximum image quality and AI infrastructure early on, even though this excludes older or competing GPUs. AMD initially supported as many systems as possible and is now trading part of that compatibility advantage for a late but consistent entry into ML-based upscaling. Redstone is the point at which FSR and DLSS meet conceptually.
Comparison of input data and pipeline
In both FSR Redstone and DLSS, ML upscaling is anchored as a separate processing step in the render pipeline. In both cases, a finished frame is not simply upscaled, but the network receives several input channels in order to reconstruct an image that contains more information than is available in the pure raster image. On the AMD side, Redstone’s ML upscaling is fed from three central sources. The first channel is the color image rendered in reduced resolution, which represents the visible content of the scene. The second channel contains depth information, i.e. the distance of each pixel to the camera. The third channel contains motion vectors that describe how pixels move from one frame to the next.
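The three input channels described above can be sketched as a simple data container. The class name, shapes and layout below are purely illustrative assumptions, not AMD's or NVIDIA's actual API:

```python
import numpy as np

# Hypothetical container for the three input channels an ML upscaler
# consumes per frame; names and shapes are illustrative only.
class UpscalerInputs:
    def __init__(self, render_w: int, render_h: int):
        # Channel 1: color image rendered at reduced resolution (H x W x RGB)
        self.color = np.zeros((render_h, render_w, 3), dtype=np.float32)
        # Channel 2: depth, i.e. distance of each pixel to the camera (H x W)
        self.depth = np.zeros((render_h, render_w), dtype=np.float32)
        # Channel 3: motion vectors, per-pixel screen-space offset from the
        # previous frame, in pixels (H x W x (dx, dy))
        self.motion = np.zeros((render_h, render_w, 2), dtype=np.float32)

# e.g. a 1280x720 internal render target feeding a 1080p output
inputs = UpscalerInputs(1280, 720)
print(inputs.color.shape, inputs.depth.shape, inputs.motion.shape)
```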
DLSS uses exactly the same type of raw data, but with somewhat greater emphasis on temporal history buffers, as NVIDIA trains its models very aggressively to accumulate and reuse information across multiple frames. As a result, both systems work on a similar data set. The difference lies less in the inputs than in the respective model architecture, the training and the way this information is weighted.
In the practical pipeline, both FSR Redstone and DLSS initially render at a lower internal resolution to save computing time. The ML model then takes over upscaling and generates the final high-resolution frame from the combined inputs, which then undergoes downstream post-processing steps. The practical difference is that DLSS has been offering this step as standard for several GPU generations, while AMD is using a similarly complex model for the first time with Redstone.
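A minimal sketch of this pipeline order, with a trivial nearest-neighbor repeat standing in for the neural network (all function names are assumptions for illustration):

```python
import numpy as np

def render_frame(w: int, h: int) -> np.ndarray:
    # Stand-in for the engine's low-resolution raster pass.
    rng = np.random.default_rng(0)
    return rng.random((h, w, 3), dtype=np.float32)

def ml_upscale(frame: np.ndarray, scale: int) -> np.ndarray:
    # Placeholder for the trained network: nearest-neighbor repeat.
    # The real model would also consume depth and motion vectors.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def post_process(frame: np.ndarray) -> np.ndarray:
    # Downstream steps (tone mapping, UI, film grain) run at full resolution.
    return np.clip(frame, 0.0, 1.0)

low = render_frame(960, 540)                # internal resolution
final = post_process(ml_upscale(low, 2))    # upscale, then post-process
print(final.shape)  # (1080, 1920, 3)
```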
Training process and model characteristics
The core of ML upscaling is the neural network, which is trained in a complex offline process. AMD and NVIDIA basically follow the same approach here. First, scenes are rendered in very high quality and resolution to serve as a reference. Artificially low-resolution versions are generated from these images, which correspond to what is later generated in real time. In parallel, depth buffers, motion vectors and other auxiliary data are recorded.
During training, the network therefore sees a combination of a reduced input and the ideal target output. The aim is to learn a reconstruction that later generalizes to previously unseen scenes. The optimization process minimizes the deviation between the reconstructed output and the reference. Both AMD and NVIDIA use large amounts of data and specialized accelerators in the data center for this.
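The training loop described above can be reduced to a toy example. Here a single brightness gain stands in for the millions of network weights; the downscale factor, learning rate and step count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def downscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Artificially reduce resolution by block averaging.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale(img: np.ndarray, factor: int) -> np.ndarray:
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

reference = rng.random((64, 64))       # high-resolution ground-truth frame
low_res = downscale(reference, 2)      # what the game renders in real time

gain = 0.5                             # the single "trainable weight"
for _ in range(200):
    prediction = upscale(low_res, 2) * gain
    error = prediction - reference
    loss = np.mean(error ** 2)         # deviation from the reference
    grad = 2.0 * np.mean(error * upscale(low_res, 2))
    gain -= 0.5 * grad                 # gradient descent step

print(round(gain, 2))  # converges toward 1.0
```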
The difference lies in the structure and alignment of the models. NVIDIA generally trains DLSS very strongly on specific render paths of selected engines and games, sometimes with game-specific adaptations. The models are closely linked to the respective DLSS major release. AMD traditionally pursues a more generic approach, which is designed to be more universal on the model side and can be integrated into many engines via API without each title requiring its own set of network weights.
In practical terms, this means that DLSS can deliver a very aggressively optimized, game-specific reconstruction in some titles, while FSR Redstone tends to use a broadly applicable model that does not need to be so tightly integrated with individual engines. The price for this is that AMD has to pay particular attention to generalization in order to achieve stable results in as many titles as possible.
Architecture and hardware support
The hardware basis is a major difference. NVIDIA has been using dedicated tensor cores optimized for mixed-precision matrix operations since the first DLSS generation. These units are designed for neural inference from the outset and significantly reduce the load on shaders and classic computing units. ML reconstruction is thus processed in a separate path, which is designed to be highly scalable and achieves very high throughputs in the newer RTX generations.
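The mixed-precision scheme these units implement in hardware can be illustrated numerically: multiply with low-precision inputs, accumulate in FP32. This is a conceptual numpy sketch of the numeric scheme only, not of the hardware itself:

```python
import numpy as np

def mixed_precision_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Round inputs to FP16 (as tensor-core-style units consume them) ...
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # ... then accumulate the products in FP32 to limit rounding error.
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(2)
a = rng.random((4, 8), dtype=np.float32)
b = rng.random((8, 4), dtype=np.float32)
out = mixed_precision_matmul(a, b)
print(out.dtype, out.shape)  # float32 (4, 4)
```

The result stays close to the full-precision product because only the inputs, not the running sums, are rounded to FP16.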
AMD, on the other hand, only introduced dedicated AI blocks at this depth with RDNA 4 and the Radeon RX 9000 series, and uses them consistently for ML upscaling. Previously, ML-like approaches had to be implemented via shaders, which quickly ran into efficiency limits, especially on older GPUs. With Redstone, a hardware basis is now available for the first time that points in the same conceptual direction as NVIDIA’s Tensor Cores.
This has several consequences. On the one hand, AMD officially restricts Redstone’s ML upscaling to the current GPU generation, since only there is the full inference performance available without unacceptable performance losses. On the other hand, AMD can design the network so that it is served optimally by the AI blocks, similar to how DLSS tailors its architecture to the Tensor Cores. NVIDIA benefits from a head start here, while AMD has the advantage of being able to tailor its first dedicated AI implementation to modern requirements.
Reconstruction quality and detail rendering
In essence, the reconstruction quality determines whether an ML upscaler is just a replacement for classic upscaling or actually offers added value compared to native resolution. Both DLSS and FSR Redstone are trained to restore details that are not fully present in the low-resolution input. This applies to fine patterns such as fences, vegetation, thin cables, fonts or distant geometries.
DLSS has a clear advantage here due to its long development time. Earlier DLSS versions suffered from artifacts and a sometimes soft appearance, but from DLSS 2 onwards NVIDIA has gradually achieved a level of detail that in many scenes subjectively exceeds native resolution. DLSS is particularly strong at recognizing recurring patterns and reconstructing them precisely, because it has encountered these patterns in large amounts of training data.
FSR Redstone catches up in precisely this area. Compared to FSR 2 and 3, the neural network is much better at estimating what a detail should look like at higher resolution instead of merely sharpening or smoothing it. Edge fringing, moiré effects and pixel noise can be controlled better because the network has a notion of the ideal structure. Early comparisons have already shown that in static scenes and higher quality modes, FSR Redstone can achieve image sharpness in the range of current DLSS versions, at least in many game situations.
DLSS currently retains a slight advantage in the temporal dimension because NVIDIA has spent a long time optimizing history processing and ghosting suppression. AMD’s ML upscaling reduces typical temporal artifacts compared to FSR 2, but in combination with frame generation and ray tracing reconstruction it still has to prove its full strength across many titles. The practical implementation per game will be decisive here.
Temporal stability and integration in frame generation
ML upscaling is closely linked to temporal processing. Both manufacturers use motion vectors and frame histories to collect information across multiple images and thus reconstruct more structure than a single frame can provide. At NVIDIA, this connection is particularly pronounced because DLSS, frame generation and ray reconstruction are understood as related components of a uniform DLSS pipeline. DLSS receives not just one, but several past frames and can therefore track edge progressions and object movements very finely. The frame generation based on this benefits because it works on an already very clean reconstructed basis.
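The temporal accumulation described above can be sketched as motion-vector reprojection plus an exponential blend. The sign convention, integer offsets and blend factor below are simplifying assumptions:

```python
import numpy as np

def reproject(history: np.ndarray, motion: np.ndarray) -> np.ndarray:
    # Fetch each pixel's value from its position in the previous frame.
    # Motion vectors here point from the current pixel back to its
    # previous position (integer offsets for simplicity).
    h, w = history.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    return history[src_y, src_x]

def accumulate(current, history, motion, alpha=0.1):
    # Exponential blend: mostly history for stability, a little new sample.
    return alpha * current + (1.0 - alpha) * reproject(history, motion)

h = w = 4
history = np.ones((h, w), dtype=np.float32)   # accumulated past frames
current = np.zeros((h, w), dtype=np.float32)  # new, noisy sample
motion = np.zeros((h, w, 2), dtype=np.float32)  # static scene
out = accumulate(current, history, motion)
print(out[0, 0])  # 0.9
```

A real upscaler would additionally reject stale history, e.g. via depth comparison, to avoid the ghosting mentioned below.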
AMD is pursuing the same direction with Redstone, but with its own structure. ML upscaling first generates the high-resolution frame, ray regeneration ensures a cleaner RT signal and only then does frame generation begin. If all three components are harmoniously coordinated, FSR Redstone can in practice generate a very stable and high-frequency output video from a relatively coarse input signal, just like DLSS. Whether this reaches or exceeds the DLSS level in all scenes depends not only on the models themselves, but also on the engine integration, the quality of the motion vectors and the type of post-processing effects used. In engines that were already strongly oriented towards NVIDIA, DLSS will naturally be more optimally embedded. Redstone still has to earn this level of integration.
Latency behavior in comparison
ML upscaling is always an additional processing step and therefore always a latency factor. NVIDIA has the advantage here that the tensor cores work in parallel while the rest of the GPU continues rendering. Thanks to various optimizations, the additional latency contribution of DLSS inference is now relatively low compared to the cost of doubling the raster resolution. AMD also uses dedicated AI units with RDNA 4, which should work similarly in parallel. The additional effort for ML upscaling is therefore also less than what a higher native resolution would cost. However, caution is advised in combination with frame generation. In their documentation, both manufacturers recommend reaching a certain base frame rate before frame generation is switched on, in order to keep input latency acceptable and motion fluid.
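This base-frame-rate guidance amounts to a simple gate. The 60 fps threshold below is an assumption for illustration; the vendors' documented recommendations vary by title and mode:

```python
# Illustrative gate mirroring the vendors' guidance: only enable frame
# generation once the base frame rate is high enough, so interpolation
# does not amplify input latency. The threshold is an assumption.
MIN_BASE_FPS = 60.0

def frame_generation_allowed(frame_time_ms: float) -> bool:
    base_fps = 1000.0 / frame_time_ms
    return base_fps >= MIN_BASE_FPS

print(frame_generation_allowed(12.0))  # ~83 fps base -> True
print(frame_generation_allowed(25.0))  # 40 fps base -> False
```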
The difference here is more practical and historical. DLSS has been optimized over several generations for latency and interaction with Reflex and similar technologies. Redstone is just beginning this cycle. Technically, the foundation is now in place at AMD, but it will take several iterations and rounds of feedback with developers before the interplay of ML upscaling, frame generation and anti-lag mechanisms looks as mature as DLSS does in many titles today.
Platform openness and strategic positioning
Despite all the convergence, one key difference remains. NVIDIA regards DLSS as a premium function of its own ecosystem. The models are proprietary, run exclusively on RTX hardware and are closely intertwined with NVIDIA’s proprietary toolchains and SDKs. This is logical from NVIDIA’s point of view, but restricts the user base to its own platform. AMD is basically trying to take a more open approach with FSR. Formally, FSR is openly documented as a technology and is GPU-agnostic in principle. In practice, however, Redstone’s ML upscaling is limited to current Radeon GPUs, as this is the only way to achieve the required inference performance economically. The difference is therefore less technical than strategic.
FSR Redstone can theoretically be designed to run on other platforms in the long term, be it on older GPUs for testing purposes, on consoles or even on competing hardware as soon as a suitable AI infrastructure is available. DLSS is unlikely to go down this route because it is central to the RTX positioning. For users, this means that DLSS is the more mature and widespread ML upscaling system today, while FSR Redstone emerges as a lagging but more open alternative that has the potential to play a role in future generations beyond the Radeon world, if AMD does indeed go down this path.
Evaluation of the two ML upscaling approaches
FSR Redstone’s ML upscaling closes the gap that FSR has had over DLSS in image reconstruction for years. The two systems are now very similar in terms of input, the basic idea of temporal reconstruction and the claim to deliver more detail than was originally rendered. NVIDIA retains the lead in terms of maturity, ecosystem and deep integration into engines and toolchains. DLSS has been productively tested for several generations and covers a wide range of scenarios.
With Redstone, AMD brings the advantage of a fresh, consistent approach conceived as a suite, in which ML upscaling, frame generation, ray regeneration and, in the future, radiance caching have been planned together. The system is younger, but has a more coherent design and is fundamentally more open.
Technically, the two worlds are therefore no longer fundamentally different, but rather two variants of the same idea. The competition is increasingly shifting away from the question of whether ML upscaling works to the question of who can integrate it more consistently, broadly and efficiently in everyday life.
- 1 - Introduction, three looks back and one forward
- 2 - ML Radiance Caching in Detail
- 3 - ML Ray Regeneration in Detail
- 4 - ML Upscaling in Detail
- 5 - ML Frame Generation in Detail
- 6 - Activation of FSR 4 in-game or in the Adrenalin driver
- 7 - Benchmarks and Metrics
- 8 - Quality Comparison and Conclusion