ML-based frame generation is the logical next step in the long-running effort to produce a significantly higher output frame rate from a limited render rate. While classic upscaling increases resolution at a constant frame rate, frame generation shifts the focus to the time axis: the aim is to create additional intermediate frames that look as if the game had actually rendered them. AMD positions ML frame generation in FSR Redstone as an integral part of an end-to-end AI rendering pipeline that builds on ML upscaling, ray regeneration and ML radiance caching. NVIDIA takes a similar approach with DLSS Frame Generation, but has been relying for years on an established infrastructure of Tensor cores and a separate optical flow accelerator. Both manufacturers pursue the same goal, but with different histories and a slightly different focus.
Basic principle of frame generation at AMD and NVIDIA
The basic principle is comparable for both approaches. The game continues to render real frames that serve as reference points. Between two such “real” images, a neural network calculates a synthetic intermediate frame from motion information and image content of the surrounding frames. Output to the monitor then occurs at a higher frequency, because each rendered frame is followed by an interpolated frame, or additional frames are inserted in a specific pattern.
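The interleaving pattern described above can be sketched in a few lines. This is an illustrative toy, not vendor code: frames are reduced to timestamps and the neural interpolation step is a placeholder function.

```python
def interleave_frames(rendered, generate):
    """Insert one synthetic frame between each pair of rendered frames.

    `rendered` is a list of frames; `generate(a, b)` is a stand-in for
    the neural interpolation step between two real frames.
    """
    output = []
    for a, b in zip(rendered, rendered[1:]):
        output.append(a)                  # real frame
        output.append(generate(a, b))     # synthetic intermediate frame
    output.append(rendered[-1])           # last real frame has no successor
    return output

# Toy usage: frames are just timestamps, "generation" is the midpoint.
frames = [0.0, 16.7, 33.3]                # ~60 fps render times in ms
presented = interleave_frames(frames, lambda a, b: (a + b) / 2)
# presented now holds five frames for three rendered ones (~2x output rate)
```

The doubling of the presented rate falls directly out of the structure: every real frame except the last contributes one synthetic successor.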
NVIDIA established this principle with DLSS 3; AMD initially implemented a more heuristic frame generation at the driver level and is now moving with Redstone to a fully fledged ML model that is integrated more deeply into the render path. The decisive point is that the generated frames do not simply fade linearly between two states, but must track and reconstruct the movement of objects in space.
Input data and information sources
Both NVIDIA and AMD use several information channels to generate the synthetic frames. The most important role is played by motion vectors and depth information. The game provides vector fields that describe how pixels move from one frame to the next relative to the camera. There are also depth buffers that encode the spatial position of each pixel in relation to the camera. The color information of the already rendered frames is also available.
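The information channels listed above can be grouped into a single input bundle per generated frame. The following container is purely hypothetical, with illustrative field names rather than an actual SDK API, but it shows the shape of the data both vendors consume:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameGenInputs:
    """Hypothetical bundle of the inputs the article lists."""
    color_prev: np.ndarray     # HxWx3 color of the earlier real frame
    color_next: np.ndarray     # HxWx3 color of the later real frame
    motion: np.ndarray         # HxWx2 engine motion vectors (pixels/frame)
    depth: np.ndarray          # HxW depth buffer (camera-relative)

h, w = 4, 4
inputs = FrameGenInputs(
    color_prev=np.zeros((h, w, 3)),
    color_next=np.ones((h, w, 3)),
    motion=np.zeros((h, w, 2)),
    depth=np.full((h, w), 1.0),
)
```

Everything here is produced by the engine or the previous render stages; the frame-generation network itself adds no new scene knowledge beyond what these buffers carry.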
NVIDIA can additionally draw on a dedicated optical flow accelerator, which generates a vector field from two neighboring frames that describes the optical movement of image structures. This optical flow data is independent of the motion vectors provided by the engine and forms an additional safety net when the game's vectors are imprecise or incomplete.
AMD also receives the motion vectors and depth values from the game with Redstone. However, AMD uses shaders and the new AI units on the GPU for optical motion estimation within the ML model, not a separate hardware block as with NVIDIA. The presentations make it clear that a combination of classic motion vector reprojection and ML-based optical flow processing takes place.
As a result, both systems have comparable input data, but differ in the way they calculate and combine optical motion. NVIDIA benefits from dedicated hardware, AMD from an approach more closely coupled to the rest of the ML pipeline.
Internal model architecture and reconstruction strategy
At its core, ML frame generation is a temporal reconstruction problem. The model sees at least two real frames, knows their motion vectors and depth information and must derive a plausible image state at a point in time in between.
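Why this is a reconstruction problem rather than a blending problem is easy to demonstrate. In the 1D sketch below (illustrative only), a bright dot moves between two real frames: a naive linear blend leaves a double image, while moving the pixel along its motion vector reconstructs it at the midpoint.

```python
import numpy as np

prev = np.zeros(8); prev[1] = 1.0   # dot at x=1 in the earlier frame
nxt  = np.zeros(8); nxt[5]  = 1.0   # dot at x=5 in the later frame

# Naive linear blend: ghosting, half a dot at each position.
blend = 0.5 * prev + 0.5 * nxt

# Motion-compensated reconstruction: advect the dot half a frame forward.
motion = 4                           # motion vector: +4 pixels per frame
reproj = np.zeros(8)
reproj[1 + motion // 2] = prev[1]    # single dot at the midpoint x=3

assert blend[1] == 0.5 and blend[5] == 0.5   # double contour
assert reproj[3] == 1.0                      # correctly placed dot
```

The real networks operate on dense 2D fields with depth and occlusion handling, but the core distinction between fading and tracking is exactly this.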
NVIDIA divides this task into several stages. First, the motion vectors from the game and the optical flow derived by the optical flow accelerator are merged. Based on this, a neural network calculates a motion estimate for each pixel. A color image is then generated that takes these movements into account and synthesizes additional details such as motion blur or temporal smoothing.
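The fusion step can be sketched as a per-pixel weighted blend of the two vector fields. The confidence-based weighting below is an assumption for illustration; in the real pipeline a learned network decides how the sources are combined.

```python
import numpy as np

def fuse_motion(engine_mv, optical_flow, flow_confidence):
    """Per-pixel blend of two HxWx2 motion fields.

    `flow_confidence` (HxW, in [0,1]) is a hypothetical per-pixel weight:
    1.0 trusts the optical flow fully, 0.0 falls back to engine vectors.
    """
    w = flow_confidence[..., None]          # broadcast HxW -> HxWx1
    return w * optical_flow + (1.0 - w) * engine_mv

engine_mv = np.zeros((2, 2, 2))             # engine says: no motion
optical_flow = np.ones((2, 2, 2))           # flow says: one pixel of motion
conf = np.array([[1.0, 0.0],
                 [0.5, 0.5]])               # full / no / half trust in flow
fused = fuse_motion(engine_mv, optical_flow, conf)
```

This also illustrates the redundancy argument from the previous section: where engine vectors are missing or wrong, the confidence can shift toward the optical flow, and vice versa.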
AMD follows a similar structure with Redstone, but shifts parts of the logic into the ML model itself. Redstone frame generation combines motion vectors, depth buffers and the optical flow computed by the network in a common model. The reviewer material explicitly names this network as the central element that interprets the optical motion and determines the color output of the generated frame.
During training, both systems learn from example data how objects must look between two known states. They use the history of many games and scenes to recognize recurring patterns. The decisive difference lies in how the work is distributed between dedicated hardware and the neural network itself, not in the basic concept.
Training basis and data diversity
The training of ML frame generation relies on a large number of example sequences. Both AMD and NVIDIA train their models on curated scenes for which native high-frame-rate reference videos exist. From these, they derive synthetic variants with a lower frame rate, so the model learns what a missing intermediate frame should look like.
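The data construction described above can be sketched as a simple resampling step: drop every other frame of a high-rate reference sequence to simulate the lower render rate, and keep the dropped frame as the ground-truth target. This is purely a data-layout sketch, not a vendor training pipeline.

```python
def make_training_triplets(reference_frames):
    """Yield (frame_before, frame_after, ground_truth_middle) triplets
    from a high-frame-rate reference sequence."""
    triplets = []
    for i in range(0, len(reference_frames) - 2, 2):
        before = reference_frames[i]
        target = reference_frames[i + 1]   # the "missing" intermediate frame
        after  = reference_frames[i + 2]
        triplets.append((before, after, target))
    return triplets

seq = list(range(7))                # stand-in for 7 reference frames
triplets = make_training_triplets(seq)
# triplets: [(0, 2, 1), (2, 4, 3), (4, 6, 5)]
```

The model is then trained to map `(before, after)` plus motion and depth data onto `target`, with the loss measuring how far the synthesized frame deviates from the real one.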
Thanks to its early start with DLSS, NVIDIA has a long experience lead in collecting and processing such data. Many internal and external projects have already been equipped with DLSS variants, which in turn serve as data sources. Integration into NVIDIA's own demos and partner projects allows a targeted selection of scenes that are particularly suitable for training problem cases such as fast camera pans, transparent objects or particle effects.
AMD is building this database with Redstone. The advantage is that AMD can design the training pipeline for upscaling, ray regeneration and frame generation at the same time. This creates a consistent data set in which several AI modules from the same suite benefit from each other. The downside is that NVIDIA’s time advantage remains noticeable in the form of already very mature models, which can be seen particularly in rare special cases such as unusual shader effects or extremely stylized games.
Dealing with artifacts and problem areas
Frame generation is susceptible to typical image errors that do not occur with classic rendered output. These include double contours during fast movements, flickering edges, ghosting around fast-moving objects and unstable positions of UI elements.
NVIDIA uses the optical flow accelerator and its history processing to minimize such artifacts. A particularly critical point is the separation of foreground and background: when an object moves in front of a background, the model must correctly recognize which pixels really belong to the object and which to the background. Errors in this assignment lead to streaks or incorrect contours. Thanks to continuous model optimization and the interplay with engine motion vectors, DLSS has greatly reduced many of these problems over recent generations.
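The foreground/background conflict can be made concrete with a depth test: when two source pixels reproject onto the same target pixel, the one closer to the camera must win, otherwise background colors streak through moving objects. The 1D sketch below is a simplified illustration, not vendor code.

```python
import numpy as np

def reproject_with_depth(colors, depths, offsets, width):
    """Scatter 1D pixels along their motion; nearer pixels win conflicts."""
    out_color = np.zeros(width)
    out_depth = np.full(width, np.inf)
    for x in range(len(colors)):
        tx = x + offsets[x]                    # target position after motion
        if 0 <= tx < width and depths[x] < out_depth[tx]:
            out_depth[tx] = depths[x]          # nearer pixel overwrites
            out_color[tx] = colors[x]
    return out_color

# Pixel 0 (background, depth 10) and pixel 2 (foreground, depth 1)
# both land on target position 2; the foreground pixel must win.
colors  = np.array([0.2, 0.0, 0.9])
depths  = np.array([10.0, 5.0, 1.0])
offsets = np.array([2, 0, 0])
result = reproject_with_depth(colors, depths, offsets, 3)
assert result[2] == 0.9        # foreground color survives the conflict
```

Dropping the depth test in this sketch would let the background pixel overwrite the foreground, which is exactly the streaking failure mode described above.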
AMD addresses the same issues in Redstone. The combination of motion vectors, depth buffers and ML-based optical flow reconstruction makes it possible to trace object edges more cleanly. The Redstone documentation explicitly emphasizes that demanding scenarios such as shadow areas and highly dynamic movements are part of the training focus. Nevertheless, the quality of a frame generation system depends largely on how clean the input data from the engine is. Even a good model can only compensate for poor or incomplete motion vectors to a limited extent.
In practice, this means that NVIDIA often delivers slightly more stable results in engine environments that have been optimized for DLSS for years, while AMD still has to do this fine-tuning work with Redstone. Both systems show impressively few artifacts in well-integrated engines, but visibly reach their limits in exotic effect scenes.
Latency, input lag and game feel
A frequently voiced concern in connection with frame generation is latency. As the screen displays more frames per second than the game calculates internally, there is a risk that inputs will feel delayed.
NVIDIA has addressed this problem with a clear approach. Although frame generation produces additional frames, the input logic remains tied to the “real” game frames: the input path always refers to the last rendered frame, not to the synthetic intermediate frame. In addition, measures such as Reflex reduce input latency further. In practice, the perceived delay is higher than without frame generation, but significantly lower than with pure refresh-rate doubling, for example through VSync or other linear methods.
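The latency cost can be estimated with a rough model: interpolation needs both neighboring real frames, so presentation is delayed by roughly half a render interval plus the cost of generating the intermediate frame. The numbers and the model itself are illustrative assumptions, not vendor measurements.

```python
def added_latency_ms(render_fps, generation_cost_ms):
    """Rough additional display latency from interpolation-based
    frame generation: half a render interval (waiting for the next
    real frame) plus the generation cost itself."""
    render_interval = 1000.0 / render_fps
    return render_interval / 2 + generation_cost_ms

# At a 60 fps render base with a hypothetical 3 ms generation cost:
extra = added_latency_ms(60, 3.0)   # roughly 11.3 ms on top of the base path
```

The same model also shows why a minimum base frame rate matters: at a 30 fps base the half-interval term alone grows to about 16.7 ms, which is why both vendors pair frame generation with latency-reduction features.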
AMD follows a very similar philosophy with Redstone. The synthetic frames primarily serve to improve motion perception, not to increase the logically processed simulation frequency. Anti-Lag mechanisms are intended to keep the input path to the simulation as short as possible. The actual render chain gains additional processing steps through ML upscaling and frame generation, whose latency the new AI units are meant to minimize.
In a direct comparison, NVIDIA has the advantage that DLSS Frame Generation has already been proven in many titles and that clear recommendations exist for the minimum FPS before activation. AMD will have to communicate similar limits, since Redstone also requires a minimum base of natively rendered FPS to prevent the gaming experience from leaning too heavily on interpolated frames.
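A minimum-base-FPS rule of the kind described above amounts to a simple activation guard. The 60 fps threshold below mirrors commonly circulated recommendations and is an assumption for illustration, not an official AMD or NVIDIA figure.

```python
def should_enable_frame_generation(native_fps, min_base_fps=60):
    """Only enable frame generation once the natively rendered frame
    rate reaches a minimum base (threshold is an assumed example)."""
    return native_fps >= min_base_fps

assert should_enable_frame_generation(72) is True   # healthy base rate
assert should_enable_frame_generation(45) is False  # too few real frames
```

Below the threshold, too large a share of the presented frames would be interpolated, and both latency and artifact visibility worsen noticeably.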
Depth of integration in games and engines
A key difference between AMD and NVIDIA lies in the integration history. DLSS Frame Generation was integrated early on in large engines such as Unreal and in numerous AAA titles. The toolchain and documentation are designed to incorporate DLSS relatively early in the development cycles. As a result, many of the typical stumbling blocks have already been identified and addressed in earlier projects.
AMD still has to establish a similar depth of integration with Redstone. While FSR is already present in many games, ML frame generation with Redstone demands further adaptation, because it needs additional input data and cleanly prepared motion vectors that exactly match the model's requirements. The advantage of FSR is that it is platform-agnostic and designed with multiple architectures in mind. However, the ML variant of Redstone is initially tailored to Radeon RX 9000, which shifts the focus back to a defined hardware base.
In the long term, the decisive factor will be how early Redstone is integrated into development processes. Whether it becomes a standard component of many engines, like DLSS, or remains an optional feature in selected projects will largely determine its practical relevance.
Interaction with ML upscaling and ray regeneration
One of the biggest advantages of ML Frame Generation in Redstone is its integration into a larger AI pipeline. It works on images that have already been reconstructed by ML Upscaling and freed from ray tracing noise by Ray Regeneration. This means that the starting point for frame generation is cleaner and more detailed than would often be the case with purely classic rendering. NVIDIA pursues a similar idea with DLSS and Ray Reconstruction. Here, a path tracing signal is first improved by an ML model and then scaled up before the frame generation works on the basis of these optimized frames.
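The pipeline ordering described above can be sketched as function composition: frame generation consumes frames that earlier ML stages have already cleaned up. The stage names follow the article; the implementations are placeholders that only trace the data flow.

```python
def ray_regeneration(frame):   # stand-in: denoise the ray-traced signal
    return f"denoised({frame})"

def ml_upscale(frame):         # stand-in: reconstruct at target resolution
    return f"upscaled({frame})"

def frame_generation(a, b):    # stand-in: synthesize the in-between frame
    return f"interpolated({a},{b})"

def pipeline(raw_a, raw_b):
    """Denoise, then upscale, then interpolate between the results."""
    a = ml_upscale(ray_regeneration(raw_a))
    b = ml_upscale(ray_regeneration(raw_b))
    return [a, frame_generation(a, b), b]

out = pipeline("f0", "f1")
# out[1] records that interpolation ran last, on upscaled, denoised frames
```

The ordering is the point: because interpolation sits at the end, any noise or aliasing the earlier stages fail to remove is baked into every synthetic frame as well.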
In both cases, there is a synergetic effect. The better the quality of the input frames, the easier it is for the frame generation to create plausible intermediate frames. The difference is that AMD has redefined all these modules as a coherent package with Redstone, while NVIDIA has carried out this evolution step by step over several DLSS generations.
Comparative assessment of ML frame generation
For both manufacturers, ML frame generation is not a gimmick but a central building block for meeting future graphics demands. Ever higher display resolutions and more intensive ray tracing are difficult to combine with high frame rates without such technologies. With DLSS Frame Generation, NVIDIA currently has the advantage of a longer maturation process, broadly established engine integration and dedicated hardware units optimized over several product generations. In many current titles, DLSS Frame Generation appears mature, even if artifacts in difficult scenes have not completely disappeared.
AMD is catching up technologically with Redstone, bringing an ML frame generation that points conceptually in the same direction but is more deeply embedded in a comprehensive ML suite. Success will depend on how consistently developers implement the necessary engine adjustments and how quickly AMD incorporates field feedback into model updates and SDK versions.
From a technical perspective, there are now more similarities than differences between the approaches of AMD and NVIDIA. Both use ML models, motion vectors, depth buffers and temporal histories to generate intermediate frames. Differences arise from hardware architecture, integration history and the question of how aggressively the models were trimmed for certain game scenes. As a result, the discussion is increasingly shifting from the theoretical ability to frame generation to the practical question of which ecosystem implements this consistently in more titles and with fewer side effects.
- 1 - Introduction, three looks back and one forward
- 2 - ML Radiance Caching in Detail
- 3 - ML Ray Regeneration in Detail
- 4 - ML Upscaling in Detail
- 5 - ML Frame Generation in Detail
- 6 - Activation of FSR 4 in-game or in the Adrenalin drivers
- 7 - Benchmarks and Metrics
- 8 - Quality Comparison and Conclusion