Whether you love it or hate it, frame generation is here to stay. Nvidia’s DLSS Frame Generation, specifically, is the current flagship solution to this self-imposed issue, and it’s time to dispel all myths about it once and for all. It’s not perfect, but it does have a use case.
Crucially, the thing to understand about frame generation as a concept is that it’s not a baseline performance uplift as such. The fact that, in Nvidia’s case, it comes bundled under the Deep-Learning Super Sampling (DLSS) umbrella should be telling, as the company’s upscaler, too, was not initially designed to be a performance assist. At least, not in the sense that it’s being used nowadays. Let’s start from the beginning, though.
What is Frame Generation?
Frame generation, and the “branded” RTX Frame Generation from Nvidia, specifically, is a game rendering technique that inserts “fake” AI-generated frames in between “real” conventionally generated frames of gameplay. The end result is a massively improved perceived smoothness of the image at the cost of some added latency.
Now, when I say “massively improved”, note that this comes with a substantial caveat that Nvidia goes out of its way to downplay, and that frame generation naysayers repeat ad nauseam. Frame generation, as a rule, produces visual artifacts, and the input latency it introduces to gameplay is not a trivial problem. The more frames your game can generate with FG off, the less artifacting you will get with FG on. So, it really behaves precisely the same way as all modern upscalers do: the better your baseline frame rate or resolution, the better your generated frames or upscaled image will look and feel. Some features, like Nvidia’s Reflex or AMD’s Radeon Anti-Lag, claw back some of that latency while FG is enabled, but that’s a different story altogether.
In the simplest terms possible, frame generation takes two “real” frames and then uses them to approximate a “fake” frame to insert between them. If that sounds temporally weird in the context of a video game, you’re on the right track: frame generation needs to essentially hold a frame hostage until the interim “fake” frame is generated. It then shows you the “fake” generated frame followed by the “real” frame. And so it goes. This happens exceedingly quickly, of course, but it’s where the added latency comes from, and why it’s not technically wrong to say that turning on frame-gen is a net negative for responsiveness.
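To put rough numbers on that held-back frame, here’s a minimal back-of-the-envelope sketch. It assumes the only added latency is the one native frame time spent holding a real frame while the in-between frame is generated; actual pipelines add generation and frame-pacing overhead on top, so treat these figures as a floor, not a measurement.

```python
# Back-of-the-envelope model of frame-interpolation latency.
# Assumption: the generator holds exactly one "real" frame back, so the
# added latency is roughly one native frame time. Generation cost and
# pacing overhead are ignored; this is an illustration, not a spec.

def frame_time_ms(fps: float) -> float:
    """Time between frames at a given frame rate, in milliseconds."""
    return 1000.0 / fps

def added_latency_ms(base_fps: float) -> float:
    """Approximate extra latency from holding back one real frame."""
    return frame_time_ms(base_fps)

for base in (30, 60, 120):
    print(f"{base:>3} FPS base -> ~{added_latency_ms(base):.1f} ms of added latency")
```

The takeaway matches the article’s rule of thumb: at a 30 FPS base you’re holding frames for roughly 33 ms, while at 120 FPS the same mechanism costs closer to 8 ms.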
Yet, the kicker is that running the game at a high enough frame rate (more on that later) and then turning on frame-gen leads to a manageable level of input latency. Further, if your output frame rate is high enough, it’s going to be exceedingly difficult to see any artifacting brought about by frame-gen. The more frames you generate, the shorter the time they’re displayed on your screen, and so the whole thing looks much smoother and with virtually no perceptible artifacting.
Frame-gen can also side-step problems with CPU overhead. If you’re CPU bottlenecked in a given video game, your frame-gen capable GPU might be able to leverage frame-gen technology to overcome this bottleneck and straight-up generate more performance for you on the go. Since not many games are CPU bottlenecked nowadays, however, this isn’t a very common use case.
Problems with Frame Generation
The boons of frame generation are very contextual, you see. I do not recommend using it if your GPU can’t crank out at least 60 FPS in a given game, and ideally a fair bit more than that for optimal performance. The slower your game runs, the longer “fake” frames stick around on your display, which in turn makes problems far easier to spot. To say nothing of the egregiously high input latency that comes part and parcel with this setup.
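The “fake frames stick around longer” point can be illustrated with a little display-time arithmetic. This sketch assumes 2x frame generation (one generated frame per real frame) and perfectly even frame pacing, which real games don’t achieve; it only shows the trend, not exact on-screen behavior.

```python
# How long each displayed frame (real or generated) stays on screen,
# assuming 2x frame generation and perfectly even frame pacing.
def visible_ms(base_fps: float, generated_per_real: int = 1) -> float:
    """Display time per frame once generated frames are interleaved."""
    output_fps = base_fps * (generated_per_real + 1)
    return 1000.0 / output_fps

for base in (30, 60, 90):
    print(f"{base} FPS base -> {base * 2} FPS output, "
          f"each frame visible for ~{visible_ms(base):.1f} ms")
```

At a 30 FPS base, every generated frame lingers for nearly 17 ms, which gives your eye plenty of time to catch artifacts; from a 90 FPS base, each frame flashes by in under 6 ms.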
What this means in practice is that, in much the same way as DLSS once looked awful at 1080p, so too is frame generation an extremely shoddy stand-in for good performance.
Nvidia’s new DLSS 4 Multi-Frame Generation can deliver mind-boggling performance improvements if you’re happy with the way it’s being delivered. Instead of just interjecting one generated frame between two “real” frames, it’s now going to be able to add two or even three extra frames in between. This, Nvidia claims, comes about without adding even more input latency to the process, so it shouldn’t feel any worse than RTX Frame Generation already does.
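The multiplier arithmetic behind those headline numbers is straightforward. The sketch below assumes each generated frame adds to the displayed rate with zero overhead, so N extra frames per real frame yields (N + 1) times the base rate; real-world results will land somewhat lower.

```python
# Idealized output rates for multi-frame generation: N generated frames
# per real frame multiply the displayed rate by (N + 1). Assumes zero
# generation overhead, so real-world numbers will come in lower.
def output_fps(base_fps: float, generated_per_real: int) -> float:
    return base_fps * (generated_per_real + 1)

base = 60
for n in (1, 2, 3):  # the 2x, 3x, and 4x modes
    print(f"{n} generated frame(s): {base} FPS -> {output_fps(base, n):.0f} FPS")
```

Note that the base rate, not the output rate, still determines input latency: quadrupling 60 FPS to 240 FPS doesn’t make the game respond like a native 240 FPS title.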
Yet, there’s no way to avoid the increase in input latency. If you’re sensitive to it and want to go back to the good old days of single-digit latency, frame generation of any sort is not the solution. Instead, it’s an additional way of making ray-traced and path-traced gaming at high resolutions a feasible option for the average gamer. Not a perfect option, mind, but feasible. Really, that’s all it is.
When should I be using frame generation?
This brings me back to my earlier point about the designated use cases of upscalers and frame-generators. DLSS wasn’t conceived as a performance assist for low-spec PCs, as such. Instead, it was Nvidia’s newfangled anti-aliasing solution and a way to make ray-traced visuals performant on RTX 2000 series GPUs. Its reputation as a performance crutch came later, once everyone realized how powerful DLSS (and FSR, and XeSS, and so on) could be in such a situation.
As for frame generation, it was conceived primarily as a way to boost already high frame rates higher still. Ideally, you’re not supposed to use it to make 30 FPS look like it’s running at 43-87 FPS, because the perceived latency is going to be meaningfully worse than it would be with frame generation disabled. Instead, you should be using FG to achieve a rock-solid 120 FPS from a somewhat stable 80 or 90 FPS, just to illustrate a good use case.
The higher your baseline frame rate, the better your input latency with frame generation enabled, and it’s as simple as that. Do use frame-gen to enhance an already performant video game further still. Do not use frame-gen to make a 30 FPS game run at an unstable 45-50 FPS. Note that Nvidia’s RTX 5000 reveal showed Cyberpunk 2077 running at sub-30 FPS with path-tracing enabled at a native 4K resolution on the RTX 5090. The other side of the screen showed it off running at over 200 FPS with fully maxed-out RTX Frame-Gen… but it didn’t take DLSS upscaling into account.
In practice, Nvidia’s Frame-Gen side of the screen didn’t use native 4K for its “real” frames because it’s not nearly performant enough. Instead, some type of DLSS was used to increase the performance baseline, and then the Frame-Gen portion of the equation was enabled.
All of this is to say that you should always, always try to get as solid of a native frame rate as possible before enabling frame generation of any sort. Otherwise, it’s just going to be a mess.
Frame Generation is the future, whether we like it or not
Naturally, I’m not going deep into the nuance of frame-gen here because there’s so much to talk about when it comes to this topic. Yet, the basics should be easier to figure out now that we’ve gone over them.
Frame generation and RTX Frame-Gen, in particular, are here to stay. Nvidia is going all-in on AI features, and it’s becoming increasingly obvious that good old gen-on-gen performance gains are more and more difficult to come by.
If you don’t have a Nvidia card but would like to give proper frame generation a shot, I suggest looking into . This little app can inject all manner of upscaling into games that could never support it otherwise. One of its many features is frame generation, too, and while it’s not nearly as potent as Nvidia’s RTX Frame-Gen, it can come in handy in select use cases. It’s also a reasonable demo of how generated frames behave in gameplay.
Otherwise, I also suggest holding out for solid Nvidia RTX 5000 and Radeon RX 9000 benchmarking from the likes of Digital Foundry. That should supply us with enough hard data to see where this is all going, and whether multi-frame generation is all it’s been amped up to be. Can Nvidia really fight physics with fake frames, though?
Published: Jan 14, 2025 03:59 am