Here’s how the new RTX 50-series cards perform against the previous generation of GeForce GPUs

When the RTX Blackwell GPUs were first announced in CEO Jen-Hsun Huang’s keynote speech at CES 2025, the big news was that the RTX 5070 could offer the same level of gaming performance as the RTX 4090. That’s a new $549 GPU delivering frame rates similar to a $1,600 card from the last generation.

It then became clear that such performance claims came with the big caveat that DLSS 4 and Multi Frame Generation had to be enabled for the RTX 5070 to deliver the same frame rates as an RTX 4090, and suddenly gen-on-gen performance became a hot topic.

Now the Nvidia RTX Blackwell Editor’s Day embargo is up, we’re allowed to talk about the gen-on-gen performance numbers the company provided during a talk on the new RTX 50-series cards by Justin Walker, Nvidia’s senior director of product management. And if you held any illusions the raw silicon performance jump might be in some way similar to what we saw shifting from Ampere to Ada, I’m afraid I’m probably about to disappoint you.

You can see in the slides below where the four new Blackwell RTX GPUs stand, both in terms of straight gen-on-gen performance and in terms of what they look like once the Multi Frame Generation AI muscle of DLSS 4 is factored in.

As you’d expect, the RTX 5090, with its healthy dose of additional TSMC 4N CUDA cores, comes off best of all the new cards. You get a straight performance uplift of around 30% over the RTX 4090 when you take Frame Gen out of the equation, and a doubling of performance when you factor it back in.

Still, that’s a real performance gain over the last generation. However, it has to be said the card is significantly more expensive and doesn’t come close to the improvement we saw going from the RTX 3090 to the RTX 4090. I’ve just checked again on our updated graphics card test rig, with the mighty AMD Ryzen 7 9800X3D at its heart, and the Ada card delivered around an 80% gen-on-gen uplift without touching DLSS or Frame Gen.

The weakest of the four is the RTX 5080. Sure, its $999 price is lower than the RTX 4080’s $1,200 and matches the RTX 4080 Super, but you’re only looking at a gen-on-gen frame rate increase of around 15% over the RTX 4080. For reference, the jump from the RTX 3080 to the RTX 4080 represented a speedup of approximately 60% at 4K. Even with Multi Frame Generation enabled, the RTX 5080 fails to double the performance of the card it replaces.

Then we come to the RTX 5070 cards and their own ~20% performance bumps. And, you know, I’m kinda okay with that, especially for the $549 RTX 5070. I’d argue that in the mid-range the benefits of MFG are going to be more apparent, and feel more significant. Sure, that 12 GB VRAM number is going to bug people, but with the new Frame Gen model demanding some 30% less memory to do its job, that’s going to help. And then the new DLSS 4 transformer model boosts image quality, so you could potentially drop down a DLSS tier, too, and still get the same level of fidelity.

So, why aren’t we getting the same level of performance boost we saw from the shift from the RTX 30-series to the RTX 40-series? For one thing, the RTX 40-series cards were almost universally more expensive than the ones they replaced, and that’s not a sustainable model for anyone, not even a trillion-dollar company.

Basically, silicon is expensive, especially when you’re trying to wring serious extra performance out of the same production node and essentially the same transistors. What I heard from the Nvidia people over the last week was: “You know Moore’s Law is dead, right?” And that wasn’t coming from some squirrel-faced YouTuber, either.

The original economic ‘law’ first proposed by Intel’s Gordon Moore stated that the falling cost of transistors would lead to a doubling of transistor density in integrated circuits every year, later revised down to every two years. But it no longer really works that way, as the rising cost of transistors on advanced nodes has countered that positive progression.
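For a concrete sense of what that original cadence implies, here’s a minimal back-of-the-envelope sketch in Python. The two-year doubling period is from Moore’s revised formulation; the baseline density figure is purely illustrative, not a real process-node number:

```python
# Minimal sketch of Moore's Law as an exponential: density doubles
# every `doubling_period` years. The baseline figure is illustrative only.

def projected_density(base_density: float, years: float,
                      doubling_period: float = 2.0) -> float:
    """Transistor density after `years`, doubling every `doubling_period` years."""
    return base_density * 2 ** (years / doubling_period)

# A hypothetical 1M transistors/mm^2 process would project to 32x that
# density a decade later (2**(10/2) = 32) -- if cost per transistor kept
# falling, which is exactly the part that no longer holds.
print(projected_density(1e6, 10))  # 32000000.0
```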

This means that, even if it were physically viable, it’s financially prohibitive to cram a ton of extra transistors into your next-gen GPU while still hitting a price that doesn’t make your customers wince. And neither Nvidia nor AMD wants to cut its margins or average selling prices in the face of investors who couldn’t care less about hitting 240 fps in your favorite games.

So what do you do? You find other ways to deliver consistent performance gains. You lean on your decades of work in AI to find ways to generate more frames per second from essentially the same silicon. That’s what Nvidia has done here, and it’s hard to argue against it. We’d all have been up in arms if it hadn’t pulled the AI levers and had simply built bigger, far more expensive GPUs, making us pay through the nose for the privilege of frame rates maybe 20% higher than what we already have.

Instead, whatever you think about ‘fake frames,’ we can now use AI to generate 15 out of 16 pixels with DLSS 4 and Multi Frame Generation, and end up with a $549 graphics card that can give us triple-figure frame rates at 1440p in all the latest games. When you get a smooth, artifact-free gaming experience, how concerned are you about which pixel is generated and which is rendered?
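For what it’s worth, that 15-out-of-16 figure falls out of some simple arithmetic. Here’s a quick sketch, assuming DLSS Performance mode upscales from quarter resolution and that 4x Multi Frame Generation traditionally renders only one in every four displayed frames; those assumptions are mine, not spelled out in Nvidia’s materials:

```python
# Rough arithmetic behind the "15 out of 16 pixels" claim.
# Assumption 1: DLSS Performance mode renders 1/4 of the output pixels.
# Assumption 2: 4x Multi Frame Generation renders 1 in 4 displayed frames.

pixels_per_rendered_frame = 1 / 4   # upscaling from quarter resolution
rendered_frame_fraction = 1 / 4     # 3 of every 4 frames are AI-generated

rendered = pixels_per_rendered_frame * rendered_frame_fraction  # 1/16
generated = 1 - rendered                                        # 15/16

print(f"Traditionally rendered: {rendered:.4f} (1/16 of displayed pixels)")
print(f"AI-generated: {generated:.4f} (15/16 of displayed pixels)")
```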

Maybe I sound like an Nvidia apologist right now, and maybe the bottomless coffee at the Editor’s Day was actually the Kool-Aid, but I can see the problem. Hardware is hard, and we have a new AI-powered way to tackle the problem, so why not use it? A significant process node shrink, down to 2N or the like, could help in the next generation, but until we get functional, consumer-level chiplet GPUs, another huge leap in raw silicon performance is too costly to be likely any time soon.

So, once again, I, for one, welcome our new AI overlords.

Dave has been gaming since the days of Zaxxon and Lady Bug on the Colecovision, and the code books for the Commodore Vic 20 (Death Race 2000!). He built his first gaming PC at age 16 and finally finished bug-fixing the Cyrix-based system around a year later, when he dropped it out of the window. He started writing for Official PlayStation Magazine and Xbox World decades ago, then moved to PC full-time, writing for PC Gamer, TechRadar, and T3 among others. Now he’s back, writing about the nightmare graphics card market, CPUs with more cores than sense, gaming laptops hotter than the sun, and SSDs more capacious than a Cybertruck.
