Nvidia GeForce RTX 5070 review: No, it’s not “4090 performance at $549”

sorten

Ars Praetorian
566
Subscriptor++
Wow, really unfortunate to read about the power and heat situation with the new NVidia cards. I play almost all games at 1080p, so I'm more interested in efficient and quiet cards. I'm shopping down at the 5060 price range, but I can only assume that the power and heat trend will apply to the whole line of 50xx cards.
 
Upvote
1 (3 / -2)
I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
You are spot on.

The limits of physics in silicon, and the massive budgets (and layoffs/poor working conditions) associated with making AAA/AA games, no longer push salivating gamers to upgrade their GPUs every 18-36 months. Nvidia and AMD have taken note, and seek to emulate the Apple model, charging buyers 2-3x the price to make up for the slowed pace of consumer demand.
 
Upvote
3 (5 / -2)

matt_w

Ars Scholae Palatinae
1,165
My 3070 is starting to get a bit old; I'll probably replace it with a 9070 XT if I can manage to get one. It will be the first non-Nvidia card I've bought for myself in about 20 years.

I'm also on a 3070 Ti and am considering the 9070 XT. The 3070 Ti is my first Nvidia card ever. I had about five AMD (ATI) cards before that, going back to the 9800. If AMD catches up in RT and FSR4, it's a no-brainer for me.
 
Last edited:
Upvote
0 (2 / -2)

Mopbucket

Smack-Fu Master, in training
3
Seems like it would be a worthy upgrade from my 2070... at €550... but I put the odds of getting one under €1k as basically non-existent, which makes the whole idea moot.
With 12 GB of VRAM it's not worth it. It'll already be struggling in some games at 1440p and it's only going to get worse.
 
Upvote
9 (11 / -2)

MrNorwood

Smack-Fu Master, in training
3
Subscriptor
I have a 6-year-old 2070 and play all games at 60 Hz ultrawide 1440p. I can almost always play at a high enough quality that a further increase in quality is marginal. I can't help but realize I want a 5070 simply due to FOMO and having money to burn. (For example, all I heard about Indiana Jones was how demanding it "is." But I was able to get a perfectly acceptable 60 fps with the original DLSS, even setting it slightly higher than the medium settings it gave me by default, and it was absolutely gorgeous (notably in the opening artifact museum), with amazing shadows and lighting. FOMO! My 2070 would of course be absolutely ridiculed as nearly worthless by the videocardz mafia. Sad.)
The interesting thing to me is, if you were to go out and buy an RTX 2070 today it's still selling for around $700 new.
 
Upvote
5 (5 / 0)

David Mayer

Wise, Aged Ars Veteran
970
I guess for any playback that isn’t responsive to user input. Seems like a post-processing interpolation tech they got to scale to the point of working in ‘real-time’ that turned into a marketing point.

Your brain rebels against the concept because putting it front and center is scummy and moronic.
I guess for any playback that isn’t responsive to user input.
This tech is specifically marketed as a tool to enhance interactive gaming.
 
Upvote
3 (3 / 0)

David Mayer

Wise, Aged Ars Veteran
970
You've realized that it is actually interpolation; the next step is to realize what impact that has on latency. Because it has to wait for 2 native frames before giving 4 frames to the user, it has to add 2 native frames of latency to the output.
What makes you think I haven't realized that?
 
Upvote
-3 (0 / -3)
Only at MSRP. Which we know is a fabrication. The very next chart in that video is comparing market pricing and the 5070 is more than halfway down the list, which is topped by the 7800XT and the 7700XT.

Edit: I was somewhat incorrect; it was two charts ahead, and the 5070 is closer to three-quarters of the way down the list, not halfway. And keep in mind this is comparing current retail pricing in Australia in AUD. But, yeah, the Nvidia cards are nowhere NEAR good value.
QFT.

After years of delaying an upgrade from my GTX 1660 Ti laptop, I found a fantastic deal on an RTX 4080 laptop (basically a desktop RTX 3080/4070 with weaker 1% lows). The new kit performs as expected and meets my needs, but I have been surprised by just how crappy Frame Generation is in non-competitive/single-player games.

Across ~15 offline/single-player games, Frame Generation sucks. This is especially true of two Nvidia showcase titles: Cyberpunk & The Witcher 3. I strongly recommend anyone upgrading ignore Nvidia's marketing hype that encourages you to focus on 4K, ray tracing, and especially Frame Generation. These three are mostly an upsell that preys on your FOMO. That's just my opinion, but in all but a few games, my experience is better with frame rates in the 50-120 range versus enabling Frame Gen and dealing with all sorts of problems. Can I game at 4K? Yes, DLSS is awesome. Does ray tracing look great in some games? Yes, but most of the time the performance hit isn't worth it. Is Frame Generation useful? On rare occasions yes, but mostly it just sucks.

For added perspective that Nvidia (and AMD) are trying to upsell you, let's not forget that Nvidia released its top-of-the-line RTX 2080 & 2080 Ti over six years ago (yes, 6+). Nvidia also kicked off the hype train with its viral SIGGRAPH video of developers erupting into raucous applause nearly 7 years ago. Now, those of you with RTX 2080 Ti/3080/4070 hardware (because they are all roughly equivalent), ask yourself how many games you actually play at 4K with ray tracing. When I ask my friends (many of whom own an RTX 3080 or 4090), only the 4090 owners do both. The 3080 guys almost always prefer higher min/avg fps or else play at 1440/1080p. Steam's hardware survey backs this up, too.

So what is an upgrade-lusting gamer to do? Do not despair, friends! For those of you unaware, it is a glorious time, because nowadays we can have a 4K OLED display for our workflow and then easily switch that delicious screen to half resolution and enjoy the frugal yet glorious fun that is 1080p gaming. THIS fact has, for many people, mooted the need to buy a 4K video card. Screw the GPU duopoly, folks; buy a new monitor instead.

Bottom line: delay upgrading your video card until you improve your display first. High-refresh HDR OLED screens are more useful than letting Nvidia squeeze you into paying $1,000+ for a video card.
 
Last edited:
Upvote
12 (13 / -1)
Seems like everyone involved has an incentive to slow development and maximize distribution of thinner and thinner slices of improvement.

If it’s down to greed vs. the limits of material science, I’m guessing that greed is the factor limiting performance improvements.
Yeah, that's really not the case.

Dennard scaling (which allowed for clock frequency gains and significant power consumption improvements simultaneously) stopped working around 2004. Density improvements continued for another two decades, but density alone only gets you so far. Modern density scaling is not as good as the historical trend, either.

AMD used a chiplet-based design for RDNA3 and... well, it's not clear exactly what happened there, but the fact that RDNA4 is midrange-only with no chiplet version suggests that AMD wasn't well pleased with the results. Chiplet-based designs have worked well in CPUs, where they allow companies to focus on shrinking lithography for the parts that benefit the most, but GPUs need vastly more cross-chiplet bandwidth than CPUs do.

I'm certain that there are more things that can be done to continue pushing rasterization forward -- it is surely no accident that Nvidia is hyper-focused on AI solutions when AI has eaten its entire business -- but I think that if the solution to packing higher RT, rasterization, and AI performance into a single core were easy, someone would've done it by now.

If you compare the rate of improvement in game graphics from 2016 to 2025, it's a lot lower than 2007 to 2016, which in turn was much lower than the gains from 1999 to 2007. The rate of improvement has been slowing for a very long time, and it hasn't been helped by persistently high prices in the GPU market, which effectively slow the rate at which new technologies like RT can even be adopted.
 
Upvote
9 (9 / 0)
I would almost welcome that. Modern games, at least, tend to be graphically extremely impressive but often lacking in so many other areas, it'd be interesting to see which way development would push if graphical improvements were suddenly off the table.

I've seen the argument that games don't have to look better than Red Dead Redemption 2, as in, beyond that there is no point to it, and I absolutely agree with that. As a side note, it feels like every game is chasing "realism," and they all end up looking the same to me. Meanwhile, look at the old The Long Dark: it runs well on really low-end computers but still manages to be just stunning in the right spots.
 
Upvote
-7 (2 / -9)
If I'm reading a review to help inform a purchasing decision, it's when I arrive at "12 GB of VRAM" that I stop reading. Nope. Hard stop.
I've got no problem with 12GB of VRAM at the right price point. The B580 at $250 (if actually available) with 12GB of VRAM seems like a great option.

But I absolutely agree that if Intel can put 12GB of VRAM on a $250 card, it shouldn't be hard for Nvidia to exceed that -- except, of course, if Nvidia puts more VRAM on lower-end GPUs, it also makes them more attractive for AI.

I think it's very convenient for Nvidia that putting more VRAM on its lower-end consumer models might make shortages worse. My problem is, I can see the argument.

DirectStorage ought to offer some help here, since streaming game resources in from an SSD could theoretically reduce pressure on VRAM, but in practice it seems to have made games slower (judging by a handful of articles on the topic).

I've wondered before if the solution might not be a GPU-mounted SSD. Direct-connect it to the GPU with additional PCIe lane allocation on the card. It'd be invisible to the operating system but could operate as a streaming texture cache for games -- and it wouldn't be at all useful for AI.
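Purely as a thought experiment, the lookup path would be something along these lines (the tier names, capacities, and latencies below are invented placeholders, not any real product):

```python
# Conceptual sketch of a tiered texture fetch with a hypothetical card-mounted
# SSD cache sitting between VRAM and system storage. All numbers are made up.
FETCH_COST_MS = {"VRAM": 0.1, "card SSD": 1.0, "system SSD": 10.0}

def fetch(texture, residency):
    """Return (tier, cost_ms) for a texture, then promote it to VRAM, the way
    a streaming cache would pull hot assets toward the fast tier."""
    tier = residency.get(texture, "system SSD")
    cost = FETCH_COST_MS[tier]
    residency[texture] = "VRAM"  # background promotion in a real design
    return tier, cost

residency = {"rock_albedo_4k": "card SSD"}
print(fetch("rock_albedo_4k", residency))  # ('card SSD', 1.0)
print(fetch("rock_albedo_4k", residency))  # ('VRAM', 0.1)
```

The appeal is simply that the middle tier would sit behind the GPU's own extra lanes instead of competing for the system bus.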
 
Upvote
9 (9 / 0)

Voldenuit

Ars Tribunus Angusticlavius
6,533
Wow, really unfortunate to read about the power and heat situation with the new NVidia cards. I play almost all games at 1080p, so I'm more interested in efficient and quiet cards. I'm shopping down at the 5060 price range, but I can only assume that the power and heat trend will apply to the whole line of 50xx cards.
This is an FE problem. GN tested some AIB cards as well, and they were in the ~60°C range vs. the 5070 FE, which was in the 70s.

Of course, power consumption still isn't good this gen, and these cards still aren't good value or a good performance uplift. As Steve Burke alluded to in his 5070 review, "wait for the 9070XT [reviews]".
 
Upvote
1 (3 / -2)

Xyler

Ars Scholae Palatinae
1,162
You are spot on.

The limits of physics in silicon, and the massive budgets (and layoffs/poor working conditions) associated with making AAA/AA games, no longer push salivating gamers to upgrade their GPUs every 18-36 months. Nvidia and AMD have taken note, and seek to emulate the Apple model, charging buyers 2-3x the price to make up for the slowed pace of consumer demand.
TSMC also charges a LOT for its 4nm and 3nm processes, so making GPUs is more expensive than ever... and now with the tariffs, expect them to get even more expensive.
 
Upvote
8 (8 / 0)

OptimusP83

Ars Praefectus
3,659
Subscriptor
With 12 GB of VRAM it's not worth it. It'll already be struggling in some games at 1440p and it's only going to get worse.
This is absolutely true. My 7800XT uses just under 12GB of VRAM running Horizon Zero Dawn at 4K with Quality FSR (so rendered at 1440p and upscaled to 4K). While average framerates might not drop significantly with only 12GB of VRAM, the stuttering and pop-in from having to load new data into VRAM is incredibly annoying. Some games don't even try once VRAM is maxed: Control, Halo Infinite, and others will gladly just load some 16px texture and never even try to load the proper one. So in Control, with my old 6600XT (8GB VRAM), I'd get paintings on the wall that were complete blurs, and regardless of what I did, they'd never load back in after I'd been playing for a few minutes and the VRAM was full. Framerates would be decent, but the textures were just missing.

Now you can blame the game engine or the devs, but ultimately the problem is caused by insufficient VRAM and the solution is more VRAM.
 
Upvote
6 (6 / 0)

Anony Mouse

Smack-Fu Master, in training
77
Subscriptor++
If AMD's new RT architecture can significantly close the gap with Nvidia, this could be a worrisome development for Nvidia.

Ahh, who am I kidding, everyone will just ignore that AMD provides killer price/performance (barring RT, where it's anywhere from terrible to not half bad) and go buy the shit Nvidia is shoveling, because Nvidia is so good at occupying that sweet, sweet real estate in our heads.

Interested to see where the 9070s end up falling once the embargoes are all gone.
Unfortunately for those of us who use our GPUs for games and compute (in my case, CUDA-powered photo editing tools), Nvidia is the only game in town until you can persuade engineers to support AMD cards. So I will buy the shit they produce, though clearly not this pile.
 
Upvote
3 (5 / -2)
This is absolutely true. My 7800XT uses just under 12GB of VRAM running Horizon Zero Dawn at 4k with Quality FSR (so rendered at 1440p and upscaled to 4k). While average framerates might not drop significantly with only 12GB of VRAM, the stuttering and pop-in from having to load new data to VRAM is incredibly annoying. Some games don't even try once VRAM is maxed. Control, Halo Infinite, and others will gladly just load some 16px texture and never even try to load the proper texture. So in Control, with my old 6600XT (8GB VRAM) I'd get paintings on the wall that are complete blurs and regardless of what I do, they'll never load back in after I've been playing for a few minutes and the VRAM is full. Framerates would be decent but the textures were just missing.

Now you can blame the game engine or the devs, but ultimately the problem is caused by insufficient VRAM and the solution is more VRAM.
So, a few things on this:

VRAM usage reporting is complicated. The most common metrics tell you how much memory a card is using, yes, but not how much of that is actually taken up with the textures and data you are currently using. In some cases, a game won't flush VRAM until you hit the next level or area. Some games also don't flush until they get close to VRAM capacity. Behavior varies by title.

Without profiling HZD under those conditions, I have no idea how it behaves -- but having a lot of data loaded into VRAM doesn't mean that the card is actually using it for currently useful work.
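If you want to see where that headline number even comes from, here's a minimal sketch (assuming an Nvidia card and the nvidia-ml-py / pynvml bindings installed). It reports device-wide allocation, not the working set a frame actually touches:

```python
# Minimal sketch: read the card-wide VRAM numbers that most monitoring tools report.
# Assumes an Nvidia GPU and the nvidia-ml-py (pynvml) package is installed.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    # 'used' is memory currently allocated on the device by all processes,
    # not the subset of textures/buffers the current frame actually reads.
    print(f"total:     {mem.total / 2**30:.1f} GiB")
    print(f"allocated: {mem.used / 2**30:.1f} GiB")
    print(f"free:      {mem.free / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```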

Notwithstanding the above, I believe you about the effect you are describing, but that problem is the byproduct of a bad approach to an issue. Whether that's the game engine, in-driver optimizations, or something else, I can't say -- but the solution is "more VRAM" only inasmuch as it allowed you to brute-force the situation in this specific case. I've got a lot of sympathy for that kind of approach -- it works from the consumer side -- but the responsibility is ultimately on devs and/or AMD to fix that kind of problem.

8GB, after all, is scarcely an uncommon amount of VRAM. A lot of the most popular GPUs in the world are still 8GB or less.
 
Upvote
5 (6 / -1)

Dzov

Ars Legatus Legionis
14,753
Subscriptor++
Because the frames you're generating fall between the latest and the previous real frame.
Did you look at his flow chart?
The latest frame is step 2.

Frame gen uses interpolation. From my understanding, the order of Nvidia frame gen is basically this (a rough sketch follows the list):
1. real frame is displayed on screen
2. next real frame is rendered and held in queue
3. 1 to 3 frames are generated between the last real frame and next real frame
4. generated frames are displayed on screen
5. real frame in queue displayed on screen
Repeat
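In code form, it's something like this toy Python timeline (purely illustrative, not Nvidia's implementation; the number of generated frames per real pair below is a made-up parameter):

```python
# Toy timeline of the ordering above -- purely illustrative, not Nvidia's code.
GENERATED_BETWEEN = 2  # hypothetical: two interpolated frames per real pair

def frame_gen_timeline(real_frames):
    """Yield frames in display order: the previous real frame, then the
    frames interpolated between it and the next real frame, which is held
    in a queue until its interpolated frames have been shown."""
    previous = None
    for current in real_frames:          # step 2: next real frame rendered and queued
        if previous is not None:
            yield f"real {previous}"     # steps 1/5: last real frame on screen
            for i in range(1, GENERATED_BETWEEN + 1):
                # steps 3/4: frames generated between 'previous' and 'current'
                yield f"gen {previous}.{i} (between {previous} and {current})"
        previous = current
    if previous is not None:
        yield f"real {previous}"         # final queued real frame

print(list(frame_gen_timeline([1, 2, 3])))
# ['real 1', 'gen 1.1 (between 1 and 2)', 'gen 1.2 (between 1 and 2)',
#  'real 2', 'gen 2.1 (between 2 and 3)', 'gen 2.2 (between 2 and 3)', 'real 3']
```

Real frame 2 can't go out until the generated frames between 1 and 2 have been displayed, which is where the added latency in this scheme comes from.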
 
Upvote
-3 (0 / -3)

eldakka

Ars Tribunus Militum
1,648
Subscriptor
Only at MSRP. Which we know is a fabrication. The very next chart in that video is comparing market pricing and the 5070 is more than halfway down the list, which is topped by the 7800XT and the 7700XT.

Edit: I was somewhat incorrect; it was two charts ahead, and the 5070 is closer to three-quarters of the way down the list, not halfway. And keep in mind this is comparing current retail pricing in Australia in AUD. But, yeah, the Nvidia cards are nowhere NEAR good value.
To be fair, with the incoming US tariffs, those might not be too far off the prices in the US in $US. :rollpoop:
 
Upvote
-2 (0 / -2)
I've got no problem with 12GB of VRAM at the right price point. The B580 at $250 (if actually available) with 12GB of VRAM seems like a great option.

But I absolutely agree that if Intel can put 12GB of VRAM on a $250 card, it shouldn't be hard for Nvidia to exceed that -- except, of course, if Nvidia puts more VRAM on lower-end GPUs, it also makes them more attractive for AI.

I think it's very convenient for Nvidia that putting more VRAM on its lower-end consumer models might make shortages worse. My problem is, I can see the argument.

DirectStorage ought to offer some help here, since streaming game resources in from an SSD could theoretically reduce pressure on VRAM, but in practice it seems to have made games slower (judging by a handful of articles on the topic).

I've wondered before if the solution might not be a GPU-mounted SSD. Direct-connect it to the GPU with additional PCIe lane allocation on the card. It'd be invisible to the operating system but could operate as a streaming texture cache for games -- and it wouldn't be at all useful for AI.
Your solution is marvelous! I'd love such an invention/development.
 
Upvote
-1 (0 / -1)
Seems like NVidia basically bet all of their transistor budget on frame generation for 5xxx and unfortunately it is getting pretty soundly rejected by gamers.
I don't think Nvidia bet all its transistor budget on MFG. I think Nvidia bet its transistor budget and memory bandwidth improvements on AI, and MFG is the best feature Nvidia engineers could come up with (for now) to showcase those AI capabilities in a consumer context.

Nvidia has been pretty clear that gaming is now a side business relative to its AI data center sales. An important, historical side business, sure, but one that's currently like 10% of company revenue.
 
Upvote
8 (8 / 0)

sarusa

Ars Praefectus
3,066
Subscriptor++
Everything I've seen about the 5xxx series so far suggests it's shameless lies from Jensen Huang (because this is now a shameless world, why should he be any better than Elmo or the Angry Toddler?).

Certainly interested in the new AMD stuff, but the last time I had an AMD video card, weird issues in new games caused by a lack of game-side support were very common, and I still hear my friends who own recent AMD video cards complaining about this. Maybe this fiasco will sell enough AMD video cards to get game companies to actually do some testing of their games on AMD cards before they release them?
 
Upvote
-1 (3 / -4)

Baeocystin

Ars Legatus Legionis
17,416
Subscriptor
I have three different machines in my house: one with a 3060 Ti, one with a 3080, and one with a 3090 Ti. There's literally nothing I play where I can't use any of the three and have an excellent experience. It's been a long time since video card upgrades were truly a game-changing experience. The 3090 is noticeably better when I'm doing renders, playing with LLMs, stuff like that. But gaming? Hardly matters anymore, especially if you're playing on a 1080p display.

This is all good news for AMD and Intel's efforts, IMO. I know there's always a segment that wants the best of the best, and I get it. But look how successful the Steam Deck has been, and how limiting its GPU is compared to almost any discrete part! We're well past quality minimums for almost everything.

(Before anyone says anything, these are all hand me downs from clients. I work in IT, and happily accept cast-off parts. My personal budget is actually pretty limited.)
 
Upvote
-3 (2 / -5)

ERIFNOMI

Ars Tribunus Angusticlavius
15,459
Subscriptor++
Did you look at his flow chart?
The latest frame is step 2.
Yes, I know how it works. You're the one not following.

To generate frames between N-1 and N, you have to have both of those frames already rendered. If you display N before you have a chance to generate and display the interpolated frames between N-1 and N, the interpolated frames are now stale and you can't display them. Well, you could display them, but they'd be frames in the past and now out of order. They have to come before N. They are generated frames that try to represent what would have been rendered between N-1 and N.
 
Upvote
1 (1 / 0)

SportivoA

Ars Scholae Palatinae
749
50x0 in a nutshell, but also with driver problems, defective chips, higher power draw.
And sudden changes in legacy code compatibility without pre-announcement.

Plus, the prior generation's mid-to-high-performance chips have already been completely cut from production (the 3050 and 4060 notwithstanding, maybe).
 
Upvote
2 (3 / -1)

Voldenuit

Ars Tribunus Angusticlavius
6,533
Notwithstanding the above, I believe you about the effect you are describing, but that problem is the byproduct of a bad approach to an issue. Whether that's the game engine, in-driver optimizations, or something else, I can't say -- but the solution is "more VRAM" only inasmuch as it allowed you to brute-force the situation in this specific case. I've got a lot of sympathy for that kind of approach -- it works from the consumer side -- but the responsibility is ultimately on devs and/or AMD to fix that kind of problem.

8GB, after all, is scarcely an uncommon amount of VRAM. A lot of the most popular GPUs in the world are still 8GB or less.

A lot of games have followed OP's described behavior in recent years (missing or low-res textures), even on 12 GB cards - Halo Infinite, Forspoken, A Plague Tale: Requiem. There are also games that run out of VRAM at 12 GB - RE4 at launch would crash on 12 GB Nvidia cards with ray tracing enabled because of this, and TLOU had stuttering issues on 12 GB cards at 1440p and above.

The fault may lie with developers, since, as you say, 8 GB cards are still commonplace (especially in laptops). But the game devs either don't care or don't have the bandwidth to optimize, so as consumers, the best way to avoid the problem is to overprovision on VRAM. And in 2025 (really, since 2023 for some games), 16 GB is what you'd want to aim for if you do a lot of AAA gaming.
 
Upvote
6 (6 / 0)
A lot of games have followed OP's described behavior in recent years (missing or low-res textures), even on 12 GB cards - Halo Infinite, Forspoken, A Plague Tale: Requiem. There are also games that run out of VRAM at 12 GB - RE4 at launch would crash on 12 GB Nvidia cards with ray tracing enabled because of this, and TLOU had stuttering issues on 12 GB cards at 1440p and above.

The fault may lie with developers, since, as you say, 8 GB cards are still commonplace (especially in laptops). But the game devs either don't care or don't have the bandwidth to optimize, so as consumers, the best way to avoid the problem is to overprovision on VRAM. And in 2025 (really, since 2023 for some games), 16 GB is what you'd want to aim for if you do a lot of AAA gaming.
I'm a little surprised to hear that there are games that just leave low-rez assets loaded without any real option for the player to do something beyond buying a better GPU. Typically I'd at least expect some messing around in menu options to yield a more satisfactory result.

Your anecdotal results imply that there may even be a 12GB / 16GB variance in RT perf between AMD and NV, if the 9070 XT is good enough at ray tracing to make the situation competitive in the first place.

I agree with you that 16GB should've been standard a long time ago, lower in the stack than it is now, even with the 9070's launch imminent.
 
Upvote
2 (2 / 0)
I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
No. This is entirely down to planned obsolescence, segmentation, and silicon rationing within the tiers below the top.
 
Upvote
0 (1 / -1)

Voldenuit

Ars Tribunus Angusticlavius
6,533
I'm a little surprised to hear that there are games that just leave low-rez assets loaded without any real option for the player to do something beyond buying a better GPU. Typically I'd at least expect some messing around in menu options to yield a more satisfactory result.

Your anecdotal results imply that there may even be a 12GB / 16GB variance in RT perf between AMD and NV, if the 9070 XT is good enough at ray tracing to make the situation competitive in the first place.

I agree with you that 16GB should've been standard a long time ago, lower in the stack than it is now, even with the 9070's launch imminent.
Some games have multiple levels of texture detail loaded into VRAM, which is useful for LOD (you don't need high res textures for faraway objects). When the video card runs out of VRAM, the high res textures tend to get flushed first, because you can load the low res texture onto the same model even when it's close.
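A crude model of that flushing behavior might look like this (no particular engine in mind; the budgets, textures, and mip sizes are made up for illustration):

```python
# Crude model of the behavior described above: when over budget, drop the
# highest-resolution mip levels first, since the low-res copies can still be
# mapped onto nearby objects. Purely illustrative, not any real engine.

def evict_high_res_mips(resident, budget_mb):
    """resident: dict of texture -> list of (mip_level, size_mb), where mip 0
    is the highest resolution. Returns the set of (texture, mip) pairs kept."""
    used = sum(size for mips in resident.values() for _, size in mips)
    # Consider the sharpest mips for eviction first (mip 0 before mip 1, etc.).
    candidates = sorted(
        ((tex, level, size) for tex, mips in resident.items() for level, size in mips),
        key=lambda item: item[1],
    )
    kept = {(tex, level) for tex, level, _ in candidates}
    for tex, level, size in candidates:
        if used <= budget_mb:
            break
        # Never evict a texture's last remaining mip, or it vanishes entirely.
        if sum(1 for t, _ in kept if t == tex) > 1:
            kept.discard((tex, level))
            used -= size
    return kept

resident = {
    "wall_painting": [(0, 180), (1, 45), (2, 12)],
    "floor":         [(0, 120), (1, 30), (2, 8)],
}
print(evict_high_res_mips(resident, budget_mb=150))
# The mip 0 copies go first; only the blurry mip 1/2 versions stay resident.
```

Once the budget forces the mip 0 copies out, the blurry lower mips are all that's left to draw, which matches the missing-texture behavior described earlier in the thread.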

This issue was quite well documented on Forspoken and Halo Infinite. At the time, most people were using 8-10 GB VRAM cards, so most techtubers blamed the devs for poor optimization. However, a lot of these problems went away with 16, 20, and 24 GB cards. That still doesn't fully excuse not optimizing the loading or asset queueing, but Nvidia did not help matters by continuing to push 6 GB and 8 GB cards for the longest time.

EDIT: The issue with popping and loading textures has been around for a long time. Carmack's Rage was infamous for texture pop, and he was never able to solve the problem with the hardware available at the time. This is exacerbated in open world games where the texture count can be very large, and assets are usually streamed in on an as-needed basis.
 
Last edited:
Upvote
3 (3 / 0)

zyzzyplyx

Smack-Fu Master, in training
19
What game were you playing that "struggled" at 1080p Medium on an RTX 3080?

I would typically define struggling as "dips below 30 FPS." I have a hard time believing any game "struggles" by that definition on an RTX 3080 @ 1080p Med, but perhaps you have a higher threshold definition for that.
Indiana Jones and the Great Circle. Turn it past medium with a 3080, and the game tells you you don't have enough VRAM. Even at medium, I had stuttering and occasional dips below 30. It's actually the reason I decided to get a new card. I also see similar uplift in Hogwarts Legacy and Cyberpunk, the other two games that I quit playing due to performance issues on the 3080. I like raytracing, so sue me. Most other stuff was playable, but even simpler indie games would get my card spun up to crazy RPMs at 4k. I prefer not to feel like I'm playing games in a wind tunnel. Now I have a 5080 and all of those problems are gone!

I'm quickly learning that the Ars comment section has little tolerance for positive takes on negative articles about tech companies, so I guess I'll just go back to enjoying my games and you all can keep on hating.
 
Upvote
-1 (7 / -8)