Nvidia GeForce RTX 5070 review: No, it’s not “4090 performance at $549”

Rob_Arctor

Seniorius Lurkius
7
Can you please link the relevant timestamp? I'm not going through a 30-minute video for this.
Took a quick skim, from your video, a graphic showing extrapolation:
View attachment 104188
The timestamp is 12:47; the marketing is intentionally deceptive. Jensen claims on stage that it can predict the future.

In a nutshell, the second frame is rendered and held for a bit while the frame gen tech does its thing.
 
Last edited:
Upvote
12 (12 / 0)

David Mayer

Wise, Aged Ars Veteran
1,052
The timestamp is 12:47; the marketing is intentionally deceptive. Jensen claims on stage that it can predict the future.
Thanks. This is really quite absurd; the tech to "predict the future" actually does exist and is used now for VR. Why TF are they deliberately adding latency?
 
Upvote
3 (5 / -2)

SNESChalmers

Smack-Fu Master, in training
53
Pedantry against the trend: these aren't "interpolated frames," they're extrapolated frames. Interpolation draws from data on both sides of the created data point (the frame or frames before and after), while extrapolation draws from data on only one side (the frame or frames before or after), not both. This tech extrapolates, since it completes generation of the new frame(s) before subsequent ones exist.

All that said, I'm late to the party and there's no hope of overriding this linguistic trend, so, sucks to be a pedant I guess.
Frame gen is using interpolation. From my understanding, the order of Nvidia frame gen is basically this (rough sketch in code below):
1. real frame is displayed on screen
2. next real frame is rendered and held in queue
3. 1 to 3 frames are generated between the last real frame and next real frame
4. generated frames are displayed on screen
5. real frame in queue displayed on screen
Repeat
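Purely illustrative, here is that ordering as a toy Python sketch. This is not Nvidia's actual code; the render/interpolate helpers and the three-generated-frames-per-pair count are made-up stand-ins. It just shows why the next real frame has to be rendered and held before any generated frame can go out.

from collections import deque

def render(frame_id):
    """Placeholder for the real renderer: returns a 'native' frame."""
    return f"native#{frame_id}"

def interpolate(prev_frame, next_frame, count):
    """Placeholder for the frame-gen model: makes `count` frames between two natives."""
    return [f"generated({prev_frame}->{next_frame})#{i + 1}" for i in range(count)]

def framegen_display_order(num_native_frames, generated_per_pair=3):
    """Yields frames in the display order listed above: real frame,
    then the generated in-betweens, then the next real frame that was held."""
    queue = deque()
    prev = render(0)
    queue.append(prev)                    # 1. real frame is displayed
    for frame_id in range(1, num_native_frames):
        nxt = render(frame_id)            # 2. next real frame is rendered and HELD
        for g in interpolate(prev, nxt, generated_per_pair):
            queue.append(g)               # 3-4. generated frames are displayed
        queue.append(nxt)                 # 5. the held real frame is displayed
        prev = nxt
    while queue:
        yield queue.popleft()

if __name__ == "__main__":
    for frame in framegen_display_order(num_native_frames=3):
        print(frame)

Running it prints native#0, three generated frames, native#1, three more generated frames, then native#2, which is exactly the hold-then-release pattern people object to.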
 
Upvote
19 (19 / 0)
Can you please link the relevant timestamp? I'm not going through a 30-minute video for this.
Edit1: Took a quick skim, from your video, a graphic showing extrapolation:
View attachment 104188


Edit2: Why TF would you interpolate anyway? That would deliberately add latency.
"Adding latency" is exactly what it does, which is why Nvidia Reflex needs to be enabled to reduce it again... It's why a lot of people don't like framegen. Digital Foundary goes into a lot of detail about this in several videos, but here's their review of the 5070. The image above (like many Nvidia presentations) is very misleading. What it should show at the end is "Rendered Frame 2." It IS interpolating between frames, NOT extrapolating and it adds significant latency - upwards of 60ms in some cases.
 
Upvote
21 (22 / -1)

Rob_Arctor

Seniorius Lurkius
7
Can you please link the relevant timestamp? I'm not going through a 30-minute video for this.
Edit1: Took a quick skim, from your video, a graphic showing extrapolation:
View attachment 104188


Edit2: Why TF would you interpolate anyway? That would deliberately add latency.

Edit3: This was driving me a little nuts. I found the timestamp, 12:48; thank goodness for searchable transcripts.



In the future please include a timestamp and a quote in your video citations.
Thanks for the kind advice.
In the future before ranting about linguistic trends please make sure you're factually correct.
 
Upvote
7 (14 / -7)
As someone who has been out of the gaming PC building arena since the 5700-era, I can't believe how shitty the market is now. Not just for PCs, but also consoles. I can't believe you need a PS5 Pro to run MH: Wilds at a capped 60 fps using FSR1. What the hell happened?
It looks like it's in a pretty miserable state. I wouldn't necessarily extrapolate from any single game to larger trends about the industry.
 
Upvote
1 (1 / 0)

ranthog

Ars Tribunus Angusticlavius
13,648
I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
I would guess that we're more so seeing relatively linear improvements in graphics technology, while the tasks they're running up against are scaling exponentially. Especially when you start looking at things like ray tracing.
 
Upvote
4 (4 / 0)

Voldenuit

Ars Tribunus Angusticlavius
6,545
Pedantry against the trend: these aren't "interpolated frames," they're extrapolated frames. Interpolation draws from data on both sides of the created data point (the frame or frames before and after), while extrapolation draws from data on only one side (the frame or frames before or after), not both. This tech extrapolates, since it completes generation of the new frame(s) before subsequent ones exist.

All that said, I'm late to the party and there's no hope of overriding this linguistic trend, so, sucks to be a pedant I guess.

Edit: I'm wrong, and Nvidia is actually interpolating. According to this video at 12:58, they are analyzing two frames and generating frames in between them. I made the mistake because I assumed they wouldn't just throw away latency to get these results.
Yeah, it's a misconception nvidia purposefully cultivated, to make the idea of frame generation seem more appealing.

Jensen, at CES: "it [the 5090] can predict the future".

I mean, I can predict the future too, just give me a one second latency.
 
Upvote
11 (12 / -1)

robco

Ars Scholae Palatinae
780
Subscriptor++
I was going to say that I hope Nvidia is ready to mend fences with the gaming community after the AI bubble bursts, but it looks like the economy (at least in the US) is headed for a major recession if not a depression. The cards become more expensive and more power-hungry every year. AAA gaming is becoming a rather expensive hobby.
 
Upvote
-3 (2 / -5)

ranthog

Ars Tribunus Angusticlavius
13,648
"Adding latency" is exactly what it does, which is why Nvidia Reflex needs to be enabled to reduce it again... It's why a lot of people don't like framegen. Digital Foundary goes into a lot of detail about this in several videos, but here's their review of the 5070. The image above (like many Nvidia presentations) is very misleading. What it should show at the end is "Rendered Frame 2." It IS interpolating between frames, NOT extrapolating and it adds significant latency - upwards of 60ms in some cases.
To be fair, it still makes sense as an option. Maybe that additional latency is a problem in certain games, but it isn't in all games.
 
Upvote
-1 (2 / -3)
While the stock market would scream and howl at that level of honesty and candor, it'd feel a lot more fair if Nvidia, if not saying the words, would at least price accordingly for the real-world market. Instead we get patented frame-smearing technology and are told it's a mid-range powerhouse.

Edit: typos, clarity
Nobody is saying those words because the long-term hope and goal is that AI will improve. I don't think it's an accident that everyone is leaning on AI as a method of improving visual quality. It's an effort to return to the era when GPU gains were faster and you could get more immediate improvements.

Now, of course you are right that there are game-specific tradeoffs and visual errata for those improvements that may not exist otherwise. But DLSS and even FSR3 have advanced to the point that they're a useful way to boost performance on low-end systems, even if they come with some visual tradeoffs.

Some technologies, like MFG, really rely on high frame rates to start with. Others, like DLSS / FSR, can help boost lower frame rates without interpolating "fake" frames. But taken collectively, it's clear that the GPU industry has pivoted to AI in the hopes that it can be used to broadly improve rendering, even if not every experiment pans out. And not every experiment will. It's easy to forget now, but once upon a time there were a lot of companies competing to be "the future" of 3D graphics, and some of those approaches were quite off-the-wall. The original NV1 worked on quadrilaterals, not triangles. Voxel-based games had a bit of a moment at the dawn of 3D. There were companies like S3, with the ViRGE and later their own "MeTaL" API, and 3dfx had Glide.

The usefulness of DLSS suggests to me that AI will have a permanent role in game rendering going forward, for boosting low-end GPU performance if nothing else.
 
Upvote
6 (8 / -2)

ranthog

Ars Tribunus Angusticlavius
13,648
(Apologies for the double post, that reply came in while I was typing the previous one.)

I already got my 7900XT, but it'll be well within the EU's two-week refund window when the 9070s drop. Current game plan is to see if I can get a Sapphire or XFX one for reasonably close to the €720-ish that the 7900 cost, then return that one. I was, perhaps foolishly, banking on AMD rolling low for this generation.
I'm just hoping that the new card can help drop some of the prices further down the market. Obviously the 50 series cards aren't really having any effect due to a lack of availability.
 
Upvote
1 (2 / -1)
So don't trust communications directly from the source, got it.
I'm sorry you got tripped up on the extrapolation / interpolation thing. I see where you were coming from.

I would say in this case that Nvidia sacrificed technical accuracy on the altar of making more impressive top-line comparisons. "Predict the future" sounds so much better than "frame interpolation," especially when some people already know they hate interpolated frames on TVs thanks to motion smoothing. Ditto for the "It's RTX 4090 performance, but from an RTX 5070" pitch.

It was instantly obvious that the RTX 5070 would never and could never match the RTX 4090 in raw performance -- and it can't. It doesn't even come close.
 
Upvote
12 (13 / -1)

Voldenuit

Ars Tribunus Angusticlavius
6,545
So don't trust communications directly from the source, got it.
Pretty much. First-party manufacturers have a vested interest in presenting their product in the best light possible, even when it's all smoke and mirrors. This is why third-party independent reviewers and technical experts are so valuable in validating (and refuting) the numbers.
 
Upvote
11 (12 / -1)

OptimusP83

Ars Praefectus
3,659
Subscriptor
Yeah, it really really sucks when you basically only have a day to get the card, if that.

FWIW the Gamers Nexus review is doing a lot of wink wink nudge nudge by comparing it to some AMD cards that they are heavily implying the 9070 roughly equates to. If the point they're making about the 9070 XT Hellhound is true, the AMD cards are going to be pretty predictably head and shoulders better in raster and slightly to significantly worse at RT depending on title.

I still want to see apples to apples comparisons, though, especially with RT because that's where the real test is for AMD this time around.
If AMD's new RT architecture can significantly close the gap, this could be a worrisome development for Nvidia.

Ahh, who am I kidding, everyone will just ignore that AMD provides killer price/performance (barring RT, where it's anywhere from terrible to not half bad) and go buy the shit Nvidia is shoveling because Nvidia is so good at occupying that sweet sweet real estate in our heads.

Interested to see how the 9070s end up falling once the embargoes are all gone.
 
Upvote
5 (8 / -3)
I finally got a 5080 after a month of desperately mashing "add to cart" and I have to say, I am pretty happy with it.

It's tough for Nvidia to explain advances in architecture to the general public besides just "number go up" but the 5080 is definitely a step up from the 30 and 40 series cards in terms of AI features. DLSS and framegen now finally look good enough that I would consider them viable, whereas the ghosting and artifacts on my 3080 made me want to puke. Games that struggled at 1080p Medium settings on my 3080 are now getting 90fps at 4k thanks to all the AI magic. In other words, I think I kind of buy their argument that they can start counting fake frames.

I am also getting a solid 20% OC out of the card while still maintaining temperatures well below 60C under load with a quiet fan profile, which is amazing.
What game were you playing that "struggled" at 1080p Medium on an RTX 3080?

I would typically define struggling as "dips below 30 FPS." I have a hard time believing any game "struggles" by that definition on an RTX 3080 @ 1080p Med, but perhaps you have a higher threshold definition for that.
 
Upvote
25 (25 / 0)

Mustachioed Copy Cat

Ars Praefectus
4,795
Subscriptor++
I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
Seems like everyone involved has an incentive to slow development and maximize distribution of thinner and thinner slices of improvement.

If it’s down to greed vs. the limits of material science, I’m guessing that greed is the factor limiting performance improvements.
 
Upvote
1 (2 / -1)

OptimusP83

Ars Praefectus
3,659
Subscriptor
I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
I think it's about where you're spending your R&D money. Gen over gen they're making huge gains in AI performance. Why would that necessarily mean that raster performance is at the point of diminishing returns?

AI is what is getting Nvidia the insane profit margins someone shared earlier, so of course they're dumping their money into developing that and keeping their lead in that domain. Raster/RT performance improvements could be minimal to nonexistent because they don't need to improve it to sell their enterprise/AI cards, and they don't need to do a damn thing to sell their consumer GPUs.
 
Upvote
4 (4 / 0)

OptimusP83

Ars Praefectus
3,659
Subscriptor
View attachment 104168

Looks like a good mid-range value.
Only at MSRP. Which we know is a fabrication. The very next chart in that video is comparing market pricing and the 5070 is more than halfway down the list, which is topped by the 7800XT and the 7700XT.

Edit: I was somewhat incorrect; it was two charts ahead, and the 5070 is closer to three-quarters of the way down the list, not halfway. And keep in mind this is comparing current retail pricing in Australia, in AUD. But, yeah, the Nvidia cards are nowhere NEAR good value.
 
Last edited:
Upvote
18 (18 / 0)

Dzov

Ars Legatus Legionis
14,763
Subscriptor++
Frame gen is using interpolation. From my understanding the order of Nvidia frame gen is basically this:
1. real frame is displayed on screen
2. next real frame is rendered and held in queue
3. 1 to 3 frames are generated between the last real frame and next real frame
4. generated frames are displayed on screen
5. real frame in queue displayed on screen
Repeat
Why not just display the real frame if it's already there? That's the definition of latency.
 
Upvote
6 (7 / -1)

Mustachioed Copy Cat

Ars Praefectus
4,795
Subscriptor++
Edit2: Why TF would you interpolate anyway? That would deliberately add latency.
I guess for any playback that isn't responsive to user input. It seems like a post-processing interpolation tech that they got to scale to the point of working in "real time," and that then turned into a marketing point.

Your brain rebels against the concept because putting it front and center is scummy and moronic.
 
Upvote
1 (1 / 0)

OptimusP83

Ars Praefectus
3,659
Subscriptor
What game were you playing that "struggled" at 1080p Medium on an RTX 3080?

I would typically define struggling as "dips below 30 FPS." I have a hard time believing any game "struggles" by that definition on an RTX 3080 @ 1080p Med, but perhaps you have a higher threshold definition for that.
I, too, would like to know this game...
 
Upvote
5 (6 / -1)

Incarnate

Ars Tribunus Angusticlavius
8,890
There is no way in hell I'm dropping six figures on a 10-15% increase in performance along with AI generated frames. Also I will not support the scarcity markets and second hand scalpers - they can all go DIAF.
6 figures? Isn't that a minimum of $100,000? How many are you buying?
 
Upvote
3 (3 / 0)

Jivejebus

Ars Scholae Palatinae
705
Subscriptor++
I finally got a 5080 after a month of desperately mashing "add to cart" and I have to say, I am pretty happy with it.

It's tough for Nvidia to explain advances in architecture to the general public besides just "number go up" but the 5080 is definitely a step up from the 30 and 40 series cards in terms of AI features. DLSS and framegen now finally look good enough that I would consider them viable, whereas the ghosting and artifacts on my 3080 made me want to puke. Games that struggled at 1080p Medium settings on my 3080 are now getting 90fps at 4k thanks to all the AI magic. In other words, I think I kind of buy their argument that they can start counting fake frames.

I am also getting a solid 20% OC out of the card while still maintaining temperatures well below 60C under load with a quiet fan profile, which is amazing.
I find it very hard to believe a 3080 was struggling at 1080p medium. I have an RX 6800, a much less powerful card, running 1440p at mostly high settings in modern games without much issue.

If anything, the reviews for the 5000 series have shown very little architectural improvement over the 4000 series, since we just see higher power consumption and minimal gains in frame rates.
 
Upvote
13 (13 / 0)

matt_w

Ars Scholae Palatinae
1,165
Pedantry against the trend: these aren't "interpolated frames," they're extrapolated frames. Interpolation draws from data on both sides of the created data point (the frame or frames before and after), while extrapolation draws from data on only one side (the frame or frames before or after), not both. This tech extrapolates, since it completes generation of the new frame(s) before subsequent ones exist.

All that said, I'm late to the party and there's no hope of overriding this linguistic trend, so, sucks to be a pedant I guess.

Edit: I'm wrong, and Nvidia is actually interpolating. According to this video at 12:58, they are analyzing two frames and generating frames in between them. I made the mistake because I assumed they wouldn't just throw away latency to get these results.

You've realized that it is actually interpolation; the next step is to realize what impact that has on latency. Because it has to wait for two native frames before giving four frames to the user, it has to add two native frames of latency to the output.
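A tiny numeric sketch of that distinction, purely illustrative (the numbers are made up and the two functions are hypothetical stand-ins, not anyone's actual algorithm): extrapolation can run as soon as the latest native frame exists, while interpolation can't produce anything until the next native frame has also been rendered.

def extrapolate(prev2, prev1):
    # Uses only past data: available immediately after prev1, no added wait.
    return prev1 + (prev1 - prev2)

def interpolate(prev1, next1, t=0.5):
    # Needs the NEXT native frame too, so prev1 must be held until next1 exists.
    return prev1 + t * (next1 - prev1)

# Pretend these are some per-frame quantity (e.g. an object's position):
f0, f1, f2 = 0.0, 10.0, 20.0
print(extrapolate(f0, f1))   # 20.0 -- a guess made before f2 is rendered
print(interpolate(f1, f2))   # 15.0 -- only possible once f2 has been rendered

That waiting-for-the-next-native-frame step is where the extra latency comes from.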
 
Upvote
0 (3 / -3)

zk78751

Seniorius Lurkius
34
I have a 6-year-old 2070 and play all games at 60 Hz ultrawide 1440p. I can almost always play at a high enough quality that a further increase in quality is marginal. I can't help but realize I want a 5070 simply due to FOMO, and I have money to burn. (For example, all I heard about Indiana Jones was how demanding it "is." But I was able to get a perfectly acceptable 60 fps with the original DLSS, even setting it slightly higher than the mid settings it gave me as default, and it was absolutely gorgeous, notably in the beginning artifact museum, with amazing shadows and lighting. FOMO! My 2070 would of course be absolutely ridiculed as nearly worthless by the videocardz mafia. Sad.)
 
Upvote
7 (8 / -1)

Dzov

Ars Legatus Legionis
14,763
Subscriptor++
The hilarious thing is Steve absolutely rips into the 5070. They cherry pick a single graph without any of the context.
I was just watching that video. He mentions several times that we should wait for the 9070/9070 xt review tomorrow. I'm taking this advice. Actually, I have half a mind to take the 6th off and camp out at MicroCenter.
 
Upvote
7 (7 / 0)