The timestamp is 12:47; the marketing is intentionally deceptive. Jensen claims on stage that it can predict the future. Can you please link the relevant timestamp? I'm not going through a 30-minute video for this.
Took a quick skim, from your video, a graphic showing extrapolation:
View attachment 104188
Thanks, this is really quite absurd; the tech to "predict the future" actually does exist and is used now for VR. Why TF are they deliberately adding latency? The timestamp is 12:47; the marketing is intentionally deceptive. Jensen claims on stage that it can predict the future.
Frame gen is using interpolation. From my understanding, the order of Nvidia frame gen is basically this: Pedantry against the trend: these are not "interpolated frames," they're extrapolated frames. Interpolation draws from the data on both sides of the created data point (the frame or frames before and the frame or frames after); extrapolation draws from the data on only one side of the data point (the frames before or the frames after), not both. This tech extrapolates, as it completes generation of the new frame(s) before subsequent ones exist.
All that said, I'm late to the party and there's no hope of overriding this linguistic trend, so, sucks to be a pedant I guess.
"Adding latency" is exactly what it does, which is why Nvidia Reflex needs to be enabled to reduce it again... It's why a lot of people don't like framegen. Digital Foundary goes into a lot of detail about this in several videos, but here's their review of the 5070. The image above (like many Nvidia presentations) is very misleading. What it should show at the end is "Rendered Frame 2." It IS interpolating between frames, NOT extrapolating and it adds significant latency - upwards of 60ms in some cases.Can you please link the relevant timestamp, I'm not going through a 30 minute video for this.
Edit1: Took a quick skim, from your video, a graphic showing extrapolation:
View attachment 104188
Edit2: Why TF would you interpolate anyway? That would deliberately add latency.
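To put some very rough numbers on why interpolation costs latency, here's a back-of-the-envelope toy model (my own assumptions, not Digital Foundry's methodology; every figure here is made up for illustration):

```python
# Toy model only: all numbers are illustrative assumptions, not measurements.
base_fps = 40                        # assumed real render rate before frame gen
real_frame_ms = 1000 / base_fps      # 25 ms between real frames

# With interpolation, nothing derived from real frame N+1 can be shown until
# N+1 has finished rendering AND the generated in-between frame has had its
# turn on screen, so the newest real frame arrives later than it would
# without frame gen.
generation_ms = 3                    # assumed cost of generating one frame
hold_back_ms = real_frame_ms / 2     # generated frame occupies the first output slot (2x mode)

print(f"~{generation_ms + hold_back_ms:.0f} ms structural penalty in this toy model")
# Measured end-to-end deltas are larger once render queues, display pacing,
# and Reflex on/off are factored in.
```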
AMD AFMF works largely the same as DLSS Frame Gen. While I'm here, does AMD's product do the same thing?
Thanks for the kind advice. Can you please link the relevant timestamp? I'm not going through a 30-minute video for this.
Edit1: Took a quick skim, from your video, a graphic showing extrapolation:
View attachment 104188
Edit2: Why TF would you interpolate anyway? That would deliberately add latency.
Edit3: This was driving me a little nuts. I found the timestamp, 12:48; thank goodness for searchable transcripts.
In the future, please include a timestamp and a quote in your video citations.
It looks like it's in a pretty miserable state. I wouldn't necessarily extrapolate from any single game to larger trends about the industry. As someone who has been out of the gaming PC building arena since the 5700 era, I can't believe how shitty the market is now. Not just for PCs, but also consoles. I can't believe you need a PS5 Pro to run MH: Wilds at a capped 60 fps using FSR1. What the hell happened?
I would guess that we're more so seeing relatively linear improvements in graphics technology, while the tasks they're running up against are scaling exponentially. Especially when you start looking at things like ray tracing. I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
So don't trust communications directly from the source, got it. Thanks for the kind advice.
In the future, before ranting about linguistic trends, please make sure you're factually correct.
Yeah, it's a misconception Nvidia purposefully cultivated, to make the idea of frame generation seem more appealing. Pedantry against the trend: these are not "interpolated frames," they're extrapolated frames. Interpolation draws from the data on both sides of the created data point (the frame or frames before and the frame or frames after); extrapolation draws from the data on only one side of the data point (the frames before or the frames after), not both. This tech extrapolates, as it completes generation of the new frame(s) before subsequent ones exist.
All that said, I'm late to the party and there's no hope of overriding this linguistic trend, so, sucks to be a pedant I guess.
Edit: I'm wrong, and Nvidia is actually interpolating. According to this video, at 12:58 they are analyzing two frames and generating frames in between them. I made the mistake because I assumed they wouldn't just throw away latency to get these results.
To be fair, it still makes sense as an option. Maybe that additional latency is a problem in certain games, but it isn't in all games. "Adding latency" is exactly what it does, which is why Nvidia Reflex needs to be enabled to reduce it again... It's why a lot of people don't like framegen. Digital Foundry goes into a lot of detail about this in several videos, but here's their review of the 5070. The image above (like many Nvidia presentations) is very misleading. What it should show at the end is "Rendered Frame 2." It IS interpolating between frames, NOT extrapolating, and it adds significant latency - upwards of 60 ms in some cases.
The reason nobody is saying those words is the long-term hope+goal that AI will improve. I don't think it's an accident that everyone is leaning on AI as a method of improving visual quality. It's an effort to return to the era when GPU gains were faster and you could get more immediate improvements. While the stock market would scream and howl at that level of honesty and candor, it'd feel a lot more fair if Nvidia, if not saying the words, would at least price accordingly for the real-world market. Instead we get patented frame-smearing technology and get told it's a mid-range powerhouse.
Edit: typos, clarity
I'm just hoping that the new card can help drop some of the prices further down the market. Obviously the 50 series cards aren't really having any effect due to a lack of availability. (Apologies for the double post; that reply came in while I was typing the previous one.)
I already got my 7900 XT, but it'll be well within the EU's two-week refund window when the 9070s drop. Current game plan is to see if I can get a Sapphire or XFX one for reasonably close to the €720-ish that the 7900 cost, then return the 7900 XT. I was, perhaps foolishly, banking on AMD rolling low for this generation.
I'm sure someone out there is both crazy and rich enough to try gaming on a DGX SuperPOD... I'm sorry... HOW much do you think these retail for?
Six figures refers to a number that has six digits, specifically any amount between $100,000 and $999,999.
I'm sorry you got tripped up on the extrapolation / interpolation thing. I see where you were coming from. So don't trust communications directly from the source, got it.
In many ways, this is barely an upgrade.
Pretty much. First-party manufacturers have a vested interest in presenting their product in the best light possible, even when it's all smoke and mirrors. This is why third-party independent reviewers and technical experts are so valuable in validating (and refuting) the numbers. So don't trust communications directly from the source, got it.
If AMD's new RT architecture can significantly close the gap with Nvidia, this could be a worrisome development for Nvidia. Yeah, it really really sucks when you basically only have a day to get the card, if that.
FWIW, the Gamers Nexus review is doing a lot of wink wink, nudge nudge by comparing it to some AMD cards they are heavily implying the 9070 roughly equates to. If the point they're making about the 9070 XT Hellhound is true, the AMD cards are going to be pretty predictably head and shoulders better in raster and slightly to significantly worse at RT depending on the title.
I still want to see apples-to-apples comparisons, though, especially with RT, because that's where the real test is for AMD this time around.
What game were you playing that "struggled" at 1080p Medium on an RTX 3080? I finally got a 5080 after a month of desperately mashing "add to cart," and I have to say, I am pretty happy with it.
It's tough for Nvidia to explain advances in architecture to the general public besides just "number go up," but the 5080 is definitely a step up from the 30 and 40 series cards in terms of AI features. DLSS and framegen now finally look good enough that I would consider them viable, whereas the ghosting and artifacts on my 3080 made me want to puke. Games that struggled at 1080p Medium settings on my 3080 are now getting 90 fps at 4K thanks to all the AI magic. In other words, I think I kind of buy their argument that they can start counting fake frames.
I am also getting a solid 20% OC out of the card while still maintaining temperatures well below 60C under load with a quiet fan profile, which is amazing.
Seems like everyone involved has an incentive to slow development and maximize distribution of thinner and thinner slices of improvement. I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
I think it's about where you're spending your R&D money. Gen over gen, they're making huge gains in AI performance. Why would that necessarily mean that raster performance is at the point of diminishing returns? I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.
Only at MSRP. Which we know is a fabrication. The very next chart in that video is comparing market pricing and the 5070 is more than halfway down the list, which is topped by the 7800XT and the 7700XT.
Why not just display the real frame if it's already there? That's the definition of latency. Frame gen is using interpolation. From my understanding, the order of Nvidia frame gen is basically this:
1. real frame is displayed on screen
2. next real frame is rendered and held in queue
3. 1 to 3 frames are generated between the last real frame and next real frame
4. generated frames are displayed on screen
5. real frame in queue is displayed on screen
Repeat
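For what it's worth, that queue is the whole latency story. Here's a minimal sketch of the loop above (purely illustrative Python; the function names and timings are my own stand-ins, not Nvidia's actual pipeline):

```python
import time

def render_next_real_frame():
    """Stand-in for the game/GPU producing the next real frame (hypothetical)."""
    time.sleep(1 / 60)          # pretend the game renders at 60 fps
    return object()             # placeholder "frame"

def generate_between(prev_frame, next_frame, count):
    """Stand-in for the frame-gen model interpolating between two real frames."""
    return [object() for _ in range(count)]

def present(frame):
    """Stand-in for scanning a frame out to the display."""
    pass

previous_real = render_next_real_frame()
present(previous_real)                       # 1. real frame is displayed
for _ in range(10):                          # bounded here so the sketch terminates
    queued_real = render_next_real_frame()   # 2. next real frame rendered, held in queue
    for fake in generate_between(previous_real, queued_real, count=3):
        present(fake)                        # 3-4. generated frames are shown first
    present(queued_real)                     # 5. the queued real frame finally appears
    previous_real = queued_real              # repeat
```

The newest real frame always sits behind the generated ones in that loop, which is why it reaches the screen later than it would with frame gen off.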
I guess for any playback that isn’t responsive to user input. Seems like a post-processing interpolation tech they got to scale to the point of working in ‘real-time’ that turned into a marketing point. Edit2: Why TF would you interpolate anyway? That would deliberately add latency.
I, too, would like to know this game... What game were you playing that "struggled" at 1080p Medium on an RTX 3080?
I would typically define struggling as "dips below 30 FPS." I have a hard time believing any game "struggles" by that definition on an RTX 3080 @ 1080p Med, but perhaps you have a higher threshold definition for that.
6 figures? Isn't that a minimum of $100,000? How many are you buying? There is no way in hell I'm dropping six figures on a 10-15% increase in performance along with AI-generated frames. Also I will not support the scarcity markets and second-hand scalpers - they can all go DIAF.
I find it very hard to believe a 3080 was struggling at 1080p medium. I have an RX 6800, a much less powerful card, running 1440p at mostly high settings in modern games without much issue. I finally got a 5080 after a month of desperately mashing "add to cart," and I have to say, I am pretty happy with it.
It's tough for Nvidia to explain advances in architecture to the general public besides just "number go up," but the 5080 is definitely a step up from the 30 and 40 series cards in terms of AI features. DLSS and framegen now finally look good enough that I would consider them viable, whereas the ghosting and artifacts on my 3080 made me want to puke. Games that struggled at 1080p Medium settings on my 3080 are now getting 90 fps at 4K thanks to all the AI magic. In other words, I think I kind of buy their argument that they can start counting fake frames.
I am also getting a solid 20% OC out of the card while still maintaining temperatures well below 60C under load with a quiet fan profile, which is amazing.
Pedantry against the trend: these are not "interpolated frames," they're extrapolated frames. Interpolation draws from the data on both sides of the created data point (the frame or frames before and the frame or frames after); extrapolation draws from the data on only one side of the data point (the frames before or the frames after), not both. This tech extrapolates, as it completes generation of the new frame(s) before subsequent ones exist.
All that said, I'm late to the party and there's no hope of overriding this linguistic trend, so, sucks to be a pedant I guess.
Edit: I'm wrong, and Nvidia is actually interpolating. According to this video, at 12:58 they are analyzing two frames and generating frames in between them. I made the mistake because I assumed they wouldn't just throw away latency to get these results.
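If it helps anyone else keep the two straight, the distinction is easy to show in a toy one-dimensional form (purely illustrative, nothing to do with how any real frame-gen model works):

```python
# Three samples of some value over time; we want a value at t = 1.5.
samples = {0.0: 10.0, 1.0: 20.0, 2.0: 30.0}

# Interpolation: uses samples on BOTH sides of the new point, so the later
# sample (t = 2.0) must already exist before you can produce anything.
interpolated = (samples[1.0] + samples[2.0]) / 2   # 25.0

# Extrapolation: uses only samples BEFORE the new point and projects the
# trend forward, so nothing later is needed (and no extra waiting).
slope = samples[1.0] - samples[0.0]
extrapolated = samples[1.0] + slope * 0.5          # also 25.0, but only because this toy data is linear

print(interpolated, extrapolated)
```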
The hilarious thing is Steve absolutely rips into the 5070. They cherry-pick a single graph without any of the context. You should source your graph so we can at least judge the conditions it was derived from.
I was just watching that video. He mentions several times that we should wait for the 9070/9070 XT review tomorrow. I'm taking this advice. Actually, I have half a mind to take the 6th off and camp out at MicroCenter. The hilarious thing is Steve absolutely rips into the 5070. They cherry-pick a single graph without any of the context.
Maybe they mean yen. I'm sorry... HOW much do you think these retail for?
Six figures refers to a number that has six digits, specifically any amount between $100,000 and $999,999.
Both Steves do, in fact. (Steve from Gamers Nexus and Steve from Hardware Unboxed) The hilarious thing is Steve absolutely rips into the 5070. They cherry-pick a single graph without any of the context.