> I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.

You are spot on.
My 3070 is starting to get a bit old, so I'll probably replace it with a 9070 XT if I can manage to get one, which will be the first non-Nvidia card I've bought for myself in about 20 years.
> Seems like it would be a worthy upgrade from my 2070... at €550... but I put the odds of getting one under 1k as basically non-existent which makes the whole idea moot.

With 12 GB of VRAM it's not worth it. It'll already be struggling in some games at 1440p and it's only going to get worse.
> I have a 6 year old 2070, play all games at 60hz ultrawide 1440p. I can almost always play at a high enough quality that a further increase in quality is marginal. I can't help but realize I want a 5070 simply due to FOMO and have money to burn. (For example, all I heard about Indiana Jones was how demanding it "is." But I was able to get a perfectly acceptable og DLSS 60fps even setting it slightly higher than the mid settings it gave me as default, and it was absolutely gorgeous (notably in the beginning artifact museum), with amazing shadows and lighting. FOMO! My 2070 would of course be absolutely ridiculed as nearly worthless by the videocardz mafia. Sad.)

The interesting thing to me is, if you were to go out and buy an RTX 2070 today, it's still selling for around $700 new.
I guess for any playback that isn’t responsive to user input. It seems like a post-processing interpolation tech that they got to scale to the point of working in ‘real-time’, which then turned into a marketing point.
Your brain rebels against the concept because putting it front and center is scummy and moronic.
> I guess for any playback that isn’t responsive to user input.

This tech is specifically marketed as a tool to enhance interactive gaming.
> You've realized that it is actually interpolation, and the next step is to realize what impact that has on latency. Because it has to wait for 2 native frames before giving 4 frames to the user, it has to add 2 native frames of latency to the output.

What makes you think I haven't realized that?
> Only at MSRP. Which we know is a fabrication. The very next chart in that video is comparing market pricing and the 5070 is more than halfway down the list, which is topped by the 7800XT and the 7700XT.
> Edit: I was somewhat incorrect, it was two charts ahead and the 5070 is closer to 3/4s of the way to the bottom, not halfway down. And keep in mind this is comparing current retail pricing in Australia in AUD. But, yea, the Nvidia cards are nowhere NEAR good value.

QFT.
> Seems like everyone involved has an incentive to slow development and maximize distribution of thinner and thinner slices of improvement.

Yeah, that's really not the case.
If it’s down to greed vs. the limits of material science, I’m guessing that greed is the factor limiting performance improvements.
I would almost welcome that. Modern games, at least, tend to be graphically extremely impressive but often lacking in so many other areas; it'd be interesting to see which way development would push if graphical improvements were suddenly off the table.
> With 12 GB of VRAM it's not worth it. It'll already be struggling in some games at 1440p and it's only going to get worse.

If I'm reading a review to help inform a purchasing decision, the moment I arrive at 12 GB of VRAM is when I stop reading. Nope. Hard stop.
> If I'm reading a review to help inform a purchasing decision, the moment I arrive at 12 GB of VRAM is when I stop reading. Nope. Hard stop.

I've got no problem with 12GB of VRAM at the right price point. The B580 at $250 (if actually available) with 12GB of VRAM seems like a great option.
> 6 figures? Isn't that a minimum of $100,000? How many are you buying?

Brain fart of the likes that would clear a room and create a hazmat situation. Four figures before the period... facepalm. Corrected.
> Wow, really unfortunate to read about the power and heat situation with the new Nvidia cards. I play almost all games at 1080p, so I'm more interested in efficient and quiet cards. I'm shopping down at the 5060 price range, but I can only assume that the power and heat trend will apply to the whole line of 50xx cards.

This is an FE problem. GN tested some AIB cards as well, and they were in the ~60C range vs the 5070 FE, which was in the 70s.
> I'm sorry... HOW much do you think these retail for?

Corrected... facepalm
Six figures refers to a number that has six digits, specifically any amount between $100,000 and $999,999.
> You are spot on.

TSMC also charges a LOT for their 4nm and 3nm processes, so making GPUs is more expensive than it ever was... and now with the Tariffs, expect them to get even more expensive.
The limits of physics in silicon, and the massive budgets (and layoffs/poor working conditions) associated with making AAA/AA games, no longer push salivating gamers to upgrade their GPUs every 18-36 months. Nvidia and AMD have taken note, and seek to emulate the Apple model, where they charge buyers 2-3x the price to make up for the slowed pace of consumer demand.
> With 12 GB of VRAM it's not worth it. It'll already be struggling in some games at 1440p and it's only going to get worse.

This is absolutely true. My 7800XT uses just under 12GB of VRAM running Horizon Zero Dawn at 4k with Quality FSR (so rendered at 1440p and upscaled to 4k). While average framerates might not drop significantly with only 12GB of VRAM, the stuttering and pop-in from having to load new data to VRAM is incredibly annoying. Some games don't even try once VRAM is maxed. Control, Halo Infinite, and others will gladly just load some 16px texture and never even try to load the proper texture. So in Control, with my old 6600XT (8GB VRAM) I'd get paintings on the wall that are complete blurs and regardless of what I do, they'll never load back in after I've been playing for a few minutes and the VRAM is full. Framerates would be decent but the textures were just missing.
> If AMD's new RT architecture can significantly close the gap with Nvidia, this could be a worrisome development for Nvidia.
> Ahh, who am I kidding, everyone will just ignore that AMD provides killer price/performance (barring RT, where it's anywhere from terrible to not half bad) and go buy the shit Nvidia is shoveling, because Nvidia is so good at occupying that sweet sweet real estate in our heads.
> Interested to see how the 9070s end up falling once the embargoes are all gone.

Unfortunately for those of us who use their GPU for games and compute (in my case CUDA-powered photo editing tools), Nvidia is the only game in town until you can persuade engineers to support AMD cards. So I will buy the shit they produce, though clearly not this pile.
> This is absolutely true. My 7800XT uses just under 12GB of VRAM running Horizon Zero Dawn at 4k with Quality FSR (so rendered at 1440p and upscaled to 4k). While average framerates might not drop significantly with only 12GB of VRAM, the stuttering and pop-in from having to load new data to VRAM is incredibly annoying. Some games don't even try once VRAM is maxed. Control, Halo Infinite, and others will gladly just load some 16px texture and never even try to load the proper texture. So in Control, with my old 6600XT (8GB VRAM) I'd get paintings on the wall that are complete blurs and regardless of what I do, they'll never load back in after I've been playing for a few minutes and the VRAM is full. Framerates would be decent but the textures were just missing.

So, a few things on this:
Now you can blame the game engine or the devs, but ultimately the problem is caused by insufficient VRAM and the solution is more VRAM.
> Why not just display the real frame if it's already there? That's the definition of latency.

Because the frames you're generating fall between the latest and the previous real frame.
> Because the frames you're generating fall between the latest and the previous real frame.

Did you look at his flow chart? Frame gen is using interpolation. From my understanding, the order of Nvidia frame gen is basically this (there's a toy sketch after the list):
1. real frame is displayed on screen
2. next real frame is rendered and held in queue
3. 1 to 3 frames are generated between the last real frame and next real frame
4. generated frames are displayed on screen
5. real frame in queue displayed on screen
Repeat
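For what it's worth, here's a tiny Python sketch of that ordering (a toy model, not Nvidia's actual pipeline; the even pacing and the single queued real frame are my assumptions). It just illustrates that interpolated frames can only be shown after the newer real frame they depend on exists, so that real frame reaches the screen later than it otherwise would:

```python
# Toy model of interpolation-based frame generation (not Nvidia's actual code).
# Assumes one queued real frame and evenly paced output, per the list above.

def frame_gen(native_frames, generated_per_pair=3):
    """Return (display_time_ms, label) pairs for an interpolated output stream.

    native_frames: list of (render_done_time_ms, label) for real frames,
    in render order. Generated frames sit between a pair of real frames,
    so nothing from that pair can be shown before the newer frame exists.
    """
    out = []
    for (t_prev, prev), (t_next, nxt) in zip(native_frames, native_frames[1:]):
        interval = t_next - t_prev
        step = interval / (generated_per_pair + 1)
        for i in range(1, generated_per_pair + 1):
            # interpolated frames, shown only after t_next (their newer input)
            out.append((t_next + i * step, f"gen {prev}->{nxt} #{i}"))
        out.append((t_next + interval, nxt))  # the queued real frame, shown last
    return out

if __name__ == "__main__":
    natives = [(i * 16.7, f"real {i}") for i in range(4)]  # ~60 fps native
    for t, label in frame_gen(natives):
        print(f"{t:6.1f} ms  {label}")
```

With natives ~16.7 ms apart, "real 1" finishes rendering at 16.7 ms but isn't displayed until ~33 ms in this model, which is the latency cost being argued about above.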
> Only at MSRP. Which we know is a fabrication. The very next chart in that video is comparing market pricing and the 5070 is more than halfway down the list, which is topped by the 7800XT and the 7700XT.
> Edit: I was somewhat incorrect, it was two charts ahead and the 5070 is closer to 3/4s of the way to the bottom, not halfway down. And keep in mind this is comparing current retail pricing in Australia in AUD. But, yea, the Nvidia cards are nowhere NEAR good value.

To be fair, with the incoming US tariffs, those might not be too far off the prices in the US, in US dollars.
> I've got no problem with 12GB of VRAM at the right price point. The B580 at $250 (if actually available) with 12GB of VRAM seems like a great option.
> But I absolutely agree that if Intel can put 12GB of VRAM on a $250 card, it shouldn't be hard for Nvidia to exceed that -- except, of course, if Nvidia puts more VRAM on lower-end GPUs, it also makes them more attractive for AI.
> I think it's very convenient for Nvidia that putting more VRAM on its lower-end consumer models might make shortages worse. My problem is, I can see the argument.
> The idea of DirectStorage ought to be able to offer some help here, since streaming game resources in from an SSD could theoretically reduce pressure on VRAM, but in practice it seems to have made games slower (judging by a handful of articles on the topic).
> I've wondered before if the solution might not be a GPU-mounted SSD. Direct-connect it to the GPU with additional PCIe lane allocation on the card. It'd be invisible to the operating system but could operate as a streaming texture cache for games -- and it wouldn't be at all useful for AI.

Your solution is marvelous! I'd love such an invention/development.
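If it helps to picture it, here's a toy sketch of the tiered lookup such an on-card cache implies (purely hypothetical: the class, names, and sizes are made up, and a real driver would manage this far below the application level):

```python
# Hypothetical sketch of the "SSD on the graphics card" idea above: a small
# LRU cache standing in for VRAM, backed by a larger on-card store standing
# in for the card-mounted SSD. Names and numbers are invented for illustration.
from collections import OrderedDict

class TieredTextureCache:
    def __init__(self, vram_budget_mb, card_ssd):
        self.vram_budget_mb = vram_budget_mb
        self.card_ssd = card_ssd          # texture name -> size in MB
        self.vram = OrderedDict()         # resident textures, in LRU order

    def request(self, name):
        """Return where the texture was served from, evicting least recently
        used textures from 'VRAM' when something has to be streamed in."""
        if name in self.vram:
            self.vram.move_to_end(name)   # mark as recently used
            return "vram hit"
        size = self.card_ssd[name]        # stream from the on-card SSD
        while sum(self.vram.values()) + size > self.vram_budget_mb and self.vram:
            self.vram.popitem(last=False) # evict the least recently used texture
        self.vram[name] = size
        return "streamed from card SSD"

if __name__ == "__main__":
    store = {"rock_4k": 90, "wall_4k": 120, "face_4k": 60, "sky_4k": 200}
    cache = TieredTextureCache(vram_budget_mb=256, card_ssd=store)
    for tex in ["rock_4k", "wall_4k", "rock_4k", "sky_4k", "face_4k", "wall_4k"]:
        print(tex, "->", cache.request(tex))
```

The point is only the fallback order: serve from "VRAM" when resident, otherwise stream from the on-card store and evict whatever hasn't been used recently.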
> Seems like NVidia basically bet all of their transistor budget on frame generation for 5xxx and unfortunately it is getting pretty soundly rejected by gamers.

I don't think Nvidia bet all its transistor budget on MFG. I think Nvidia bet its transistor budget and memory bandwidth improvements on AI, and MFG is the best feature Nvidia engineers could come up with (for now) to showcase those AI capabilities in a consumer context.
> "The 5070's CUDA core count falls right in between the RTX 4070 and the 4070 Super's"
> Your chart says otherwise.

That's the 4070, 5070, and 4070 Super, in order. You must be looking at the 4070 Ti Super or the 5070 Ti.
> Did you look at his flow chart?

Yes, I know how it works. You're the one not following.
The latest frame is step 2.
> 50x0 in a nutshell, but also with driver problems, defective chips, higher power draw.

And sudden changes in legacy code compatibility without pre-announcement.
Notwithstanding the above, I believe you about the effect you are describing, but that problem is the byproduct of a bad approach to an issue. Whether that's the game engine, in-driver optimizations, or something else, I can't say -- but the solution is only "more VRAM" inasmuch as it allowed you to brute-force the situation in this specific case. I've got a lot of sympathy for that kind of approach -- it works from the consumer side -- but the responsibility ultimately is on devs and/or AMD to fix that kind of problem.
8GB, after all, is scarcely an uncommon amount of VRAM. A lot of the most popular GPUs in the world are still 8GB or less.
> A lot of games have followed OP's described behavior in recent years (missing or low res textures), even on 12 GB cards - Halo Infinite, Forspoken, Plague Tale Requiem. There are also games that run out of VRAM at 12 GB - RE4 at launch would crash on 12 GB nvidia cards with ray-tracing enabled because of this, and TLOU had stuttering issues on 12 GB cards at 1440p and above.
> The fault may lie with developers, since, as you say, 8 GB cards are still commonplace (especially in laptops). But the game devs either don't care or don't have the bandwidth to optimize, so as consumers, the best way to avoid the problem is to overprovision on VRAM. And in 2025 (really, since 2023 for some games), 16 GB is what you'd want to aim for if you do a lot of AAA gaming.

I'm a little surprised to hear that there are games that just leave low-rez assets loaded without any real option for the player to do something beyond buying a better GPU. Typically I'd at least expect some messing around in menu options to yield a more satisfactory result.
> I'm not qualified to question the allegations of laziness due to a perceived lack of competition. But I'm wondering if we aren't just bumping up against diminishing returns for graphics technology such that generational improvements (that don't come from die-shrinks, more VRAM, and image processing tricks) are just not on the table anymore.

No. This is entirely down to planned obsolescence, segmentation, and silicon rationing within the tiers below the top.
> I'm a little surprised to hear that there are games that just leave low-rez assets loaded without any real option for the player to do something beyond buying a better GPU. Typically I'd at least expect some messing around in menu options to yield a more satisfactory result.

Some games have multiple levels of texture detail loaded into VRAM, which is useful for LOD (you don't need high res textures for faraway objects). When the video card runs out of VRAM, the high res textures tend to get flushed first, because you can load the low res texture onto the same model even when it's close.
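Here's a rough, engine-agnostic sketch of that eviction behavior (assumed, with made-up sizes): each texture keeps its mip chain, and when the VRAM budget is blown the largest resident mips get dropped first, so the renderer falls back to whatever lower-resolution mip survives:

```python
# Rough sketch of the behavior described above (assumed, engine-agnostic):
# under VRAM pressure, drop the highest-resolution mips first and keep at
# least the smallest mip of each texture so something can still be drawn.

def drop_mips_to_fit(textures, vram_budget_mb):
    """textures: {name: [mip sizes in MB, largest first]} -> resident mips."""
    resident = {name: list(mips) for name, mips in textures.items()}
    def total():
        return sum(sum(mips) for mips in resident.values())
    while total() > vram_budget_mb:
        # Evict the single largest resident mip across all textures.
        name = max((n for n, m in resident.items() if len(m) > 1),
                   key=lambda n: resident[n][0], default=None)
        if name is None:
            break                      # nothing left that can be dropped
        resident[name].pop(0)          # drop that texture's highest-res mip
    return resident

if __name__ == "__main__":
    scene = {"painting": [64, 16, 4, 1], "floor": [128, 32, 8, 2], "prop": [32, 8, 2]}
    for name, mips in drop_mips_to_fit(scene, vram_budget_mb=80).items():
        print(f"{name}: best resident mip is {mips[0]} MB")
```

Which would explain the blurry paintings: the low mip is still resident and drawable, so nothing forces the high one back in while the budget stays blown.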
Your anecdotal results imply that there may even be a 12GB / 16GB variance in RT perf between AMD and NV, if the 9070 XT is good enough at ray tracing to make the situation competitive in the first place.
I agree with you that 16GB should've been standard a long time ago, lower in the stack than it is now, even with the 9070's launch imminent.
> What game were you playing that "struggled" at 1080p Medium on an RTX 3080?
> I would typically define struggling as "dips below 30 FPS." I have a hard time believing any game "struggles" by that definition on an RTX 3080 @ 1080p Med, but perhaps you have a higher threshold definition for that.

Indiana Jones and the Great Circle. Turn it past medium with a 3080, and the game tells you you don't have enough VRAM. Even at medium, I had stuttering and occasional dips below 30. It's actually the reason I decided to get a new card. I also see similar uplift in Hogwarts Legacy and Cyberpunk, the other two games that I quit playing due to performance issues on the 3080. I like raytracing, so sue me. Most other stuff was playable, but even simpler indie games would get my card spun up to crazy RPMs at 4k. I prefer not to feel like I'm playing games in a wind tunnel. Now I have a 5080 and all of those problems are gone!