Even setting aside Frame Generation, this is a seriously fast, power-hungry GPU.
See full article...
Seriously. Talking about damning with faint praise... In case the a/b headline testing changes things, we're both reacting to this headline:

"So the follow-up to the 4090, the 5090, has better performance than its predecessor? The article was informative, but the title needs work. It's essentially saying water is wet."
So the follow-up to the 4090, the 5090, has better performance than its predecessor? The article was informative, but the title needs work.
It's essentially saying water is wet.
"It is unquestionably a great performance card in every way we would have expected, such as raw performance; it is also undoubtedly bad in all the ways we expected in terms of price and power consumption."

The only thing that has me even thinking about the 50xx series is potential raytracing gains. My 3080 Ti is still plenty powerful for 4K/120fps, but the few times I've tried running raytracing at those resolutions, it feels like I may as well just use the integrated graphics on my CPU.
I have a 7900 XTX and it still far exceeds my requirements, not that I would spend nearly double for the 5090.
"So it is basically an SLI version of a 4090... I am assuming that is why they got rid of SLI; too easy for people to upscale performance themselves."

SLI dying was more a result of developers not wanting to optimize games for it, since it added more overhead, and while there were performance gains to be had, it often wasn't as effective as just buying a better single GPU.
"The only thing that has me even thinking about the 50xx series is potential raytracing gains. My 3080 Ti is still plenty powerful for 4K/120fps, but the few times I've tried running raytracing at those resolutions, it feels like I may as well just use the integrated graphics on my CPU."

It also makes things like TAA very hard to do, since you need access to the previous frames, but in SLI most of the time one GPU would have one frame and another GPU would be drawing the other.
"It also makes things like TAA very hard to do, since you need access to the previous frames, but in SLI most of the time one GPU would have one frame and another GPU would be drawing the other."

In my opinion that's a feature.
"What the hell is the point of a card that's 30% faster while costing 30% more and using 30% more power?"

It's faster. People used to pay a huge premium for even less additional speed. If you don't need it, don't buy it. I'm looking forward to the 5080 series myself.
"SLI dying was more a result of developers not wanting to optimize games for it, since it added more overhead, and while there were performance gains to be had, it often wasn't as effective as just buying a better single GPU."

More than just that: once reviewers started benchmarking more than just average FPS, it became apparent that SLI's gains came at the cost of latency and consistency. So, not only was it rarely worth it in practice to buy a second old card instead of one new card, and not only did it require some level of developer support to work well, but it was just functionally worse than one fast GPU.
"I'm never, ever, ever going to spend $2k on a GPU, but damn if that isn't a good-looking card."

Yeah, it's a fascinating piece of engineering that I have no intention to ever buy.
"So the follow-up to the 4090, the 5090, has better performance than its predecessor? The article was informative, but the title needs work. It's essentially saying water is wet."

I mean, given what we saw with CPUs last year, specifying that it is in fact notably faster than its predecessor isn't totally irrelevant.
"It's kind of crazy how much larger this GPU is than the 4090 (in core count, area, and power budget) compared to its uplift. They must be hitting some sort of scalability wall in their design."

Seems to be power limited: 25-30% more performance, 25-30% more power consumption. I vaguely recall reading that both generations were made on the same TSMC process node, so I guess that shouldn't be too much of a shock.
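The scaling claim is easy to check as back-of-envelope arithmetic. The sketch below uses illustrative assumed figures (roughly the 4090's 450 W TDP versus the 5090's 575 W, and a ~28% uplift), not measured benchmark data:

```python
# If performance and power both rise by roughly the same fraction,
# performance-per-watt is essentially flat generation over generation.
# All figures are illustrative assumptions, not measurements.

def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Relative performance score divided by board power in watts."""
    return relative_perf / watts

baseline = perf_per_watt(1.00, 450)   # 4090 as the reference point
uplift   = perf_per_watt(1.28, 575)   # ~28% faster at ~28% more power

change_pct = (uplift / baseline - 1) * 100
print(f"Perf/W change: {change_pct:+.1f}%")  # essentially unchanged
```

With both quantities scaling together, the ratio barely moves, which is consistent with a same-node, power-limited design.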
"So DLSS FG can be useful for turning 120 fps into 240 fps, or even 60 fps into 120 fps. But it's not as helpful if you're trying to get from 20 or 30 fps up to a smooth 60 fps."

In other words: FG (Frame Generation) is only suitable for HFR (high-frame-rate) gaming.
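The base-frame-rate caveat can be sketched with a crude latency model. The one-frame-held-back interpolation and the flat 2x doubling below are simplifying assumptions for illustration, not DLSS's actual internals:

```python
# Crude model of why frame generation favors high base frame rates:
# interpolation doubles the *displayed* rate, but input response still tracks
# the rendered rate, and blending between two real frames means holding at
# least one full rendered frame back. Assumptions, not DLSS specs.

def frame_gen(base_fps: float) -> tuple[float, float]:
    base_frame_ms = 1000 / base_fps
    displayed_fps = base_fps * 2           # one generated frame per rendered frame
    latency_floor_ms = base_frame_ms * 2   # render a frame, then wait for the next
    return displayed_fps, latency_floor_ms

for fps in (30, 60, 120):
    shown, floor = frame_gen(fps)
    print(f"{fps:>3} fps rendered -> {shown:.0f} fps shown, latency floor ~{floor:.0f} ms")
```

In this model, 30 fps interpolated up to 60 still carries a roughly 67 ms latency floor, which is why the result looks smooth but doesn't feel like native 60.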
"For anyone curious, Der8auer has posted a great preliminary look at undervolting, seeing some pretty strong efficiency gains. Running the card at a 70-80% power envelope barely touches the performance in some titles. I'd just do that by default, honestly."

This is good to know. I remember people doing that with the 3090 to great effect.
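The efficiency math behind that kind of power-limiting is simple. The 96%-performance figure below is an assumed number for illustration, not one of Der8auer's results:

```python
# GPUs ship well past the knee of their voltage/frequency curve, so trimming
# the power envelope costs disproportionately little performance.
# Both numbers below are illustrative assumptions.

power_limit = 0.75   # card capped at 75% of stock board power
perf_kept   = 0.96   # assumed: 96% of stock performance at that cap

efficiency_gain_pct = (perf_kept / power_limit - 1) * 100
print(f"Perf/W improvement at {power_limit:.0%} power: {efficiency_gain_pct:.0f}%")
```

Even with only a few percent of performance lost, perf-per-watt improves by roughly a quarter, which is why power-limiting by default is attractive.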
"I can also see the writing on the wall here. This is going to be the future. If you insist on a zillion fps, then AI can get you there, and Nvidia can provide it. And future titles are going to support it."

Big disagree. It is the future, which all but guarantees future titles will take it for granted and require it to hit their frame-rate targets, which means the future of gaming is a laggy 15 fps with a glorified motion-smoothing filter.
I think it's impressive tech.
"I can get a space heater just as good for under $50."

*For varying values of "just as good"
"I am thinking this card is less about rich gamers, and more about enterprise AI data centers... OpenAI, Google, Meta, X, Tesla, etc. For them, money is no object and performance is king, right? On that note, as these AI companies upgrade, are they offloading lots of good GPUs from a year or two ago, cheap, on the secondary market?"

These companies use racks of liquid-cooled specialty GPUs at something like $80,000 apiece, six to a server, with the gobs of VRAM needed to run commercial models. They're too expensive for gamers even used, and the main reason NVIDIA constrains the VRAM on their gaming cards is to keep them from cannibalizing their commercial AI market.
"SLI dying was more a result of developers not wanting to optimize games for it, since it added more overhead, and while there were performance gains to be had, it often wasn't as effective as just buying a better single GPU."

Not to mention less stable. I had 2x GTX 970 in SLI and eventually just sold one, because even the games that supported it tended to crash more, and that was far more annoying than a poor frame rate, particularly when it ruined a multiplayer game. I only tried that one config, so YMMV, but I consider that purchase a mistake.