Next GPU releases: 2025 edition

IceStorm

Ars Legatus Legionis
25,451
Moderator
Hardware Unboxed is back with their monthly GPU pricing video:


View: https://youtu.be/6KmbZaoQTD0?si=N1AiP33-MBahUfJ_


He confirms what we've been hearing from other sources:

  • AMD and nVidia were both supposed to launch their cards much earlier: nVidia in November 2024, and AMD in January 2025.

  • AMD and nVidia both wound down existing generation orders with the earlier launch dates in mind.

  • The remaining stock in the channel was not enough to bridge the delay gap, which is why almost all the previous generation parts have sold out.

  • The lack of GPUs in the channel increased demand/FOMO.

Combine all this, and we have about a three month dearth of cards in the channel. There was no way nVidia or AMD could fill that demand at launch.

He then goes into specifics for AMD :

  • As per normal, AIBs built cards, and retailers bought those cards, using a placeholder MSRP. This happens with every new GPU launch. That placeholder MSRP is typically on the high end of whatever MSRP pricing window AMD comes up with. In this instance, the 9070 XT's MSRP was estimated to be around $700.

  • Normally, what AMD will do is give retailers rebates to offset whatever difference there is between the placeholder MSRP and the MSRP they decide to launch at. This rebate normally covers ALL card models.

  • With this launch, not only did AMD not offer rebates for all card models, they only offered rebates for a small volume of the initial card shipments. Overclockers.co.uk confirmed in a forum post that while they had 2000 Sapphire, 1000 Powercolor, and 1000 ASrock cards in stock, the MSRP was capped for only a few hundred cards. This confirms AMD did not rebate ALL cards, or even the majority.

  • Initial restocks will be at higher prices because there are no rebates from AMD, and the initial restocks were ordered before MSRP was set. This isn't very far off normal GPU launches, so things would all smooth out anyway in a month or so.

  • Going forward, things can go one of two ways:
    • On the good path, AMD sells the GPUs to AIBs at a price that enables the AIBs to hit MSRP pricing. Pricing may settle down in a month or so.
    • On the bad path, AMD continues to sell GPUs to AIBs at a price that exceeds MSRP, and continues using selective rebates to bring down the price on only some cards. Alternatively, AIBs and retailers scalp prices due to high demand, regardless of what AMD sells the GPUs to AIBs for. On this path, Hardware Unboxed believes it's safe to call the MSRP fake.
  • They reiterate Frank Azor's public statement that MSRP isn't fake. Whatever, Frank.

nVidia is a mess:

  • Retailers claim that supply is far too low.

  • nVidia held up their claim that they've shipped more units than the 40 series in the same timeframe.

  • Hardware Unboxed calls them on the elephant in the room: the number of models launched in that timeframe
    • 4090 was launched October 12, 2022
    • 4080 was launched November 16, 2022
    • 4070 Ti was launched January 5, 2023
    • 4070 launched six MONTHS after the 4090, April 13, 2023
    • The RTX 50 series launched the same four SKUs in a 5 week window, so a 2x increase in units sold doesn't impress.
  • Retailers say supply is lower than at the 40 series launches. The 5090 supply is particularly low compared to the 4090's. The 5080 is close to the 4080 launch supply, but the other two are below their 40 series counterparts.

  • Retailers are also reporting higher demand for less favorable models.

  • It's clear supply is far lower than it should be, but there are varying rumors as to why:
    • Silicon availability due to AI/professional products

    • GDDR7 not being readily available

    • Production issues (apparently why the 5070 FE is delayed)

    • The 5070 Ti having lower supply than the 5080 is probably down to there not being enough defective dies to feed 5070 Ti production, which would explain why 5080 supply is far better. Also, you know, the 5080 and 5070 Ti are effectively the same product from a manufacturing perspective, but the 5070 Ti sells for $250 less, so there's little reason to supply 5070 Tis over 5080s to the market.

    • There are also rumors of distributors selling GPUs into Singapore for the Singapore-to-China supply.

    • For the 5090 specifically, there are rumors that exporters are willing to pay waaaaay over MSRP for 5090 supply to sell to AI customers. This does not explain the lack of supply for cards lower in the stack.
  • None of this explains why, when there is supply, the prices are so high. The simple explanation is that nVidia doesn't care. He goes into the various methods that nVidia can use to get prices back to MSRP, from restricting supply of the GPUs to AIBs, to making more FE cards to provide pricing competition. nVidia isn't doing any of this.

  • There's still no explanation as to why prices are so high on card tiers above MSRP: 5070 Tis jump to $900 for the next tier of cards, 5080s to $1250, and the 5090 to $2600. You can't use the "AIB rebate" argument that AMD used, because nVidia announced prices well before retailers bought cards to put on shelves.

  • The argument that prices are high because nVidia wants them there may make some sense for the 5090 and 5080, where they have no competition, but it falls apart with the 5070 Ti and 5070, where AMD does compete.

  • The one exception is that the 5070's next tier of cards only goes up about 10% to about $600. For that one card, there is some hope that when it starts restocking, it won't be much over MSRP.

  • At this point, it's safe to assume that the MSRPs for the 5090 and 5080 are fake. New cards coming in are nowhere near MSRP. The 5070 Ti has no supply, so it's not even worth talking about.

It's a mess, and it's not going to get better anytime soon. They say wait for better pricing, particularly for the 5070 and 5070 Ti due to AMD competing.

My $0.02 is stalk Zotac's Open Box page for a 40 series, especially if you like PhysX in your games :)
 

IceStorm

Ars Legatus Legionis
25,451
Moderator
Warframe used to have a lot of PhysX particle effects that were pretty cool, even if they didn't actually mesh with the game and looked like they came from a different universe. Since they're live service, they phased them out many years ago (although the replacement might still be a software-only implementation of nvidia tech), and the things they replaced them with are more artistically substantial. None are as cool though.
You make it sound like they had a choice:

[attached screenshot]

nVidia dropped particle support. DE was left to build their own.

Guess they saw the writing on the wall back in 2018. Might explain why they haven't bothered to add many other vendor-specific features. They only support FSR 2.2 and whatever form of DLSS they have; they don't have a way to separately enable just the AA part (DLAA).
 

Xavin

Ars Legatus Legionis
30,551
Subscriptor++
For PhysX, there's got to be a short term fix. Games haven't been unplayable on AMD and Intel GPUs all these years, so maybe it's just the game detecting GPU Vendor as Nvidia and forcing PhysX on. Feels like there'll be an easy way to work around that.
As far as I am aware it's people intentionally turning it on. It does make a big difference in how things look in a few games (emphasis on few), but you can always turn it off to get the game to play. And it has indeed run like shit on AMD and Intel GPUs since the beginning.

I assume Nvidia made a hard cutoff because it was a big drain to drag that old code around and there were probably security issues they didn't want to/couldn't fix. The application I work on for work is old and has compatibility with stuff up to 20 years old and the main reason we get rid of old features/code is because one of our security scanners starts screaming about a new exploit and it's not worth the dev time to update it. At that point it's policy, we have to fix it or remove it, we aren't given the option to delay because we don't have time to deal with it properly right now. This feels like that kind of situation.

I go through phases of consuming this stuff, but the more I watch, the more I feel like the problems with the consistency of their viewpoints come down to feeding the YouTube beast so they can make their living.

The Q&A clips are pretty painful to watch some of the time as they parse the blahblahblah in real-time.

I mean DF is only like 4-5 full-time guys and some part-timers, covering PC hardware, consoles, retro, and games on all of it. Stuff is going to slip through the cracks. Same goes for GN, HU, and the part of LTT dedicated to PC hardware. There are still no better options, they are all miles ahead of any traditional media reviews on stuff, who these days typically have much less knowledgeable people doing reviews with far less time and resources.

The differences were pretty impressive. And just watching that video of the Hitman level with deformable walls made me think, gee, why aren't games doing that today very often?
It's not because games can't (the physics systems available today are much better than back then and could do it); it's because it's a whole lot of dev work for something that's not core to the game. Once you allow deformable/destructible things you take away one of the core assumptions of the game, that your level geometry is static, and have to start thinking about how to keep players in-bounds, what to put behind things that are destroyed so you're not looking out into the skybox, etc. What if the player destroys the stairs they need to get to the next area? What if an interactable key object falls down into a hole they blasted and they can't pick it up anymore? It's possible to work around those issues, but it's a huge amount of dev and QA work unless your game is centered around destructibility. For most cases these days, having a few pre-planned destructible objects is good enough.

There's also the fact that those old PhysX effects certainly looked cool, but they weren't realistic in the slightest. They look very early 00s graphics demo, where having a whole bunch of particles on the screen moving in a semi-realistic way without slowing down the framerate was impressive. We have kind of moved past that. Games either want way more realism or go for more abstract hand crafted particle effects.

They only support FSR 2.2 and whatever form of DLSS they have; they don't have a way to separately enable just the AA part (DLAA).
They are going to support the newer stuff, they just haven't had the time yet. Their engine devs have been working on some stuff to get the game running on Android and they are also switching the lighting of the game to GI, so they have been busy.
 
... Nvidia isn't big on keeping things proprietary. I'm honestly not even sure they care much about feature lock-in, they are just the ones with all the R&D and researchers that come up with this stuff, so they build hardware to run it and have a natural monopoly until everyone else catches up.

PhysX and G-Sync were blatantly obvious lock-in tactics using discrete hardware. Until now, I've never seen even the staunchest Nvidia promoters argue otherwise. Just wow.

Thanks for that synopsis.

As for Zotac, they're listing an open box 4070ti ... for $864.99. :(
 

IceStorm

Ars Legatus Legionis
25,451
Moderator
PhysX and G-Sync were blatantly obvious lock-in tactics using discrete hardware.
Eh, that's a gross generalization.

PhysX, they bought the company that made it, and that company had it running on discrete hardware at the time. Continuing that until such time as they could port it over to more modern instructions? Not sure if I'd call that lock-in or just lack of time to do the porting. This is clearly laid out in GN's video because they note that later versions of PhysX 2.x supported SSE, and PhysX 3.0 supports multiple threads. Later versions of PhysX work fine on CPUs; it's only the very old stuff that is stuck using x87 code when on CPU that is a problem.
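To illustrate the x87-versus-SSE point, here's a toy sketch (my own, not actual PhysX code) of the kind of per-particle math a CPU fallback has to grind through. The C is the same either way; what changed across PhysX versions is how it gets compiled and scheduled. With gcc on 32-bit x86 you can compare the legacy path (-mfpmath=387) to the SSE path (-msse2 -mfpmath=sse), and PhysX 3.x additionally spreads this kind of loop across worker threads.

```c
/* Toy particle integration, not PhysX code. Compile both ways and compare:
 *   gcc -O2 -m32 -mfpmath=387 toy.c          (x87 stack FPU, PhysX 2.x-era style)
 *   gcc -O2 -m32 -msse2 -mfpmath=sse toy.c   (SSE scalar/packed floats)
 */
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

/* One Euler step for n particles: v += a*dt; p += v*dt. */
static void integrate(vec3 *pos, vec3 *vel, const vec3 *acc, int n, float dt)
{
    for (int i = 0; i < n; i++) {
        vel[i].x += acc[i].x * dt; vel[i].y += acc[i].y * dt; vel[i].z += acc[i].z * dt;
        pos[i].x += vel[i].x * dt; pos[i].y += vel[i].y * dt; pos[i].z += vel[i].z * dt;
    }
}

int main(void)
{
    enum { N = 4 };
    vec3 pos[N] = {{0, 0, 0}}, vel[N] = {{0, 0, 0}};
    vec3 acc[N] = {{0, -9.8f, 0}, {0, -9.8f, 0}, {0, -9.8f, 0}, {0, -9.8f, 0}};

    for (int step = 0; step < 60; step++)        /* simulate one second at 60 Hz */
        integrate(pos, vel, acc, N, 1.0f / 60.0f);

    printf("particle 0 fell to y = %.2f\n", pos[0].y);
    return 0;
}
```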

G-SYNC, they made their own display controller because nothing on the market did what the G-SYNC module did. To this day, monitors that are "G-SYNC Compatible" still force the user to select an overdrive mode, something actual G-SYNC display controllers do all on their own.

As for Zotac, they're listing an open box 4070ti ... for $864.99. :(
That's not a 4070 Ti, that's a 4070 Ti Super, which has 16GB of VRAM. Zotac knows what they have and has been raising prices accordingly. Still a better deal than you're going to find from scalpers, and Zotac has a lot of limiters in place to stop scalpers from gobbling all their cards up.

Their Open box offerings are now going for around their original MSRPs (Zotac's 40-series cards for the 4070 Ti and up are fairly beefy affairs, even the Trinity).
 

Xavin

Ars Legatus Legionis
30,551
Subscriptor++
PhysX and G-Sync were blatantly obvious lock-in tactics using discrete hardware. Until now, I've never seen even the staunchest Nvidia promoters argue otherwise. Just wow.
PhysX in the beginning 100% needed custom hardware to do what it did. There's no way to argue around that. It took 5+ years and the advent of compute cores and advanced shader hardware in GPUs before complex physics was something that could run on any card.

G-sync also required special hardware to run well. The early Freesync stuff worked really poorly and had a bunch of limitations that G-sync with the special chips didn't have. Eventually the Freesync spec adopted enough of the G-sync tech, and the MediaTek display chips everyone uses in their monitors and TVs added it, so the custom Nvidia chip wasn't necessary for a good experience anymore. Even so, stuff still works better with the actual G-sync chips.

You are mistaking custom hardware features for intentional lock-in, when those features are such a good idea that everyone else eventually comes up with ways to do the same thing. They wouldn't have become universal and able to run everywhere if Nvidia hadn't come up with the custom hardware first. It's the same for a whole lot of other GPU features too. RT in the form we use for games is entirely an Nvidia creation, as is realtime upscaling using game data, not just finished frames, and a bunch of other less gamer-facing features. If it was just artificial lock-in, it would be easy for AMD and Intel to catch up, but instead it takes everyone years, because this stuff is hard and Nvidia is doing truly new cutting edge things that require special hardware.
 

Daneel

Ars Legatus Legionis
11,250
Subscriptor
If it was just artificial lock-in, it would be easy for AMD and Intel to catch up, but instead it takes everyone years, because this stuff is hard and Nvidia is doing truly new cutting edge things that require special hardware.

I haven't seen anyone suggest it was artificial lock-in. My definition of that is there's no real functional advantage; the "feature" only exists for the purpose of lock-in.
 

IceStorm

Ars Legatus Legionis
25,451
Moderator
So if you have Thunderbolt, an eGPU enclosure, and an older nVidia GPU (eVGA 3060 Ti), you can "add" PhysX support that way.

The only downsides are that Windows wanted to prefer the 3060 Ti over the Arc B580 (can fix that via a Windows setting), and for some reason the Thunderbolt connection goes missing on a system update. A proper restart seems to resolve the missing Thunderbolt dock (a Razer Core).

I have the Arc B580 paired with a 12600K on a MSI Z690I Unify. Now my B580 can run Arkham City and Arkham Asylum in all their PhysX glory :)

Sadly, the RX 9070 is paired with a 5800X3D, and I never bought the single-m.2 ASRock X570 Phantom Gaming ITX/TB3. Might have to figure something else out for it. mATX might be an option, but AM4 mATX options that can accommodate a triple slot GPU and another x16 PCIe card are somewhat sparse. I found one board, but the extra PCIe slots are only x1. Turns out the original Ageia PhysX card ran over a PCI 2.0 (PCI, not PCIe) connection, and those maxed out at 500MB/sec. A PCIe 3.0 x1 slot is 985MB/sec, so it should work? :)
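Back-of-the-envelope check on that (assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding, ignoring packet overhead, and taking the ~500MB/sec PCI figure above at face value):

```c
#include <stdio.h>

int main(void)
{
    /* PCIe 3.0: 8 GT/s per lane, 128b/130b line encoding, packet overhead ignored. */
    double pcie3_x1_MBps = 8e9 * (128.0 / 130.0) / 8.0 / 1e6;  /* payload bytes/s per lane */
    double legacy_pci_MBps = 500.0;                            /* the old PCI figure quoted above */

    printf("PCIe 3.0 x1 : ~%.0f MB/s\n", pcie3_x1_MBps);       /* ~985 MB/s */
    printf("legacy PCI  : ~%.0f MB/s\n", legacy_pci_MBps);
    printf("headroom    : ~%.1fx\n", pcie3_x1_MBps / legacy_pci_MBps);
    return 0;
}
```

So a PCIe 3.0 x1 slot has roughly 2x the bandwidth the original Ageia card ever had available.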

And it turns out for those really old titles, you need an extra GPU no matter what. Borderlands 2 and Metro 2033 don't see a benefit, but the rest saw 20-90% speedups by adding a 3050 alongside a 4090. Watch Newegg catch wind and start offering GPU+GPU bundles to offload those crappy 3050 6GB cards no one wants.

P.S. Newegg had a Shuffle today. Card prices were so-so. There were no MSRP AMD cards. There was one MSRP 5070 Ti, and two MSRP 5070s.
 

Beautiful Ninja

Ars Tribunus Angusticlavius
7,262
Managed to score a 5090 FE through the Nvidia Verified Priority Access queue. With how nutty pricing is right now with GPUs, I can sell my RTX 4090 for nearly what I paid for it brand new.

Outside of the obvious performance benefits, I've also noticed some functionality improvements thanks to the increased bandwidth on the display outputs.

DSR/DLDSR now works on my high resolution/refresh rate monitors; this used to be disabled. Enabling DSC apparently forced a single port to use 2 ports worth of bandwidth on older cards, so enabling DSC with multiple monitors effectively meant only being able to use 2 monitors plugged into the GPU. Now I can also have full bandwidth/refresh rate and my PSVR2 still works. An extremely first world problem, but one that is now solved. I can now fulfill my nerd dream of being able to swap between my 4K/240 16:9, Super Ultrawide 240hz and a VR headset depending on the game.
 
That these tactics originally required hardware discrete from your graphics card was supposed to make the lock-in more obvious. As opposed to, say, RT Cores on Turing. Again, just wow.

Don't want to waste money on a separate physics hardware accelerator? Just stick with Nvidia over two upgrades, and use your old Nvidia GPU for PhysX! Paid extra for an original G-Sync monitor? You don't want to waste that by buying an AMD card, do you?

The fact that the lock-in was ephemeral, having been defeated or outlived its usefulness, is also irrelevant. They would lock you in for life, if they could.
 
the Nvidia Verified Priority Access queue.
Finally all this water carrying for Jensen + shareholders from others (not you, congrats on the score, genuinely) is starting to make sense!

You have to prove your worth as a "verified GeForce gamer" to be "selected" for NVPA; what better way than a litany of forum posts pretending that Nvidia cares about gamers and doesn't engage in anti-consumer business practices?
 

Xavin

Ars Legatus Legionis
30,551
Subscriptor++
Finally all this water carrying for Jensen + shareholders from others (not you, congrats on the score, genuinely) is starting to make sense!

You have to prove your worth as a "verified GeForce gamer" to be "selected" for NVPA; what better way than a litany of forum posts pretending that Nvidia cares about gamers and doesn't engage in anti-consumer business practices?
It just requires having an Nvidia account, which nearly everyone has as you needed it for the old GeForce Experience; you don't even have to actually own an Nvidia card, so you can get off your high horse.

The fact that the lock-in was ephemeral, having been defeated or outlived its usefulness, is also irrelevant. They would lock you in for life, if they could.
If Nvidia wanted to lock everyone in, they absolutely could. They could lock AMD and Intel completely out of upscaling, AI, and RT with patents and force all big devs into exclusivity agreements if they wanted driver support for their games (which they give to everyone for free right now). They have developed almost every important graphical advancement in the past 20 years, and they could have kept all that stuff to themselves, but instead they published public papers on most of their research, aren't interested in patent enforcement, open source lots of stuff, and work with the standards bodies to enable everyone to use it.

It's frankly a good thing Nvidia hit the jackpot with AI and are untouchably big, because if someone like Broadcom or Oracle bought them, we would all be truly fucked.
 
That's not how the patent system works.
Right, patents don't protect ideas (nor does copyright); they protect a specific method of doing a particular thing. And the courts don't like patents of "do existing process X, but with a computer", so it's unlikely that NVidia would prevail with "do upscaling, but with AI"... not least because they can't precisely describe the machine that does the upscaling, since they won't actually know exactly how it works.

They definitely cannot patent "do upscaling", any more than someone could patent "mow grass".
 

Aeonsim

Ars Scholae Palatinae
1,159
Subscriptor++
If Nvidia wanted to lock everyone in, they absolutely could. They could lock AMD and Intel completely out of upscaling, AI, and RT with patents and force all big devs into exclusivity agreements if they wanted driver support for their games (which they give to everyone for free right now).
3D graphics, upscaling, ray-tracing, and AI are all things that predate NVIDIA by decades; they would have massive issues trying to block people from developing accelerators and frameworks for those. Also, becoming a total monopoly is a quick way to get broken up and to attract lots of attention from governments all around the world.
 

Scandinavian Film

Ars Scholae Palatinae
1,378
Subscriptor++
Right, patents don't protect ideas (nor does copyright); they protect a specific method of doing a particular thing. And the courts don't like patents of "do existing process X, but with a computer", so it's unlikely that NVidia would prevail with "do upscaling, but with AI"... not least because they can't precisely describe the machine that does the upscaling, since they won't actually know exactly how it works.

They definitely cannot patent "do upscaling", any more than someone could patent "mow grass".
And they absolutely do patent their particular implementations and methods of upscaling, AI, and ray tracing.
 
I dunno. If RED (which sued Nikon, so Nikon bought them) can control video compression and lock out camera companies, except for BlackMagic, which had a grandfather claim of some sort... maybe that's apples and oranges, but I certainly understand and appreciate anyone with a chicken little mindset when it comes to this stuff.

https://petapixel.com/2023/01/25/re...ing-the-camera-industry-with-patent-trolling/

At any rate, I'm as skeptical of anyone who stands up and says they have a handle on this without being employed as a patent lawyer as I am of the people running around like chickens.
 

IceStorm

Ars Legatus Legionis
25,451
Moderator
MLID had his February Loose Ends livestream (yeah, it's mid-March).

Sounds like some pretty somber things going down in the industry:

  • The lack of GPU volume (nVidia) means AIBs are heading towards massive layoffs.
  • Unless by some miracle there's a carve-out for GPUs, tariffs are probably going to drive the RX 9070 and RX 9070 XT to $700/$800 for "MSRP" models.
  • He doesn't expect the GPU shortages to end before next year.
  • He thinks the AI bubble is starting to deflate, which is why nVidia's ignoring gaming for the moment - they want to make as much AI money as possible. He believes the tariffs are driving panic buying with AI parts. Before the tariffs? Lead times on AI cards were going down, not up.
  • There is 6nm capacity at TSMC. He'd like to see AMD start pumping out RX 7600 XT 16GB at $299.
  • AMD will continue to restock RDNA4, but it won't be enough.
  • Navi 48 Ultimate? If it turns out as well as he's hearing? Yeah, it'll be expensive (RTX 5080 pricing), but it will be decently faster.

It's... it's not going to be a good year for anything. If you need it, buy it now.
 

NervousEnergy

Ars Legatus Legionis
10,994
Subscriptor
It just requires having an Nvidia account, which nearly everyone has as you needed it for the old GeForce Experience, you don't even have to actually own an Nvidia card, so you can get off your high horse.
Totally off the conversation that prompted this post, but I greatly appreciate you posting that. I hadn't signed up for the Priority Access program as I had never (I thought) created an nvidia account, but your comment here made me realize I must have at some point... and sure enough, I had an account. Last sign-in was 8 years ago but fortunately I had used one of my common PWs and was able to get in, update the email address (and add MFA), and register for it. Thanks!

I'm not terribly interested in a 5090, but given that I can sell my 4090 Suprim Liquid for MORE than the MSRP of a FE card right now I don't know why I wouldn't take the essentially free upgrade if it's offered.
 

mpat

Ars Tribunus Angusticlavius
6,242
Subscriptor
G-sync also required special hardware to run well. The early Freesync stuff worked really poorly and had a bunch of limitations that G-sync with the special chips didn't have. Eventually the Freesync spec adopted enough of the G-sync tech, and the MediaTek display chips everyone uses in their monitors and TVs added it, so the custom Nvidia chip wasn't necessary for a good experience anymore. Even so, stuff still works better with the actual G-sync chips.

I could argue a lot of this in detail - in particular that Freesync comes from Panel Self Refresh in embedded DisplayPort 1.3, a standard that came out before G-sync, so the idea that Freesync adapted some G-sync tech is ludicrous on its face - but I won't because it is ancient history at this point and nobody cares. This is how G-sync ended. It did exactly what Nvidia wanted - it was a USP when PhysX didn't cut it anymore, and then they moved on when they had a new USP - DLSS. This is how DLSS will end as well. It is only a question of how long it takes to get there.

(Sidenote: I don't think that original G-sync module would even be legal in the EU these days. Apparently it draws 14W while turned off, which would exceed the max power draw by a lot. I can't find exactly when that requirement was tightened last time - I know the original version came in in 2006, but they keep tightening it, and I can't find out when they included displays in standby.)
 

IceStorm

Ars Legatus Legionis
25,451
Moderator
in particular that Freesync comes from Panel Self Refresh in embedded DisplayPort 1.3, a standard that came out before G-sync
FreeSync was based off a 2009 VESA standard introduced as a power saving solution. It enabled reducing the refresh rate for mobile devices to match the framerate of the video being played. It was never a spec to change refresh rates on the fly to match a game's ever-changing frame rate. It was never intended for gaming.

You may say this is old history, but AMD themselves allow their moniker to be used, to this day, on displays with a "variable" refresh rate of 40-60hz. AMD banked on the commodity hardware improving over time such that variable refresh rate ranges too narrow for Low Framerate Compensation would "go away" on their own. Well, it's 2025, and Samsung is still selling this junk.

The G-SYNC module was designed, from the ground up, to be a high performance gaming display controller. It handles LFC all by itself. It handles overdrive all by itself. The "G-SYNC Compatible" monitors we have today are commodity display controllers that have marginally caught up to handle the sync range required for LFC, and that LFC has to be done on the GPU, not right at the display like on the dedicated module. The commodity solutions still require manual overdrive tuning.
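For anyone wondering what LFC actually does, here's a rough sketch of the frame-multiplication math (my own simplification, not any vendor's driver code, and real implementations also do frame pacing on top of this). It also shows why a range where twice the minimum refresh exceeds the maximum can't do LFC at all:

```c
#include <stdio.h>

/* Low Framerate Compensation, simplified: when the game's frame rate drops
 * below the panel's minimum VRR refresh, resend each frame n times so the
 * effective refresh lands back inside [vmin, vmax]. This is always possible
 * only when vmax >= 2 * vmin, which is why narrow ranges can't do LFC. */
static int lfc_multiplier(double fps, double vmin, double vmax)
{
    if (fps >= vmin)
        return 1;                         /* already inside the VRR window */
    for (int n = 2; n <= 16; n++) {
        double effective = fps * n;
        if (effective >= vmin && effective <= vmax)
            return n;
    }
    return 0;                             /* no multiplier fits: LFC impossible */
}

int main(void)
{
    printf("30 fps, 48-240 Hz panel: x%d\n", lfc_multiplier(30, 48, 240)); /* x2 -> 60 Hz */
    printf("20 fps, 48-240 Hz panel: x%d\n", lfc_multiplier(20, 48, 240)); /* x3 -> 60 Hz */
    printf("40 fps, 48-75 Hz panel:  x%d\n", lfc_multiplier(40, 48, 75));  /* 0: 80 Hz overshoots */
    return 0;
}
```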
 

Xavin

Ars Legatus Legionis
30,551
Subscriptor++
3D graphics, upscaling, ray-tracing, and AI are all things that predate NVIDIA by decades; they would have massive issues trying to block people from developing accelerators and frameworks for those. Also becoming total monopoly is a quick way to get broken up and lots of attention from governments all around the world.
The concepts predate Nvidia, but the actual implementations that let them run on consumer GPUs in realtime are almost entirely Nvidia. Regardless of how you feel about software patents, they do have them; they just don't enforce them. There are also a lot of hardware patents involved that aren't controversial at all. They have enough money and patents that they could trip everyone else up in court forever, regardless of whether they actually win in the end. We have seen it happen with companies that have more dubious patents.

As far as breaking up monopolies, that's not been going well lately. I'm also not really sure how you would even break up Nvidia, since the thing they do that makes the vast majority of their money is make GPUs. They have some other stuff, but it's inconsequential compared to AI, compute, and gaming GPUs, which are all basically the same thing.

My point isn't that Nvidia is some perfect company that's the consumer's best friend, but that if they wanted to be anti-consumer and evil they have many levers they could pull which they explicitly don't.
 

Demento

Ars Legatus Legionis
14,479
Subscriptor
3Dfx, VideoLogic, Rendition, ATI, 3DLabs, ... and after all these, finally ... Nvidia.

Please get your history straight. Thanks.
Shoulders of giants. All of their stuff (bar ATI) would be unrecognisable in today's GPU pipeline, but was necessary to get here and furthermore based on research back in the early 80s, where the notion of programmable shaders comes from.
 
The "G-SYNC Compatible" monitors we have today are commodity display controllers that have marginally caught up to handle the sync range required for LFC, and that LFC has to be done on the GPU, not right at the display like on the dedicated module. The commodity solutions still require manual overdrive tuning.
First I've heard of that being a concern. Is there any actual downside to LFC being on the GPU and not the dedicated module? Does it impact frametimes or latency or anything at all?

Edit: Just saw the mod note, I can make a new thread if this is a derail.

This likely won't be a widely viable option, but the nvidia marketplace queue came through for me with a 5080 this week.
Do you own a previous 80-class card that Nvidia would know about on the same account that you used for the lottery? Widely reported (but not proven) that the 5090 lottery primarily picks people who own a 4090 on that same account.
 

NervousEnergy

Ars Legatus Legionis
10,994
Subscriptor
The G-SYNC module was designed, from the ground up, to be a high performance gaming display controller.
So I'm curious if I'll notice much (if any) difference between my current 5 year old G-Sync monitor (Alienware AW3418DW) and a newer 'G-Sync compatible' monitor, given that a 4090 is driving it. I'd like to switch to OLED, but this G-Sync Alienware has run very well with dedicated VRR hardware. There are very few monitors with G-S Ultimate designation, and none that I'm terribly interested in for a size/format upgrade. Some quick research shows a variety of opinions.
 

evan_s

Ars Tribunus Angusticlavius
6,383
Subscriptor
So I'm curious if I'll notice much (if any) difference between my current 5 year old G-Sync monitor (Alienware AW3418DW) and a newer 'G-Sync compatible' monitor, given that a 4090 is driving it. I'd like to switch to OLED, but this G-Sync Alienware has run very well with dedicated VRR hardware. There are very few monitors with G-S Ultimate designation, and none that I'm terribly interested in for a size/format upgrade. Some quick research shows a variety of opinions.

Assuming it's actually a good monitor and not just a normal screen they slapped FreeSync on, you should be fine. The basic FreeSync spec has no real requirements other than supporting variable refresh rates, which is where you get the ones with really low ranges, poor quality, and other problems, so check the specs carefully on those ones. AMD FreeSync Premium and AMD FreeSync Premium Pro add some actually meaningful requirements like G-Sync had, so anything that specifically notes one of them is probably good.

https://en.wikipedia.org/wiki/FreeSync#FreeSync_tiers
 

IceStorm

Ars Legatus Legionis
25,451
Moderator
So I'm curious if I'll notice much (if any) difference between my current 5 year old G-Sync monitor (Alienware AW3418DW) and a newer 'G-Sync compatible' monitor, given that a 4090 is driving it.
If you move to G-SYNC Compatible, you'd be giving up LFC and automatic overdrive control for a display that will burn in if used for productivity.

Unless there is a specific reason to switch to OLED, like HDR gaming or better blacks, I wouldn't change the monitor. I definitely wouldn't swap it out if you use it for productivity for a substantial part of the day.

nVidia has a list of supported monitors. You can filter by a bunch of options to find what you're looking for if you really want to upgrade.
 
So I'm curious if I'll notice much (if any) difference between my current 5 year old G-Sync monitor (Alienware AW3418DW) and a newer 'G-Sync compatible' monitor, given that a 4090 is driving it. I'd like to switch to OLED, but this G-Sync Alienware has run very well with dedicated VRR hardware. There are very few monitors with G-S Ultimate designation, and none that I'm terribly interested in for a size/format upgrade. Some quick research shows a variety of opinions.
Honestly, I haven't noticed any VRR difference between real-chip GSync and GSync compatible.

I really, really like my AW3423DWF, but you should read up on the restrictions of OLED. It's not a great solution for a daily productivity/work monitor.
you'd be giving up LFC and automatic overdrive control
Overdrive isn't a thing for OLED.
 

MadMac_5

Ars Praefectus
3,836
Subscriptor
FYI for any Canadians looking for a 9070 XT, the Gigabyte Gaming OC version is in stock at Memory Express's online store for $960 right now. The stock as of this post says "10+," but I have no idea if that's 11 or 50 or how long they will last. One of their Calgary stores also shows 10+ of that particular card, so it must be a drop at their flagship store while the other stores like the one near me in Winnipeg continue to sit with bare shelves. If I didn't already have a 3060 Ti and a big ol' backlog of games, I'd probably have ordered one online two minutes ago.

EDIT: Aaaaand, it's gone.
 
He wouldn't be giving up LFC; freesync supports that just fine. Unless you meant that specific monitor doesn't support it somehow, in which case I wouldn't buy it under any circumstances. I don't see how that's even possible, though, other than old pieces of crap where 2x the min refresh is higher than the max. Like 48–75Hz or something.
 

NervousEnergy

Ars Legatus Legionis
10,994
Subscriptor
Unless there is a specific reason to switch to OLED, like HDR gaming or better blacks, I wouldn't change the monitor. I definitely wouldn't swap it out if you use it for productivity for a substantial part of the day.
Blacks, more vibrant picture, and a much higher refresh rate (at 120 max on this old Alienware - would prefer double that.) It is used a good bit for non-gaming tasks (not sure I'd count posting on Ars, reading Kindle, and slicing 3D models as being 'productive'), but gaming is a primary driver.

Specific monitors I've been watching (for quite a long time) are the 49" panels (Samsung G93SC, MSI 491CQPX) and LG 39" 21:9 panel.

I have no plans to switch to AMD GPUs anytime in the near future unless they start competing at the 4090/5080+ level, so the effectiveness of any monitor to work with VRR on an Nvidia GPU is key.
 

mpat

Ars Tribunus Angusticlavius
6,242
Subscriptor
First I've heard of that being a concern. Is there any actual downside to LFC being on the GPU and not the dedicated module? Does it impact frametimes or latency or anything at all?

Edit: Just saw the mod note, I can make a new thread if this is a derail.
Is the note "stop talking about VRR displays" or "stop talking about stuff that happened a decade ago"? Because VRR displays are somewhat relevant to the topic, while my little historical dumpster diving was only to provide another example to explain my point about Nvidia now needing a new feature to replace DLSS in their marketing. I can keep quiet about the origins of Freesync if that is not wanted - we have had that discussion before in any case.

Anyway: the only downside I see to having LFC in the GPU instead of in the display is that doing the compensation in the display will use half as much bandwidth in the display interface. I don't think that matters in any reasonable situation - lowest DisplayPort version applicable is 1.2, which can handle 4K@90Hz or so, so it would only really come into play if you wanted to run 4K@120Hz on a card from before 2016 (when they all got DP 1.4). That seems… ambitious.
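For reference, the rough math behind that 4K@90Hz figure, assuming DP 1.2's HBR2 link (4 lanes x 5.4 Gb/s, 8b/10b encoding) and ignoring blanking/audio overhead:

```c
#include <stdio.h>

int main(void)
{
    /* DisplayPort 1.2 (HBR2): 4 lanes x 5.4 Gb/s raw, 8b/10b encoding. */
    double payload_bps    = 4 * 5.4e9 * (8.0 / 10.0);   /* ~17.28 Gb/s usable */
    double bits_per_frame = 3840.0 * 2160.0 * 24.0;     /* 4K, 8 bpc RGB, active pixels only */

    printf("DP 1.2 payload : %.2f Gb/s\n", payload_bps / 1e9);
    printf("max 4K refresh : ~%.0f Hz (before blanking overhead)\n",
           payload_bps / bits_per_frame);                /* ~87 Hz, i.e. "4K@90 or so" */
    return 0;
}
```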