5700X3D upgrade?

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
Hmm, sounds like the theatre audience trash mobs? One of my all-time favourite dungeons. The music and atmosphere were great; the architecture of the place itself told stories and lore. The NPCs tied into so many story threads from the classic questing experience. Karazhan was a lovingly crafted adventure, not just "content" cranked out routinely by a crowd of overworked people with little creative freedom.

Anyway, I think you present a nice rationale why nobody at Blizzard would ever feel that this scenario needs any more performance tuning. But the point still stands that such crowd situations are single thread limited. And IMHO any performance tuning here is still valuable, because the stuttering is a slippery slope. Sometimes the catastrophic fail pull would have actually been survived if the players' computers hadn't slowed down to a crawl.
The contention was that this is a single threaded game which has not been updated in a while and is horribly CPU limited. I do not think anything about it is CPU limited on a modern computer, though, in a way that modernization would fix. It does use multiple threads where it can benefit, such as rendering the world (it is a modern DX12 graphics engine, using multiple threads.)

There are a few places you can pick up some performance with a better system, but mostly it runs well even on quite old and not so powerful hardware. I do not think complaints that they have not modernized it are fair; there are a lot of signs they do optimize and modernize the engine over time.

Let us look at those situations where the frame rate is not high on almost any computer.

The theater packs are what it looks like he is pulling from his screenshot for the combat test. He has a video though, which I suppose I can watch in a corner as it is not too long. Looking at it, it is everything from the beginning of the instance, the ballroom, banquet hall, etc. all the way up through the theater, all at once. That is a lot of mobs; he means it when he calls it a worst case combat scenario.

The potato is still getting almost 60fps there, and scaling looks like it is based purely upon memory generation. I do not think there is anything they can do in terms of threading it better; the hundreds of mobs all doing things and needing updates are going to keep it memory bound. The X3D chip has a tiny impact on frame rate, as does CPU frequency (what should be a slower processor is actually winning slightly in fps: the 12400F is on top by a small amount over both a 12700 at a higher frequency and a 5800X3D with more cache). The working set wildly exceeds any cache size, I assume, and it is purely down to how many memory accesses can be serviced at that point. That gets slightly faster with memory generation, or server platforms with more channels, not CPU speed. CPUs with more cache and more frequency are not winning this benchmark; it is purely memory.

Threading cannot solve a problem like that, it would just block more cores for no gain. Optimizing something like that further means not allowing independent actions for each mob, but this is not a swarm shooter, and that is likely not a good idea for the kind of game this is, unless they have something specific in mind for an event.

That has not improved massively over time, as memory latency has barely moved. Transfer rate has, and that can be seen to some degree in the benchmarks, but it is mostly a latency limited task once you produce a mob pile of that scale. More interesting would be how many mobs you can pull before the rate drops, both with and without an X3D sized cache, as that likely does matter.


Wading through hundreds of players to the bank is similar, although it does show a big jump in frame rate for the X3D chip (but not CPU speed; a slower single thread CPU of a similar generation wins this again by a small margin if we discount the X3D chip, which has a massive jump in rate.) The assumption there is that they can pick up some locality of reference, as many of those players will just be sitting there, so it is not updating everything all the time and it can find that data in the cache with some frequency. Loading delays as it brings in newly seen equipment and such kill the 1% low on everything they test in this scenario, and are likely a major drag on the average rate.


The heavy combat test especially says number of cores, amount of cache (including X3D), and CPU frequency are all basically irrelevant to the results. It only cares about memory latency.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
What graph are you looking at? 1% lows going from 16.9fps (2200G) to 39.9fps (5800X3D) is a more than doubling of performance. The average also more than doubles, from 40.1fps to 91.1fps.

If I loaded into an MMO's hub location, and my framerate was dropping into the teens, I'd log out and wouldn't return until I had upgraded.

You're also glossing over the actual text written. The weaker systems were the ones with lots of stutter.

It's clear Zen 3 (5900X) and Zen 3 with vcache (5800X3D) are both fine, with vcache resulting in a 25% boost to average fps, but only an 8% boost to 1% lows. That tracks with the kind of boost I said a vcache CPU would provide - one performance tier.
The same one you are, but I noted the 5900X only getting 70 fps as the important part. It goes from 40 to 70 over several generations of CPU, does not benefit at all from a higher frequency or more cores in the same generation, and adds another 20 frames from having an X3D cache (which is more than one generation of CPU).

Updating hundreds of players is memory bound; all they can really do is put fewer players in an area, or update them less frequently.

When it is not memory bound from hundreds of entities around, you can see the multi threaded rendering pull out hundreds of frames per second on moderate hardware, which is competitive with the game engines of twitch FPS titles (except that they do not do mouselook at 240, so you only benefit if you are still or running forward only; it samples view position at a lower rate).
 
Updating hundreds of players is memory bound; all they can really do is put fewer players in an area, or update them less frequently.
You could be on to something. Memory bound does sound very plausible for data that could very well be organized as linked lists, or programmed as an object oriented design. A long time back, when I looked into the design of a few text based "multi user dungeons", object orientation was a natural and powerful design paradigm for the respective game engines, and it can lead to lots and lots of cross referencing with pointers.
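As a hypothetical sketch of that cross-referencing point (purely illustrative Python; the `Mob` class and its fields are invented here, not WoW's actual structures), compare an object graph against a contiguous layout for per-entity state:

```python
import array

class Mob:
    """Object-oriented layout: every attribute access chases a pointer,
    and `target` drags in another object from a random heap address."""
    def __init__(self, x, hp):
        self.x = x
        self.hp = hp
        self.target = None

def update_objects(mobs, dt):
    # Each mob lives wherever the allocator put it, so with hundreds of
    # them the working set scatters across RAM and misses pay full latency.
    for m in mobs:
        m.x += dt
        if m.target is not None:   # extra dependent lookup per mob
            m.target.hp -= 1.0

class MobArrays:
    """Data-oriented layout: one contiguous array per field, so an update
    pass walks memory sequentially and the prefetcher can hide latency."""
    def __init__(self, n):
        self.x = array.array('d', [0.0] * n)
        self.hp = array.array('d', [100.0] * n)

def update_arrays(state, dt):
    for i in range(len(state.x)):
        state.x[i] += dt
```

Both update passes do the same logical work; the difference is only where the bytes live, which is exactly what a latency-bound profile cares about.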
 
If I loaded into an MMO's hub location, and my framerate was dropping into the teens, I'd log out and wouldn't return until I had upgraded.
Well, those were the 1% lows, not the framerate, so most people would look at it, see they're at like 45fps, and shrug and move on with their lives, thinking "movies are 24fps so it's fine". But it's a stuttery mess.

And almost every WoW player deals with that, every single day.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
Well, those were the 1% lows, not the framerate, so most people would look at it, see they're at like 45fps, and shrug and move on with their lives, thinking "movies are 24fps so it's fine". But it's a stuttery mess.

And almost every WoW player deals with that, every single day.
The 1% low on that is another one where their benchmark has a 12400 beating a 12700 and a 5800X3D. The X3D part improves the average by a ton, but it does not improve the low much at all.

It could be that for part of the test it is too big a working set for the X3D cache to even matter (much like the combat test, where it does not help), but my guess would be that it is utterly dominated by SSD loading delays, which is why all CPUs are in the 20s and 30s for the low. That is also what they attribute it to.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
1% lows dictate how "smooth" a game is, not the average FPS number. Going from 1% lows of ~16 to ~40 is a huge boost, and takes a game from insufferable to playable.
In that case, what you want is a faster SSD. The X3D part only improves the average, not the low. Those loading stutters are very much reduced with better storage. The change from an HDD to an SSD was very noticeable, the change from an early SSD to an NVMe SSD was noticeable, and the change from a SN850X to a T700 is noticeable (especially when you can see the same scene on both at the same time). I did not notice the difference between a 980 Pro and an SN850X, but those are pretty close. Lots of memory can also help, as it can avoid an SSD load (internally in the app or through Windows file caching).

The combat benchmark did not see those lows as everything was loaded for most of it, but in a city you have a constant game of "that guy who just hearthed in has a hat and some pants we have not seen, pull them from disk now". It improves with storage, or if you have a lot of extra memory and sit there long enough that you do not have new items loading often.

I like my high frame rates, but the drop in a busy bank or similar does not bother me so much personally. I do kind of wish they would keep more along the lines of classic populations on servers though. I am noticing that they have locked my dead server, which probably means they are going to dump us on a mega realm. I would prefer dead to constantly busy, and not just for performance reasons.

It does not matter in combat though, as situations where you are loading and in combat are very rare. You already zoned in and have loaded all your raid members and the mobs you are looking at before you pull.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
The culprit could well be memory bandwidth, I suppose. I don't have a 9700X to directly compare against. I haven't seen people on Intel with DDR5-8200 or whatever talk about how great WoW runs, though, unlike those with X3D CPUs.
Latency, not bandwidth. The updates are likely very small, but it has many of them to do every second. Dropping CL would likely improve things, and that loading stutter is slightly reduced by bandwidth as graphics assets are large enough to benefit, but mostly it is how fast you can access small amounts of memory in random areas which is holding all of the frame rates so low.

Increasing bandwidth can help a little bit as the transfers complete more quickly, but it does not help much in writing a little bit of data to some memory then moving on to an unrelated area, which is mostly what is going on.
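As a rough illustration of that point, here is a back-of-envelope calculation (all figures are assumptions: the ~80ns round trip and 64-byte line are typical ballpark values, not measurements from this thread) showing how little bandwidth dependent random accesses can actually consume:

```python
# Why small scattered accesses hit a latency wall long before the
# bandwidth ceiling matters. Numbers are illustrative assumptions.

CACHE_LINE_BYTES = 64       # DRAM hands back a full line per access
RANDOM_LATENCY_NS = 80.0    # assumed full round trip for a random access
PEAK_SEQ_GBS = 75.0         # sequential ceiling, e.g. from a benchmark

# If accesses are dependent (pointer chasing), only one is in flight at
# a time, so throughput is bounded by latency, not by the bus:
accesses_per_sec = 1e9 / RANDOM_LATENCY_NS
effective_gbs = accesses_per_sec * CACHE_LINE_BYTES / 1e9

print(f"random-access throughput: {effective_gbs:.1f} GB/s "
      f"(~{100 * effective_gbs / PEAK_SEQ_GBS:.0f}% of peak)")
```

Under those assumptions a pure pointer chase moves well under 1 GB/s, which is why raising the bandwidth ceiling barely helps this kind of workload.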

Anyway, my point is not that better hardware has no benefit. It does. My point is that they actually did a pretty good job optimizing this game, and spend a lot of consideration and effort on it from what I can tell. These are just hard situations for a game, and they are mostly limited in scope. It is not a matter of being old and outdated.
 
The latency on my 5950X with DDR4-3600 CL16 was ~9ns. On my 9800X3D with DDR5-6000 CL36, it's 12ns. So that ain't it.

I don't care if they "did a good job" in a vacuum, comparing against some imaginary target. The fact is the game runs like absolute shit for 95%+ of players in the situations I listed earlier in the thread, and those are common. When almost everybody playing the game has 1% lows in the teens sitting in town with GPU utilization <70%, "they did a good job" rings hollow. Ultimately only the player experience matters.

Even with a 9800X3D, I'm NEVER GPU limited. Using Intel PresentMon, I always, always see CPU time higher than GPU time. This applies when solo too; the difference is that I can then get up to my cap of 142fps, and my 1% lows are in the 80s.

Has it improved over the years? Certainly. It's still terrible.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
The latency on my 5950X with DDR4-3600 CL16 was ~9ns. On my 9800X3D with DDR5-6000 CL36, it's 12ns. So that ain't it.

I don't care if they "did a good job" in a vacuum, comparing against some imaginary target. The fact is the game runs like absolute shit for 95%+ of players in the situations I listed earlier in the thread, and those are common. When almost everybody playing the game has 1% lows in the teens sitting in town with GPU utilization <70%, "they did a good job" rings hollow.
The problem with a blanket statement that it runs like crap with 40 players on one boss is that I get a solid 240 fps in a situation like that, and have for more than one generation of computer.

Either this greatly changed in the last two expansions, or something else is going on in that situation.

Personally, I think it is all about your mods, and you have something you were not willing to disable when you tried. It is hard to reconcile your frame rate with others getting far higher frame rates in that situation; that is not common unless they have hundreds of players or mobs in the scene otherwise, but addons with heavy demands are a well known way to get that kind of rate.

Somehow you are pulling a low frame rate in a situation where an older 11900K with a 3070 would not be maxed out, and would be vsync limited.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
That's cool. Show me a screenshot of you raiding with 40 people at 240fps. Show the 1% lows also please. I would like to see it.

If you don't play anymore, show me a screenshot of anyone getting that performance. Anyone at all. Not in classic, mind you. Not in old content.
I can go anywhere someone who was max level in BFA can go, although I have no bars or talents set up.

I see few to no screenshots at all with an FPS meter showing (at any rate), so that does not seem easily possible. You will need to ask your raidmates for that if you want to see what others see on modern WoW in the same situation you are in.

I see forum posts asking why their raid FPS suddenly dropped from the 150s to a low number though (they say they have a 3700X with a 5700 XT), which implicates addons again. That is a much lesser computer getting what is still a fairly high frame rate in raids, until addons caused an issue.
https://eu.forums.blizzard.com/en/wow/t/fps-drops-during-raid/349357/3
I see a thread for 240+ in classic, but while the CPU required to hit that in terms of game updates should not differ much (and classic makes the 40 person limit easier, as it did that more often), that is not what you asked about.

https://www.reddit.com/r/classicwow/comments/1fio53f/what_fps_does_your_classic_wow_run_anyone_here_at/


I see complaints about specific recent raids, stating how they fixed low frame rates, and are back to hundreds of frames per second.

https://www.reddit.com/r/wow/comments/1fkwxag/this_fixed_my_low_fps_issues/


Not everything works for everyone, but that people see high rates as normal in raids is notable.

Just logging in to the modern one (which is a city, right after initial load, so there is no time to settle loading), it is just 240. It has been a while, so I got the epilepsy warning, a cutscene, and it asked me to update my gear through their thing. I see plenty of people running around, easily a few dozen, and I get 240 fps if it is focused (200 exactly otherwise, so there is a limit.) The worst I see is 212 as a low average, so that is what loading with dozens of people looks like. I can in fact see the drops, but that is so normal in WoW cities that I easily ignore it.

I do not see a way to uncap it to see how much more than 240 it can do, and I do not think the game directly reports a 1% rate, but it does high rates in a not so full city easily.
 

Attachments

  • Screenshot 2024-12-20 180117.png

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
Finding the vsync setting and turning it off, then finding a mount and flying to Tanaris, I get 450 - 550 fps on the way (all settings including antialiasing and ray tracing maxed out, 3440x1440). When landing there with a fair number of players around (although I would not call it extremely busy), I get about 140. Finding a worst case position looking out over everything, it goes into the 120s.

Switching it to windowed mode and pulling up Process Lasso, the frame rate goes up to 160 after lowering the resolution (from the same worst case position), and CPU usage looks like the screenshot (it is not purely even, but it is scaling across cores.)

No cores are at high temperature, and it is not running them at full speed. The CPU is using 60W (fully loaded is 140W). Memory is intermittently using up to 3W per module, which is very high, and is mostly staying at about 0.85W, which is fairly high. A full load test is 5W or so. It is reading and writing about 3 - 4 GB/s.

HWiNFO disagrees on the frame rate, and says it is running at about 180 - 220, rather than the 160 WoW reports. CPU busy is at 5ms per frame, which is enough to throw off a 240 frame rate.

The GPU is drawing 160w (it does 450 under load testing.)

Setting all cores in Process Lasso, it goes down to about 100fps (from 160). Setting just the frequency cores, it stays there. Setting it back to the cache cores, it is back to 160.
 

Attachments

  • Screenshot 2024-12-21 085435.png
  • Screenshot 2024-12-21 090108.png

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
Assessing performance, the X3D chips have a massive impact in this situation (100 to 160 is not small at all.)

It is memory latency bound, does use multiple cores, and its scaling characteristics with many players are much worse than in classic. There were likely more than 40 players around, but not by a lot. 240 is not a likely frame rate with 40 people hitting a boss in the modern version, judging from that test.
 
I spoke imprecisely earlier; what matters is how many players are in view, not how many are in the area, so the anniversary celebration area isn't really comparable to the current expansion's city now, several months after it was introduced, as most players don't go there anymore. Still better than Orgrimmar though.

The real test is an outdoor worldboss where you don’t only see 40 people, but they’re all firing off spells and whatnot. That’s where my old 5950x used to drop below the VRR window of my monitor. Very noticeable.

My DDR5 memory latency is much worse than on DDR4, again, so it can't be that. I find it very unlikely that upgrading from a high-end PCIe 4 NVMe SSD to PCIe 5 would be the determining factor. It could still be memory bandwidth; I suppose I could underclock my memory and test further, but that test would be better done without the 3D vcache masking it. You could test it with your non-vcache CCD if you're interested.

I do think it's the extra CPU cache, and it's good to see your tests confirm it, as I was unable to do that comparison myself. And good idea using Process Lasso on a 7950X3D to test it!

If you play WoW, you'll see tremendous improvement upgrading to an X3D CPU. I can't think of another game where it offers such a meaningful improvement, other than esports stuff.
 
My DDR5 memory latency is much worse than on DDR4, again, so it can’t be that.
Remember, you're using that higher-latency memory with an X3D chip, which hides a lot of that. That's the purpose of CPU cache, to insulate you from the glacial response time of RAM. You're probably gaining a ton from the cache, and then losing some of that gain again when memory demands exceed what it can service, like in highly populated areas.

My read of @cerberusTI's system description is that nothing is highly loaded except RAM, so that seems like the probable bottleneck. And it makes sense: there are a huge number of items players can be wearing, so drawing each avatar correctly is likely to involve a hell of a lot of scattered RAM lookups.

I'm not clear on whether it's latency, bandwidth, or both being the actual bottleneck. The fact that you're getting so much benefit from big cache implies that latency is a big part of it, since that's the biggest improvement from cache.
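One way to frame that insulation effect is the textbook average memory access time (AMAT) model; the hit time, miss rates, and penalty below are invented for illustration only, not measurements of any of these systems:

```python
# AMAT = hit_time + miss_rate * miss_penalty. A big L3 keeps the miss
# rate low until the working set (hundreds of visible players) overflows
# it, at which point the full DRAM latency leaks back into every frame.

def amat_ns(hit_ns, miss_rate, miss_penalty_ns):
    return hit_ns + miss_rate * miss_penalty_ns

# Illustrative, assumed numbers:
L3_HIT_NS = 10.0        # last-level cache hit
DRAM_PENALTY_NS = 70.0  # extra cost of going out to RAM

quiet = amat_ns(L3_HIT_NS, 0.02, DRAM_PENALTY_NS)   # working set fits cache
packed = amat_ns(L3_HIT_NS, 0.50, DRAM_PENALTY_NS)  # crowded hub overflows it

print(f"quiet area:  {quiet:.1f} ns per access")
print(f"packed area: {packed:.1f} ns per access")
```

The model matches the observation in the thread: the X3D cache gains a lot in normal play and then gives some of it back exactly where the player count explodes.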
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
Remember, you're using that higher-latency memory with an X3D chip, which hides a lot of that. That's the purpose of CPU cache, to insulate you from the glacial response time of RAM. You're probably gaining a ton from the cache, and then losing some of that gain again when memory demands exceed what it can service, like in highly populated areas.

My read of @cerberusTI's system description is that nothing is highly loaded except RAM, so that seems like the probable bottleneck. And it makes sense: there are a huge number of items players can be wearing, so drawing each avatar correctly is likely to involve a hell of a lot of scattered RAM lookups.

I'm not clear on whether it's latency, bandwidth, or both being the actual bottleneck. The fact that you're getting so much benefit from big cache implies that latency is a big part of it, since that's the biggest improvement from cache.
It is not bandwidth. If I pull up OCCT to benchmark it and look at the same HWiNFO number, it is 75GB/s (or a bit over half that in each direction). WoW is only using 3 - 4GB/s, so even counting both directions that is an order of magnitude less than the possible memory bandwidth.

Which module uses the power is variable as well; this is very much a random lookup problem, and having an X3D cache not only means that a lookup is faster if it finds the data there, but that it does not go into the giant pile of memory accesses at all. This is why I am seeing a difference between 100 and 160 even with 10ns memory (and subtimings which are very well tuned, far beyond EXPO defaults.)

Also, it is of note that DDR5 has a better ability to service multiple requests. That does not help you in a strictly single threaded application, but that is not WoW, which is clearly using more than one thread. It can stack these requests, but it overwhelms memory at some point, and a hidden danger (which can be seen in the forum complaints about drops) is that while you can split the work across cores on AMD with a large L3, that is much less okay on Intel, where the L2 is more important (and you will end up at the memory-only number faster if not careful).
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
Yes that’s why I suggested testing on the 7950X3D with each CCD and underclocked (CL’d I guess) memory.
It is not quite that simple due to what the timings are, and the nature of a random workload.

It will frequently stack many of those delays, not just CL. That is a big part of the problem with a random workload: you take more of the timings as latency charges on a read or write, and run into more situations where it must wait out a timer in closing out and activating different parts of memory.

I strangely cannot find a good description of what the timings are; however, ChatGPT can probably write one (it looks fine reading it, but goes off on some tangents as I was not very specific.)

https://chatgpt.com/share/67674e97-1604-8012-bad0-95ab78d698c4
 
I play WoW on a Ryzen 7 5700X with a 3060 Ti and it plays just fine. 2K 30" monitor; I think the settings are on high.
Never an issue in the world or raiding. Running off a large SSD. But my monitor is only 60Hz max.

Somewhere around Shadowlands, specifically, they started to re-write WoW. I know they improved the graphics slightly, and there was some talk about threading, etc. But it's WoW, not really high-end graphics.

I also play Cyberpunk and it flies on my PC. So much fun and chaos in that game. Sometimes there is an odd hiccup, but I think that has more to do with game bugs than my PC.

I can't see the 5700X3D being much of a bump, which is why I went with the X, and I haven't regretted it one bit.
If I were to do an upgrade, I would spend or put money towards a GPU or a higher model CPU.
 

io-waiter

Ars Tribunus Militum
1,543
Knowledge is power but ignorance is bliss, sort of sums it up.

I'm going to wait and see if there are post-holiday sales, and I just got a USD 50 equivalent gift card, so the odds for a 3D in the future are pretty good.

I play ESO and it has similar behavior to WoW, but I do not think there is any way in MMOs to have a good frame rate with lots of other players nearby: all the individual eye candy and personalization in one place with no predictive behavior, plus network dependent code on top of that. Anyway, frame drops in the cities are not an issue for me, and the events in ESO PvE combat aren't really that dependent on the frame rate anyway :)

I'm starting to do the math on what an entire new build would cost, but with new RAM, a new motherboard, and the cost of a CPU that confidently beats the 5700X3D, the costs are kind of hard to ignore. There is a "want" for an Atmos capable HT receiver as well, and the economy isn't really in a shape where spending makes you feel good, or safe for that matter.

EDIT, you guys (and gals) are awesome, in case you don't already know that :)
 
I'm starting to do the math on what an entire new build would cost, but with new RAM, a new motherboard, and the cost of a CPU that confidently beats the 5700X3D, the costs are kind of hard to ignore
The 9800X3D is an incredible chip, but the scalpers have it way over retail. Well worth the cost at retail, not worth the scalper price. The 5700X3D will be a reasonable stopgap for a couple of years, I believe, and then you can maybe do a new build in 2026 or 2027. It's fast enough that it should acceptably drive any GPU of 4080-class or lower.

By that time, there should be another generation of Zen chips out, and they'll probably be faster still, though perhaps not that much. The 9800X3D is a major improvement, but the next generation will probably be much less impressive.
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
For a better explanation of why random workloads are so bad, let us look at real memory timings. My memory is rated:
30-40-40-96 (CL, tRCD, tRP, tRAS)

CL is the most important, as it is a delay you take on any memory read. You can stack these; it is simply a delay between when you ask for the data and when it shows up. Your requests must be to that row though, or to another area of memory which shares almost nothing.

If you have a lot of random accesses, it is highly likely that a row other than the one you want will be active. In that case it is not just a matter of issuing the request and waiting 30 cycles for the data. If you just read something else, you must wait out tRAS (the minimum row active time.) Once that is done, you wait for tRP as the row is precharged. Once that is done, you wait for tRCD as you activate the new row. Then you start issuing reads to that row, waiting for CL to get the first byte of data.

The actual delay you take can be zero if everything is organized well; you can actually saturate full bandwidth in some cases. Scattered reads with still-good locality can see a latency of 30 cycles (10ns for DDR5-6000.)

A random read queued up behind many other random reads will often require 206 cycles though (96+40+40+30), and is very likely to be over 110 (40+40+30) on nearly all accesses.

That is greatly simplified in some ways, especially for DDR5, but it may make it a bit clearer what those numbers mean, and why unpredictable loads with a large working set are such an issue (there are things they could do about it, but they have complications which are likely not worth it).
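To sanity-check the arithmetic above, the same simplified model can be scripted (this mirrors only the numbers in this post; real DDR5 behavior has further complications, as noted):

```python
# DDR5-6000, CL30-40-40-96 from the post above. MT/s counts transfers per
# second; timings are counted in memory clock cycles at half that rate,
# so one cycle is 1/3 ns at DDR5-6000.
MTS = 6000
cycle_ns = 2000.0 / MTS  # 0.333... ns per cycle

CL, tRCD, tRP, tRAS = 30, 40, 40, 96

open_row = CL                      # row already active: pay CAS latency only
row_miss = tRP + tRCD + CL         # wrong row open, tRAS already satisfied
worst    = tRAS + tRP + tRCD + CL  # must also wait out minimum active time

for label, cycles in (("open row", open_row),
                      ("row miss", row_miss),
                      ("worst case", worst)):
    print(f"{label}: {cycles} cycles = {cycles * cycle_ns:.1f} ns")
```

This reproduces the figures in the post: 30 cycles (10ns) when the row is open, 110 cycles for a plain row miss, and 206 cycles when tRAS has to be waited out as well.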
 

cerberusTI

Ars Tribunus Angusticlavius
6,947
Subscriptor++
39.9 is 8% better than 36.9.

Objectively, that's an improvement.
The margin of error on running around a city is larger than this, so it would be better to say it is similar.

You are right that it is a slightly higher rather than lower number in this specific benchmark, although if you look at the other similar benchmarks, such as the combat test or other cities, it gets a slightly lower number by a similar margin as often as it does a higher one. On one of those it actually loses by enough that it would be suspicious if the expectation was not that these tests have such high variance, but that was likely a matter of someone loading in during one test and not the other in a non-current city.

The low in a city is likely dominated by people flying or hearthing in (so your SSD will matter as well; the dominant delays are the longer waits.)

It does improve the average, where it clearly wins most of these (or they are GPU limited), but the low is up in the air as to which wins, and it is not by enough to be significant in any of them.

Listing the lows as a comparison (5900x vs 5800X3D):
72.9 vs 67.1 combat
265.7 vs 230.2 Atal'Dazar
78.4 vs 77.7 Legion Dalaran
36.9 vs 39.9 Valdrakken
177.9 vs 191.4 Ardenweald
433.8 vs 451.2 Stonard

It loses three and wins three, all by smaller margins than are meaningful given the variance in testing an MMO on a server with others.

That was probably true in my test as well. Switching whether it is on the X3D cores changes the frame rate from 100 to 160, but the low was almost certainly when I first went into range of that pavilion with a bunch of people under it, and it loaded all of them in. It noticeably stuttered as I came into range, even if the average was still high from the prior 500+ fps it was doing until that point, and it settled at 140 or so. Cache cannot help much with loading in a bunch of first-use assets.
 
I play ESO and it has similar behavior to WoW,
I play ESO off and on also, and have never experienced the very noticeable poor performance I got in WoW before upgrading to an X3D chip. It's a much better performing game overall. Maybe it doesn't scale to the very low end as well, I haven't tested that, but I don't see a ton of improvement going from a 5950X to a 9800X3D in that particular game.
 

Axl

Ars Scholae Palatinae
601
My 1% lows in WoW fighting a worldboss went from like 14 to 25. It was a monstrous improvement. Still not good; human technology circa 2024 is insufficient to play WoW smoothly all the time.

That chart is much more representative of overall gaming, where it's helpful but doesn't entirely change the experience. Usually CPU-constrained games are constrained to above 60fps, at least on my old Zen 3, and even the 1% lows were nearly always within the VRR window of my display, even without LFC. Most people should still allocate the bulk of their gaming PC budget to the GPU.
 

io-waiter

Ars Tribunus Militum
1,543
It’s done!

The 5700X3D arrived today and the 9070 XT came on Friday. The 5700X is in its new home with the 2080, and after rebuilding that baby InWin case and getting the radiator in the bottom and the intake fans beneath it, it became awesome.

The 5700X3D is nicely humming along with air cooling, but I have some light regrets about not pairing it with the water cooler; it's reasonably cool though, hovering around 70C at load.

The 3700X is getting built with the 1660 Super tomorrow :)

Case closed and before any trade wars and at MSRP 😃
 

IceStorm

Ars Legatus Legionis
25,450
Moderator
The 5700X3D is nicely humming along with air cooling, but I have some light regrets about not pairing it with the water cooler; it's reasonably cool though, hovering around 70C at load.
Try a slight Curve Optimizer offset of -15 in the BIOS.

The 5800X3D I have paired with the RX 9070 was hitting 90C, even on a new 240mm AIO. Set a -15 offset, and now it rarely breaks 80C.
 