CPUs ready to blast past their limits can be had with a warranty, for a premium.
> I'm curious what people are doing that would require CPU overclocking at home. Does a 5-10% increase really make that much of a difference in a game if the other specs are the same?

No. And that's why I'm effectively out of the overclocking game, myself.
> $230 extra for the modded CPU and warranty coverage is surprisingly reasonable. People spend more than that on water-cooling gear for what is ultimately smaller improvements.

I think they still missed the market, though. This would be better sold as a barebones system with a good MB, and options for either the best Noctua air cooler or whoever is seen as the best in water cooling.
> Wow, that's hot!

UnzippyPeanut?
> Yeah. They specifically say they aren't testing for OC, just stock functionality after delidding. Theoretically, you might not be planning to OC a delidded CPU and might just want better cooling for a silent PC build with a direct-die water cooler or something along those lines. The 9800X3D isn't too power hungry, so it probably isn't really needed, especially if you undervolt with Curve Optimizer or apply a lower power profile, but there could still be some benefit.

Although the chip will automatically sort of overclock itself (well, it's technically not an overclock, I guess, because this is the factory default, but…), improving performance if there's thermal headroom, right?
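For anyone who wants to watch that behavior directly: on Linux you can log boost clocks straight from the cpufreq sysfs interface while a load runs. A minimal sketch, assuming a Linux box with cpufreq exposed (standard on modern kernels); kick off a heavy workload in another terminal and watch the clocks climb and then settle as thermal headroom shrinks:

```python
# Log per-core clocks once a second to observe boost behavior.
# Assumes Linux with the cpufreq sysfs interface; sysfs reports kHz.
import glob
import time

def clocks_mhz():
    """Current frequency of every core, in MHz."""
    freqs = []
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
        with open(path) as f:
            freqs.append(int(f.read()) / 1000)
    return freqs

for _ in range(60):  # log for about a minute
    c = clocks_mhz()
    print(f"max {max(c):5.0f} MHz   avg {sum(c) / len(c):5.0f} MHz")
    time.sleep(1)
```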
> Yeah. The good old days of running dual Celeron 300A's. That was a ridiculously good system for the era at a pretty good price until the 300A got so popular the price went through the roof and they sold above MSRP.

The Core 2 Duo processors were pretty darn good value when overclocked by around 50%. I had an E6300 up from 1.8 GHz to about 3.2 GHz and an E6600 up from 2.4 GHz to 3.6 GHz.
> I don't overclock my stuff anymore. Baseline performance on decent chips has been good enough for a while now that I don't find it to be worth the hassle.
>
> That said, if overclocking is going to happen, I am very much in the "it can't overheat if it's immersed in cryogenic liquid" school of thought.

A former employer fiddled around for a bit with phase-changing fluorocarbon cooling. I know it's not the cryogenic liquid of which you speak, but it's a step on the way there, and I suspect few hobbyists will have the proper chillers for cryogenic. Besides, ordinary chips, even mil-spec, are qualified for -55C to 155C, and cryogenics are too cold. It can be done, but it's not fall-off-a-log, and on consumer chips you're likely to find new race conditions that were never simulated or tested for, because the temperature is so different.
> What really gets my goat, as an Australian: I can buy the delidded one from TG/Der8auer and have it shipped from Germany, and it would cost me less than current stock prices of the 9800X3D on various Aus vendor sites.

We found a new use case for this! Except your custom cooling solution, due to the different height of the die, probably would kill your savings.
> No. And that's why I'm effectively out of the overclocking game, myself.

More importantly, AMD and Intel don't leave the headroom on the floor and instead give it to us in boost frequencies. Basically every chip is OC'd out of the factory the way we would OC back in the day. Everyone wins compared to when they tried to frequency-lock CPUs that could run much faster.
Gone are the days of pencil-locking Duron voltage to boot a 190MHz FSB on an SDR board, using BGA-mounted SDRAM to hit such ridiculous frequencies (42% over stock clock on RAM and CPU FSB).
AMD and Intel don't leave the kind of headroom they used to. If you combined a delid with a good water cooler, you might get a few hundred MHz, but I doubt you'd even get 10%. I know the 9800X3D has its cache mounted on the bottom and picked up some clock from doing so, but I'd still be leery about OC'ing it. The cache may now be a secondary heat path, but it's still a heat path.
Heavy workloads these days are all multi-threaded, and you'd just do better stepping up to a higher-end chip as opposed to trying to squeeze a lower SKU for extra frequency.
This is for people who want the best chance at the fastest chip because they want it, not because they'll pick up some enormous real-world advantage.
> …some 10K U160 SCSI drives (the 80-pin Seagate Cheetahs weren't that expensive, relatively speaking), you could get pretty blazing performance (at the expense of sounding like a jet taking off when booting) for the time.

Thanks for the memories; I'd forgotten the name was Cheetah. I was a lucky kid to get some 10K SCSI drives in what I attempted to make into a gaming computer. Poor thing was quite loud when running in the 95°F attic bedroom.
> AMD and Intel don't leave the kind of headroom they used to.

More specifically: manufacturing processes have improved, allowing tighter tolerances today, and software has improved, permitting tailored "boosts." That's less fun, but it gives more people close to the full capabilities of the specific CPU sample they own.
> We're well past the point where you're going to see improvements in office software.

Which is a real pity, given how much faster office software used to be. (But that has more to do with a lack of optimization than with CPU speed, and the subscription model gives optimization efforts no business justification, as money keeps coming in regardless of improvements.)
> $230 extra for the modded CPU and warranty coverage is surprisingly reasonable. People spend more than that on water-cooling gear for what is ultimately smaller improvements.

Yeah; reading the article, I was expecting it to be 2-3 times that markup.
> If you want to experiment, turn on AMD Eco Mode and run Curve Optimizer in Ryzen Master. Set Eco Mode to 45W TDP…

Remember, that poster is complaining about idle power, which is a notable weak point of AMD's particular chiplet choices. They use an I/O chiplet that's on a larger process and idles quite hot, particularly with PCIe 4 southbridges (like the X570 for the older AM4 socket, as opposed to the cooler B550).
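One note on that Eco Mode setting: the socket power limit (PPT) that actually caps the chip runs at roughly 1.35x the nominal TDP on AM4/AM5, so a 45W Eco Mode profile allows somewhat more than 45W of package power. A quick arithmetic sketch (the 1.35 factor is the commonly cited TDP-to-PPT ratio, so treat the outputs as approximations):

```python
# AMD's package power limit (PPT) is commonly ~1.35x the nominal TDP on
# AM4/AM5, so an Eco Mode "TDP" caps real draw a bit above its label.
TDP_TO_PPT = 1.35  # commonly cited ratio; treat as an approximation

for tdp_w in (45, 65, 105):
    print(f"TDP {tdp_w:>3} W -> PPT ~{tdp_w * TDP_TO_PPT:.0f} W")
```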
> These days you're hard pressed to have any visible advantage from an overclock.

The way I figure it: modern chips know more about overclocking themselves than I do. About the only thing I touch is Curve Optimizer, to set an all-core offset of negative 20. It runs a little cooler that way, and boosts itself a little higher.
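If you try an offset like that and want to confirm it actually runs cooler, the CPU temperature is easy to read programmatically. A small sketch, assuming Linux with the standard k10temp hwmon driver loaded (the usual sensor source for Ryzen; sysfs reports millidegrees):

```python
# Print every temperature the k10temp driver exposes (Tctl, Tdie, Tccd...).
# Assumes Linux with k10temp loaded; hwmon reports millidegrees Celsius.
import glob
import os

for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
    with open(os.path.join(hwmon, "name")) as f:
        if f.read().strip() != "k10temp":
            continue
    for temp_path in sorted(glob.glob(os.path.join(hwmon, "temp*_input"))):
        label_path = temp_path.replace("_input", "_label")
        label = os.path.basename(temp_path)
        if os.path.exists(label_path):
            with open(label_path) as f:
                label = f.read().strip()
        with open(temp_path) as f:
            print(f"{label}: {int(f.read()) / 1000:.1f} C")
```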
> Put ~122W through the CCD and it jumps up to ~85C.

You're probably getting suboptimal contact between the cooler and the CPU lid. You might try re-pasting it. When I did this last build, I used a 360mm AIO with Thermal Grizzly's Kryonaut (regular, not Extreme), using the X application method (a big X from corner to corner of the IHS), and I've been getting fantastic results: typically ~75C under max load for long periods.
> Remember, that poster is complaining about idle power, which is a notable weak point of AMD's particular chiplet choices. They use an I/O chiplet that's on a larger process and idles quite hot, particularly with PCIe 4 southbridges (like the X570 for the older AM4 socket, as opposed to the cooler B550).
>
> The numbers that poster is quoting seem too high, but it's pretty common for any Zen 3, 4, or 5 chip to idle at around 25-30 watts, where Intel can often be under 10. It's one of the only areas where they still consistently win.

Oh, I'm quite aware. I run a B550 myself, chosen for that reason. Idle power on AMD is higher. But, as you point out, it's 20-25W higher, and he's claiming 45W higher. I suspect that difference is down to things like PCIe 4.0, higher DRAM power consumption, and possibly a bit from I/O, depending on storage choices.
> Back in the day, I paid extra for a Celeron 300 to be overclocked and tested at 450MHz. It was apparently a pretty lucrative business, because nearly all of them worked fine, so the person doing the OC testing wasn't working that hard. But that way I knew for sure that the chip would work at 450MHz, and it served me for, um, probably a couple of years. CPU upgrades were constant back then.
>
> There was one flaw. Apparently Coppermines running at +50% clocks had one instruction that was unreliable. Some game came out that used that instruction, and it got the reputation of being horribly unreliable, when in fact the chips were at fault. But in that time frame, the OCed 450s were so common that the game took the heat instead of the chips.
>
> I wish I could remember what the game was. It really took a sales beating when it didn't deserve one. I guess the QA team just didn't think to test with an overclocked processor.
>
> Anyway, this kind of thing was a viable business even when it was easy. What Thermal Grizzly is doing here is not easy.

You are mistaken (or trying to revise history). The only reason Coppermine received negative attention was that Intel tried and failed to release a 1.13GHz model. The model was defective out of the box and had to be pulled. They tried too hard to clock things too high and it backfired. (Wikipedia link; you'll have to read a bit for that tidbit: https://en.wikipedia.org/wiki/Pentium_III)
> You're probably getting suboptimal contact between the cooler and the CPU lid. You might try re-pasting it. When I did this last build, I used a 360mm AIO with Thermal Grizzly's Kryonaut (regular, not Extreme), using the X application method (a big X from corner to corner of the IHS), and I've been getting fantastic results: typically ~75C under max load for long periods.

I've checked the paste print and it looks fine and even. The 5800X3D's frequency is capped, and I'm able to sustain the cap indefinitely, which is really all I care about, since there's no more performance I can squeeze out, only longevity at this point. I use IC Diamond paste; not sure how it rates against Thermal Grizzly's products.
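For what it's worth, the two setups may be closer than the raw temperatures suggest once you normalize by power. Effective die-to-ambient thermal resistance is roughly (T_die - T_ambient) / watts. A rough comparison sketch: the ~25C ambient and the assumption that both systems move the same ~122W are mine; the temperatures and wattage come from the posts above:

```python
# Rough thermal-resistance comparison: (T_die - T_ambient) / power.
# 25 C ambient and a shared 122 W load are assumptions; the temps and
# the 122 W figure come from the exchange above.
def thermal_resistance_c_per_w(t_die_c, t_ambient_c, power_w):
    return (t_die_c - t_ambient_c) / power_w

print(f"~85C case: {thermal_resistance_c_per_w(85, 25, 122):.2f} C/W")
print(f"~75C case: {thermal_resistance_c_per_w(75, 25, 122):.2f} C/W")
```

On those assumptions the gap is only ~0.08 C/W, which is within what mounting pressure and paste differences can plausibly explain.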
> You are mistaken (or trying to revise history).

I may have the model name confused, but not the chip itself; it was the Celeron 300A, OCed to 450MHz, that had the failing instruction. I don't remember what instruction it was. I'm not sure I ever knew.
> I may have the model name confused, but not the chip itself; it was the Celeron 300A, OCed to 450MHz, that had the failing instruction. I don't remember what instruction it was. I'm not sure I ever knew.

I've been looking for this out of curiosity; haven't found it yet.
> You are mistaken (or trying to revise history). The only reason Coppermine received negative attention was that Intel tried and failed to release a 1.13GHz model. The model was defective out of the box and had to be pulled. They tried too hard to clock things too high and it backfired. (Wikipedia link; you'll have to read a bit for that tidbit: https://en.wikipedia.org/wiki/Pentium_III)

As I recall, it wasn't really Coppermine itself that got the negative attention back at that point in time. There were a few factors in play.

Most folks with Celerons, regardless of the underlying architecture, enjoyed those overclocks back in the day.
> Yup. I remember those days, when the only way to get two cores was a dual-socket system. That's part of what made that dual 300A system such a killer system at the time. The Core 2 Duos and Athlon X2s were a big deal for making more-than-single-core systems practical for most people.

Man, when I was a teenager, I inherited the server from my dad's office. It was a dual PIII 700MHz with two SCSI drives in RAID 0. It even had a tape drive with spare carts. The server mobo had some compatibility limitations, but I found a Radeon video card that worked with it, back when Radeon was ATI. It was a BEAST: it did Folding@Home and SETI@Home, and absolutely stomped all over my friends' machines. It was also a space heater. If my room got cold, I just fired up one of the aforementioned projects and waited twenty minutes.
> I've been looking for this out of curiosity; haven't found it yet.

I wonder if they're misremembering the Intel 1.0x GHz fiasco, where Intel released a factory-overclocked Pentium to compete with the 1 GHz Athlon (which actually worked).
But I do believe you. This has happened from time to time. And it's believable that it would be a game. You have a chip that's stable at a given speed 99.999% of the time, but then a game does something odd, and suddenly there's a failure condition.
I've heard of it happening more recently, but don't recall the exact nature of the issue.
> I wonder if they're misremembering the Intel 1.0x GHz fiasco, where Intel released a factory-overclocked Pentium to compete with the 1 GHz Athlon (which actually worked).

Nope. It was the Celeron 300A, running at 450MHz, that had the somewhat obscure instruction that failed on many (most? all?) chips.
> I think this is better for those who want to go with a delidded CPU but want to outsource the work to someone who has a lot of practice doing so. But I wouldn't think they'd be new to overclocking and custom cooling.

Yeah, I'm also seeing this as a "leave it to the experts" situation. A good delidder is going to set you back like $60, even before you actually get enough practice in to the point where you feel comfortable popping the top off your fancy CPU.
> I wonder if they're misremembering the Intel 1.0x GHz fiasco, where Intel released a factory-overclocked Pentium to compete with the 1 GHz Athlon (which actually worked).

It was the 1.13GHz P3 that failed, not the 1GHz. But you are correct that Intel was looking for a marketing knockout blow on the Athlon, and it blew up on them. And then the i820 had to be recalled ~6 months later.

Tom's Hardware discovered the Intel chip was not stable when they tested it with Linux compiles.
Yeah. The good old days of running dual Celeron 300A's. That was a ridiculously good system for the era at a pretty good price until the 300A got so popular the price went through the roof and they sold above MSRP.
> It was the 1.13GHz P3 that failed, not the 1GHz. But you are correct that Intel was looking for a marketing knockout blow on the Athlon, and it blew up on them. And then the i820 had to be recalled ~6 months later.

Thanks. I could remember it was 1.xx GHz to "win" over the 1 GHz Athlon (which was actually 1.1 GHz), but not the exact clock speed, and a Google search didn't turn up the details. Knowing the correct clock let me find the article: https://www.tomshardware.com/reviews/intel-admits-problems-pentium-iii-1,235.html

EDIT: The 1GHz P3 was in miserably short supply, IIRC -- AMD outsold it 6:1 or something like that -- but I don't recall (and can't find) any evidence of a problem with it.
> I don't see the point of overclocking in 2025. You'll get maybe what, a 4-5% perf increase?
>
> And some software doesn't like it at all. I got the 5% increase, but then Apex Legends would time out and crash constantly - I had to bring it down to like a 2% increase for the game to work. 2-3 hours of tweaking for less than 1 FPS...

Yup. The OC heyday mostly came down to the complete market dominance of Intel, where they could afford to artificially cripple their chips and still outperform all competitors.
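Crashes like that are usually silent arithmetic errors finally surfacing, and you can often catch a marginal overclock before a game does. The idea behind stress testers like Prime95: run a deterministic numeric workload over and over and flag the first run whose result drifts. A toy illustration of that consistency-check idea (not a substitute for a real stress tool):

```python
# Toy OC stability check: a deterministic floating-point workload must
# return bit-identical results every run; any drift means the core is
# producing arithmetic errors at the current clocks/voltage.
import math

def workload(n=200_000):
    """Deterministic floating-point churn."""
    acc = 0.0
    for i in range(1, n):
        acc += math.sin(i) * math.sqrt(i)
    return acc

reference = workload()
for run in range(1, 101):
    if workload() != reference:
        print(f"Mismatch on run {run}: unstable settings")
        break
else:
    print("100 consistent runs (necessary, not sufficient, for stability)")
```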
> I have replaced an Intel i5-6600K + GTX 960 + 16GB DDR4 with an AMD 7900X + NVIDIA RTX 4060 Ti + 32GB DDR5. My old system was using ~45W with just one browser started (a simple web page). My new system is using ~90W in this almost-idle situation. Both systems have recent Windows versions, Win 10 and Win 11. After years of progress, and using a recent lithography process (6nm), AMD succeeded in building a platform (CPU + chipset) that consumes almost double at idle. No reviewer is pointing out this stupid situation. Millions of computers consuming almost double the energy of what was consumed 10 years ago. I don't know how a recent Intel CPU/platform behaves, but AMD idle power usage is unbelievable.

And computing power? The increase from your old CPU+GPU to your new rig is almost sci-fi. To get the same kind of computing power out of machines from the era of your previous rig, you would have needed a lot of them, and a lot more power.
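To put a number on that idle gap anyway, here is the back-of-envelope cost. The ~45W delta comes from the post above; the 8 hours/day duty cycle and $0.30/kWh price are assumptions to adjust for your own usage and tariff:

```python
# Back-of-envelope cost of a 45 W idle-power gap. The delta is from the
# post above; hours/day and price are assumptions - adjust for your case.
delta_w = 90 - 45            # extra idle draw, watts
hours_per_year = 8 * 365     # assumed 8 idle hours per day
price_per_kwh = 0.30         # assumed electricity price, $/kWh

kwh_per_year = delta_w * hours_per_year / 1000
print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * price_per_kwh:.0f}/year")
```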
> Yeah, I remember those days. I didn't actually BUILD one of those, because, IIRC, I was on AMD at the time: 386, 486-33 upgraded to 486 DX4-133, P166 (OC to 200), K6-2 266, K6-3 450 (not 100% sure on those K's anymore), Athlon XP 1600+, ...

That's a very familiar upgrade path, although mine differs in minor details (486DX-66 to P166, K6-2 300, K6-3 400 (OC to 450), Athlon 1GHz, Athlon XP 1600+).
> Try to undervolt it.
>
> I have a 7800X3D, and the way to go is to reduce the voltages. Less heat -> longer bursts till the maximum temperatures are reached and the cores throttle down. No risk of frying your chips, as you pump less energy in. It got me 15% more in Cinebench for free, with only non-aggressive, standard BIOS settings.

Can you recommend a good undervolting guide, please? Preferably for MSI motherboards?
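The reason undervolting pays off so directly: at a given clock, dynamic CPU power scales roughly with the square of core voltage (P ≈ C·V²·f), so a small voltage cut buys a disproportionate heat reduction, which the boost algorithm then converts into longer, higher boost. A quick illustration; the offsets are example values, not recommendations for any particular chip:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f, so at a fixed clock a
# voltage offset cuts power by about 1 - (V_new / V_old)^2.
# The offsets below are illustrative, not recommended values.
for undervolt_pct in (2, 5, 8):
    v_ratio = 1 - undervolt_pct / 100
    saving = 1 - v_ratio ** 2
    print(f"-{undervolt_pct}% voltage -> ~{saving:.0%} less dynamic power")
```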