
AMD to Revise Specs of Ryzen 7 9700X to Increase TDP to 120W, to Beat 7800X3D

This is actually a really pro move if they release a BIOS update for this:
1. non-pro users likely won't update their BIOS and will be more than happy with a really fast 8-core 9700X. They will also likely experience fewer issues with their CPU/memory/motherboard as a result (good for the user and for manufacturers)
2. pro users will be very happy with the increased wattage and the performance it brings.

Literally everyone is happy in such an arrangement.
 
They're not doing an Intel-style "pay to upgrade your locked processor" cha-ching thing here; all processors will be sold with the higher TDP.
 
Just makes me happy that I chose to go with Zen 3. It's easy to cool compared to Zen 4 (and maybe Zen 5) and Intel's last 2 or 3 generations of CPUs, and it sips power as well compared to Zen 4 and especially Intel's last couple of generations.

How is Zen 3 easier to cool?

If you want a Zen 3 chip to perform, you need to keep it below 75 °C, and that's not what I would call easy. With an NH-D15 on a 5600X I can barely do 4700 MHz in Cinebench R23. Looping pushes the temp over 75 °C by the end of the second run and the clocks drop. The 5800X is even worse and known to hit 80 °C+ in normal games.

If it's used to kill time in a GPU-bound title, sure, it's not like it's impossible. But these things are not designed for the purpose of killing time. In fact, it's quite the opposite.
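To make the throttling behaviour described above a bit more concrete, here is a toy Python sketch of "clocks step down once the package passes ~75 °C". This is not AMD's actual Precision Boost logic; the threshold, step size, and clock floor are made-up illustrative numbers.

```python
# Toy model of the behaviour described above: sustained clocks step down
# once the package temperature crosses ~75 C. NOT AMD's real boost
# algorithm; all numbers are illustrative assumptions.

def sustained_clock_mhz(package_temp_c: float,
                        max_boost_mhz: int = 4700,
                        throttle_start_c: float = 75.0,
                        mhz_per_degree: int = 25) -> int:
    """Return an illustrative sustained clock for a given package temperature."""
    if package_temp_c <= throttle_start_c:
        return max_boost_mhz
    overshoot = package_temp_c - throttle_start_c
    return max(3600, max_boost_mhz - int(overshoot * mhz_per_degree))

for temp_c in (70, 75, 80, 85, 90):
    print(f"{temp_c} C -> ~{sustained_clock_mhz(temp_c)} MHz sustained")
```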
 
Ah, the good old “crank the power up to win in benchmarks” move. I would have thought AMD would be smarter than this, but apparently not; they've resorted to cribbing from Intel's playbook. A mistake, IMO, but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming like that was a war crime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully they'll leave the old PPT settings in as a preset option, a la Eco mode.
True. There's no way this is the correct move, indeed! The thread where AMD said their "regular" CPUs would lose to X3D made it quite obvious, and many, including me, thought it was no problem. However...
Just makes me happy that I chose to go with Zen 3. It's easy to cool compared to Zen 4 (and maybe Zen 5) and Intel's last 2 or 3 generations of CPUs, and it sips power as well compared to Zen 4 and especially Intel's last couple of generations.
Not really. The cheaper price is there for a reason. Zen 3 only has its thermal and power envelope locked down for safety reasons, while Zen 4 is much more potent, so the limits have been lifted. Set the same power limits on Zen 4 as on Zen 3 and you will get even better power consumption and clock boost, because Zen 4 CPUs are quite power efficient, and nothing in Zen 3, outside the fancy X3D models and the very top-binned 5950X, comes anywhere near the performance per watt of regular Ryzen 7000. And vice versa: push Zen 3 too hard and it will give worse efficiency results than Zen 4 at similar clocks.
As fevgatos pointed out below, the top chips mostly get the most potent and efficient silicon. They can be pushed beyond the point of reasonability, but they also operate much more efficiently thanks to tighter tolerances, since the dual-CCD parts have much higher binning requirements. The 12-, 8- and 6-core parts get the worst-quality silicon, obviously. The X3D parts don't seem to get the trash dies either, due to...
Power limits, or Eco mode as AMD calls it, shouldn't affect any of that. In my experience Eco mode works perfectly fine with no issues.

It is not the case that you can achieve similar performance with lower-end CPUs. A 7950X at, let's say, 70 watts will be vastly faster than a 7700X even if you run the latter at 200 watts. Not to mention much easier to cool.

Besides maximum performance, another reason for buying high-end CPUs like the 7950X and the 14900K is that they can be exceptionally fast while sipping power, and they're very easy to cool.
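For anyone who wants to sanity-check that kind of claim with their own numbers, here is a minimal perf-per-watt sketch in Python. The scores and wattages below are placeholder assumptions, not measured results; swap in your own Cinebench (or other benchmark) data.

```python
# Rough perf-per-watt comparison in the spirit of the post above.
# Scores and wattages are PLACEHOLDER assumptions, not measurements.

chips = {
    # name: (multi-core score, package power in watts)
    "16-core at 70 W (hypothetical)": (30000, 70),
    "8-core at 200 W (hypothetical)": (20000, 200),
}

for name, (score, watts) in chips.items():
    print(f"{name}: {score / watts:.0f} points per watt")
```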
Exactly. Any CPU's default/stock settings should put stability and efficiency first, so that some bugged firmware can't cause a savage situation with the default voltage going past 1.42 V.
Everyone who wants higher performance can just OC the chip, like it used to be for ages, or pick a power limit profile within the UEFI/BIOS settings, like 35 W, 65 W, 85 W, 105 W, 125 W... That would be really great, considering how the current AMD CPUs do everything automatically.
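As a side note on those profiles: the figures above are TDP values, while the socket power limit (PPT) that AM4/AM5 boards actually enforce is usually about 1.35x the TDP (65 W TDP -> 88 W PPT, 105 W -> 142 W). Here is a quick sketch of roughly what each preset would let the CPU draw; the 1.35 ratio is an approximation, not a guarantee for every SKU.

```python
# Approximate TDP -> PPT (socket power limit) mapping for AM4/AM5 parts.
# The 1.35x ratio is the commonly cited AMD relationship (65 W -> 88 W,
# 105 W -> 142 W); treat it as an approximation, not a spec for every SKU.

TDP_TO_PPT = 1.35

for tdp_w in (35, 65, 85, 105, 125):
    print(f"{tdp_w:>3} W TDP -> ~{round(tdp_w * TDP_TO_PPT)} W PPT")
```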
Genuinely asking, where is the 9800X?
Where the Intel i7/Ultra 7 is. AMD is copying Intel's SKU naming scheme.
Well, AMD aren't the sharpest tools going around. They're going for the efficiency crown in mobile/servers & doing literally the opposite for desktops? This race to be the next P4 or Dozer is dumb & old.
This indeed looks like "netburst" deja vu.
Why not simply release a 9800X?
Exactly. Instead of doing "4080 12GB" all over again, they should have just made another separate SKU with a higher power envelope, like they did before, where the 5700X used to be 65 W and the 5800X was 105 W. This is a fallacy. They already claimed they weren't going to push the non-X3D parts, since those would lose anyway, and now... AMD has almost completely become Intel, if not worse, and much sooner.
 
It makes sense. The 7700X trades blows with the 5800X3D in gaming, and because of that I chose to just upgrade to AM5 rather than stick with AM4. If the 9700X just lost to the 7800X3D in gaming without any fight, there'd be little incentive to buy it unless you're looking to do non-gaming tasks. I'm waiting for the 9800X3D to see if it'll be worth the upgrade.
 
Seems like erryone runs default limits these days?

The way to get these chips to perform is by using higher power limits. It helps for higher core boosts, and higher sustained clocks across a myriad of loads.

Old news here folks :)
 
Depends how much this really means in the real world (in terms of power and heat). I am all for extra performance as long as it doesn't give the chip too many major cooling issues. Will be interested to see reviews, especially if a BIOS update is required to update the chip, because we may see reviews all over the place.
 
Brand new 8 (P) core CPUs are enthusiast parts. Setting a new enthusiast-class part to 65 W by default, so all users building with it get that level of performance, seems short-sighted. 105 W seems a better fit, but why did AMD set it to 65 W to begin with? Do they really like this kind of press so much they just can't stop? As an RX 5600 XT owner, this last-second re-think is yet another in a long line of AMinDecisions.

Well, at least they get there... sometimes. And if you want to run efficiently, just click 65 W Eco mode if you're lazy, or test in your apps and find your personal sweet spot for power vs. performance.
 
Let's not forget the 7800X3D uses less power than the 7700X yet draws way more frames. It's safe to assume the 9800X3D will be the same TDP with higher fps than the 9700X. Not a fan of this large TDP increase though, the 65W thing was a great idea.
 
Not a fan of this large TDP increase though, the 65W thing was a great idea.
Agreed.

Nobody wants to read articles on how to fix thermal throttling, but articles with titles like "The one setting in the BIOS AMD doesn't want you to know about - 5-15% free performance gains with two clicks" could have been winners.
 
AMD's naming is their choice but in my opinion:
9700X should be at most 105W
9800X may be 120W.
 
Interesting. Speaking of which, why do people keep saying the 7800X3D uses only 40-50 W? Mine often goes to 70 and even 88 W, especially while loading shaders (in games) and video editing. Even during regular gaming, although that is indeed around 45-55 W.
 
There is no point buying a non-X3D processor when the X3D ones will be released not too long after. I'll keep my 7800X3D and wait for the 9-series X3D ones, hoping that there will be higher core count CPUs which can fully use the X3D cache, and not another 7950X3D which can't.
 
On something like Zen 3, 130 W on a 65 W part is no problem. On a 105 W part like my 5900X, 250 W is not a problem. On X3D it feels like I have an overpriced prebuilt :D
 
Interesting. Speaking of which, why do people keep saying the 7800X3D uses only 40-50 W? Mine often goes to 70 and even 88 W, especially while loading shaders (in games) and video editing. Even during regular gaming, although that is indeed around 45-55 W.
That 88 W you're seeing while loading shaders or video editing is a full-load / peak situation. The 40-55 W people are talking about is average gaming (that is, actually being in the game).
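If you want to check which camp your own chip falls into, a minimal sketch like the one below works on a CSV log of package power (for example, one exported from HWiNFO's logging). The column name and file name are assumptions; adjust them to match your log.

```python
# Average vs. peak CPU package power from a CSV log of sensor readings.
# Column and file names below are ASSUMPTIONS; rename to match your export.
import csv

COLUMN = "CPU Package Power [W]"  # assumed column header in the log

def summarize(path: str) -> None:
    watts = []
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                watts.append(float(row[COLUMN]))
            except (KeyError, ValueError, TypeError):
                continue  # skip malformed rows and footer lines
    if watts:
        print(f"samples: {len(watts)}")
        print(f"average: {sum(watts) / len(watts):.1f} W")
        print(f"peak:    {max(watts):.1f} W")

summarize("gaming_session.csv")  # hypothetical log file
```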
 
I'm more interested in getting some insight on whether the 9000 series will see any improvements in cooling performance. Thicker IHS / offset silicon die - has anything changed, or have additional methods been adopted to improve heat dissipation? Or is it the same ole?
 
Better not. The 3700X and 5700X both have the same great 65 W TDP.
Why not just add another 9800X? That's way more reasonable.
 
I'm more interested in getting some insight on whether the 9000 series will see any improvements in cooling performance. Thicker IHS / offset silicon die - has anything changed, or have additional methods been adopted to improve heat dissipation? Or is it the same ole?
I'm 99% certain it's the same. That's probably why, with the increased TDP, you'll need a beefy cooler.

Better not. The 3700X and 5700X both have the same great 65 W TDP.
Why not just add another 9800X? That's way more reasonable.
Probably because a 9700 non-X is coming (speculation).
 
Hey gaiz, I’ve unlocked my 14900K to no limits, set an infinite PL2, turned off every C-state, forced off core parking and am running a cmd line to get the Workstation Ultimate Performance plan in Windows, my temps aren’t great and it seems like the cooler can’t keep up, what do?
I have to assume you're trolling yourself at this point, but running a 14900K with unlocked limits will degrade the CPU and you'll get crashes in Windows/games.

Just wait for the 9700x3d lol
This.
 
AMD is stupid for continuing this 3D "cache grab". It is a design deficiency that AMD decided to address with a cost-added "fix" that the end user has to pay extra for. In 2024, the increased L3 cache amount should have been baked into the design of the CPU core from day one, and not deliberately separated just to make money from gamers. AMD should do this, and/or maybe just fix their awful memory controller with its low bandwidth and high latency.
 
AMD is stupid for continuing this 3D "cache grab". It is a design deficiency that AMD decided to address with a cost-added "fix" that the end user has to pay extra for. In 2024, the increased L3 cache amount should have been baked into the design of the CPU core from day one, and not deliberately separated just to make money from gamers. AMD should do this, and/or maybe just fix their awful memory controller with its low bandwidth and high latency.
The 3D cache limits thermal conductivity, and thus, maximum achievable clock speed and voltage, not to mention it only really benefits gaming, so why would AMD have only X3D CPUs? It wouldn't make sense.

Also, you can't bake 96 MB of cache into the CPU die without doubling its size and significantly increasing manufacturing costs. Making a small CPU die and another small cache die is a lot cheaper.
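A rough back-of-the-envelope on that die-area point, for anyone curious. The SRAM cell size and overhead factor below are ballpark assumptions, not published figures for any specific node, so treat the output as an order-of-magnitude estimate only.

```python
# Back-of-the-envelope area estimate for 96 MiB of on-die SRAM.
# Cell size and overhead factor are ballpark ASSUMPTIONS, not node specs.

extra_cache_mib = 96                  # the stacked L3 under discussion
bits = extra_cache_mib * 2**20 * 8    # MiB -> bits
cell_area_um2 = 0.021                 # assumed high-density SRAM bit cell area
overhead = 2.0                        # assumed factor for tags/periphery/routing

raw_mm2 = bits * cell_area_um2 / 1e6  # um^2 -> mm^2
total_mm2 = raw_mm2 * overhead

print(f"{extra_cache_mib} MiB of SRAM: ~{raw_mm2:.0f} mm^2 of raw cells, "
      f"~{total_mm2:.0f} mm^2 with overhead")
# Compare that against a CCD on the order of 70 mm^2 to gauge the relative cost.
```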
 
I don't understand why there is so much complaining about this in the comments. If you still want the lower consumption, you can just set the TDP back to a lower number in the BIOS.
For most people who are looking at this chip, the higher default limit is better. It's much easier to lower the limit than to increase it on most motherboards.
 
The 3D cache limits thermal conductivity, and thus, maximum achievable clock speed and voltage, not to mention it only really benefits gaming, so why would AMD have only X3D CPUs? It wouldn't make sense.

Also, you can't bake 96 MB of cache into the CPU die without doubling its size and significantly increasing manufacturing costs. Making a small CPU die and another small cache die is a lot cheaper.
I totally disagree, because they also use the extra cache in some of their server chips, so obviously something other than games benefits. I have also heard many owners of X3D chips saying that their system is more responsive than non-X3D chips, but I admit that could easily be placebo. More and more software will use this as time goes on; it's not 1980 anymore. And when you break it down, it's actually not much cache per core. You fall for the marketing trick of big numbers but forget it's shared between 8/16 cores. You also forget the fact that AMD cannot keep up with Intel without using the 3D cache band-aid.

I get you on the thermals, but AMD should have taken Zen 5 to 3 nm, stopped using the bolt-on cache, and simply added it to the die. It's time for AMD to stop playing money grabbers and just get this done. Then they can use this bolt-on X3D cache for an even higher-end range of server chips, which they can charge even crazier prices for. Zen 6 had better go down this route.
 