Thursday, June 27th 2024

AMD to Revise Specs of Ryzen 7 9700X to Increase TDP to 120W, to Beat 7800X3D

AMD's Ryzen 9000 "Granite Ridge" family of Socket AM5 desktop processors based on the "Zen 5" microarchitecture arrives in July, with four processor models in the lead—the 9950X 16-core, the 9900X 12-core, the 9700X 8-core, and the 9600X 6-core. AMD is building the CCDs (CPU core dies) of these processors on the slightly newer 4 nm foundry node, compared to the 5 nm node used for the Ryzen 7000 series "Raphael" processors based on "Zen 4," and has generally lowered the TDP values of all but the top 16-core part. The company is reportedly reconsidering these changes, particularly in the wake of its own statements that the 9000X series may not beat the 7000X3D series in gaming performance, which may have sullied the launch, particularly for gamers.

From the company's Computex 2024 announcement of the Ryzen 9000 series, the 9950X has the same 170 W TDP as its predecessor, the 7950X. The 9900X 12-core part, however, comes with a lower 120 W TDP compared to the 170 W of the 7900X. Things get interesting with the 8-core and 6-core parts. Both the 9700X 8-core and the 9600X 6-core chips come with 65 W TDP. The 9700X succeeds the 7700X, which came with a 105 W TDP, while the 9600X succeeds the 7600X, which also carried a 105 W TDP. The TDP and Package Power Tracking (PPT) values of an AMD processor are known to affect CPU boost frequency residence, particularly in some of the higher core-count SKUs. Wccftech reports that AMD is planning to revise the specifications of at least the Ryzen 7 9700X.
Apparently, the Ryzen 7 9700X will undergo a set of changes to its specifications that see the TDP and PPT values increase. The TDP will be raised to 120 W, which is higher than even the 105 W of the 7700X, and matches the 120 W of the 7800X3D. Given that the 9700X lacks 3D V-Cache, the increased power limits should vastly improve the boost frequency residence of this chip. At this point we don't know if the re-spec includes an increase in clock speeds.
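For reference, AMD's default PPT limit on Socket AM5 (as on AM4) is typically about 1.35× the rated TDP. Assuming that usual multiplier also applies to the revised spec, a quick sketch of what the corresponding socket power limits would look like:

```python
# Sketch of AMD's usual TDP-to-PPT relationship. The 1.35x multiplier is
# AMD's typical default on AM4/AM5, assumed here for illustration; the
# actual PPT of the revised 9700X has not been confirmed.

def ppt_from_tdp(tdp_watts: float) -> int:
    """Return the typical default Package Power Tracking limit for a TDP."""
    return round(tdp_watts * 1.35)

for tdp in (65, 105, 120, 170):
    print(f"{tdp} W TDP -> {ppt_from_tdp(tdp)} W PPT")
# 65 W TDP  ->  88 W PPT (the 9700X as announced)
# 105 W TDP -> 142 W PPT (the 7700X)
# 120 W TDP -> 162 W PPT (the re-specced 9700X, if the multiplier holds)
# 170 W TDP -> 230 W PPT (the 9950X / 7950X)
```

This is why a jump from 65 W to 120 W TDP matters so much for boost residence: the chip's effective socket power budget would rise from roughly 88 W to roughly 162 W.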

As for how AMD plans to go about this change in specs, given that a July launch means chips rated at 65 W may already have entered the supply chain, we honestly don't know, and the source article doesn't say. If we were to speculate, such an on-the-fly spec change could be deployed through motherboard BIOS updates that override the TDP and PPT values of the 9700X.

The idea behind the spec change, according to Wccftech, is to improve the gaming performance of the 9700X through clock speeds (boost residence) backed by increased power limits, so it gets closer to—or even beats—the Ryzen 7 7800X3D. A 9000X3D series (Zen 5 + 3D V-Cache) is very much on the cards, but we don't expect those chips to arrive before Q4 2024 at the earliest.
Source: Wccftech
Add your own comment

112 Comments on AMD to Revise Specs of Ryzen 7 9700X to Increase TDP to 120W, to Beat 7800X3D

#76
oxidized
fevgatosI really don't understand why people care. These CPUs are unlocked, you can configure them however you want. That's like caring about the out of the box brightness of your TV. Whatever


No, they really are not. To achieve the same performance as a Zen or a 14th Gen chip, they need substantially more cooling and power draw.
It's not as if you'd decrease your TV's brightness or refresh rate to keep it from being a house heater.
Posted on Reply
#77
harm9963
Got my 5950X at the get-go and never had issues. At stock it idles at 18 W; with RAM at 1.45 V it jumps to 33 W, on an ASUS Dark Hero X570 with DOCP, 2.94 V, and the ASUS water preset in BIOS.
Posted on Reply
#78
Godrilla
JohHAMD's naming is their choice but in my opinion:
9700X should be at most 105W
9800X may be 120W.
Yeah, when the 65 W TDP for the 9700X dropped, I asked: where is the middle ground? I got chewed up for saying that mobile-level efficiency on a desktop CPU doesn't make sense to me with an unlocked multiplier; that's what the non-X CPUs are for if you want maximum efficiency. So now I'm asking again: where is the middle ground? Hopefully the consumer can choose between maximum efficiency and full-blown overclocking. Imagine if AMD was sandbagging the specs, only for the overclocking community to discover that it has more performance in the tank. I really hope that last part is true.
starfalsInteresting. Speaking of which, why people keep saying 7800X3D uses only 40-50W? Mine often goes to 70 and even 88. Especially while shader loading (in games) and video editing. Even during regular gaming, altho that is indeed around 45-55.
Still significantly lower than the 7700X, although I would argue the 7800X3D is primarily a gaming chip, and we aren't compiling shaders for a significant share of the time spent with it.
Posted on Reply
#79
JustBenching
oxidizedIt's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
You'd decrease it for the same reason you'd turn down your TV or your AC: you just don't like the way it's configured at stock.
Posted on Reply
#80
JWNoctis
stimpy88I totally disagree, because they also use the extra cache in some of their server chips, so obviously things other than games benefit. I have also heard many owners of X3D chips say that their systems are more responsive than non-X3D chips, but I admit that could easily be placebo. More and more software will use this as time goes on, it's not 1980 anymore, and when you break it down, it's actually not much cache per core. You fall for the marketing trick of big numbers but forget it's shared between 8/16 cores. You also forget that AMD cannot keep up with Intel without the 3D cache band-aid.

I get you on the thermals, but AMD should have taken Zen 5 to 3 nm, stopped using the bolt-on cache, and simply added it to the die. It's time for AMD to stop playing money grabbers and just get this done. Then they could use the bolt-on X3D cache for an even higher-end range of server chips, which they can charge even crazier prices for. Zen 6 had better go down this route.
A better question might be asked of why even the next generation of AMD's mobile processors did not have even 32MB of L3 cache.

There's probably a limit in terms of scaling for the SRAM cells and access lines used in these caches. Regular 2D cache sizes have not increased much for years. Penryn had 6 MB L2 at 45 nm, Skylake had 6-8 MB L3 at 14 nm, and even current higher-end Intel and AMD non-X3D offerings have barely more than 30 MB of L3 accessible per core. Arguably Penryn was the X3D of its day, with over 50% of the chip area being that cache, but I think the point still stands.
Posted on Reply
#81
The Shield
Onasi...but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming like that was a warcrime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully, they would leave in the old PPT settings as a pre-set option a la Eco mode.
People are only asking for the X3D models to be launched at the same time as the regular ones, which would be the obvious strategy if the marketing nonsense stayed out of the door.
Posted on Reply
#82
chrcoluk
So AMD is going a bit in the more-power-for-more-performance direction now.
Posted on Reply
#83
Lew Zealand
stimpy88I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo.
I have not noticed this. Went from an OC'd 5600 to a 5800X3D, and at the desktop it's the same experience, but in games the 1% lows in CPU-limited situations are very nicely improved. Even going from an OC'd 2600 to a 5700X3D, the desktop experience was only subtly better, as any 4+ core CPU design from the last 10 years does more than a competent job managing Windows. While the desktop performance differences between my Haswell i7-4790 and Zen 4 Ryzen 7840 are noticeable, they're still in the same class of UI experience with 16 GB and a decent SSD.
Posted on Reply
#84
Super Firm Tofu
GodrillaHence why they have the non X locked cpus for maximum efficiency in that regard.
The standard (non-X) chips aren't locked. They just have lower default clocks and power limit. You can enable PBO and crank up the PPT.
Posted on Reply
#85
SL2
JWNoctisA better question might be asked of why even the next generation of AMD's mobile processors did not have even 32MB of L3 cache.
Dragon Range does have 32 MB (besides the optional 3D V-Cache). It's a mobile variant of Raphael, 6 to 16 cores, and is best suited for laptops with high-end GPUs.

You could argue that it's not mobile, but it really is.
Posted on Reply
#86
JWNoctis
SL2Dragon Range does have 32 MB (besides the optional 3D V-Cache). It's a mobile variant of Raphael, 6 to 16 cores, and is best suited for laptops with high-end GPUs.

You could argue that it's not mobile, but it really is.
My original point still stands, though.

If only they'd get X3D on mobile APUs. Though I suppose they would have, if they could.
Posted on Reply
#87
SL2
JWNoctisIf only they'd get X3D on mobile APUs. Though I suppose they would have, if they could.
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Posted on Reply
#88
ARF
OnasiAh, the good old "crank the power up to win in benchmarks" move. I would have thought AMD would be smarter than this, but apparently not, and they've resorted to cribbing from Intel's playbook. A mistake, IMO, but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming like that was a warcrime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully, they would leave in the old PPT settings as a pre-set option a la Eco mode.
the 9950X 16-core, the 9900X 12-core, the 9700X 8-core, and the 9600X 6-core
This is quite bad news, both for consumers and for AMD, which will be forced to put very low price tags on these if it wants them to even barely move off the shelves.
If you ask me, I see no reason to buy anything from this generation; the stagnation is simply too pronounced, and the core-count deficit too strong.

AMD definitely needs a more innovative approach if it doesn't want to lose market share to Intel.

Ryzen 9 9950X 16-core
Ryzen 9 9900X 16-core
Ryzen 7 9700X 12-core
Ryzen 5 9600X 10-core


This or DOA.
Posted on Reply
#89
AusWolf
oxidizedIt's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
It's not a question of being a house heater. You don't need a bigger cooler to use your TV at max brightness.

You can lower your power limit to suit your cooling, or you can buy a bigger cooler. Or you can leave it as it is and accept that it might run into Tjmax occasionally. It's a matter of choice.
Posted on Reply
#90
JWNoctis
SL2I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Acknowledged. It really only benefits applications working with datasets that fit within the enlarged cache but not the regular one, and that would otherwise be bottlenecked by RAM bandwidth or latency, which very often means games. I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.

A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
Posted on Reply
#91
SL2
JWNoctisI was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.
Yeah, but I'd like to see some benchmarks where the added cache makes sense. I guess maybe it does with a 4060, but probably not with a 1630 lol

The reason I'm asking is that the universal recommendation of throwing a 7800X3D at anything doesn't always seem worthwhile.
JWNoctisA shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
The memory bus width is doubled and RAM speed is much higher for Strix Point; I guess that'll have to do for now. Also, it will benefit in many benchmarks.
Posted on Reply
#92
AusWolf
SL2I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
The extra cache is pointless at a hard GPU limit, or when 1% and 0.1% low FPS doesn't matter. I suppose it'll be useful for GPU upgrades - your system might last a bit longer before you have to swap your CPU.
Posted on Reply
#93
Lew Zealand
JWNoctisAcknowledged. It really only benefits applications working with datasets that fit within the enlarged cache but not the regular one, and that would otherwise be bottlenecked by RAM bandwidth or latency, which very often means games. I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.

A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
An L4 cache shared with the iGPU can do a lot of good.

I started PC gaming with a NUC5i7: 384 cores and no L4 cache (Iris 6100). Later I upgraded to a NUC7i7: 384 cores and 64 MB of L4 cache (Iris Plus 650). 49% faster in Time Spy GFX, 73% faster in Fire Strike GFX, with similar improvements in all games. The GPU cores had not changed substantially, judging by scores from other parts with the same number of cores and cache, and the system memory only went from 1866 to 2133 MHz between the two models, so that's not a huge difference.

Shared L4 for iGPU gaming could be a huge help.
Posted on Reply
#95
JustBenching
SL2I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Mainly depends on your FPS target. If you target 200 FPS, which means you are going to lower settings to get there even with a mid-range card, then the X3D might make some difference. For most people, though, it's just an overly expensive CPU that offers no benefit, because they don't have a 4090 and they don't play at 1080p low. A 7600 for half the price is usually the better choice.
Posted on Reply
#97
SL2
ARFEvery CPU down to Core i3 12th gen is good enough if you are playing at 2160p.
A better CPU gives you more headroom to increase FPS by lowering quality settings, which becomes more important if you have something slower than a 4090.
AusWolfI suppose it'll be useful for GPU upgrades - your system might last a bit longer before you have to swap your CPU.
I agree when it comes to desktop, but the post you quoted was mainly about mobile APU's where you can't change the GPU anyway.
Posted on Reply
#98
dont whant to set it"'
Alternative scenario: keep the TDP at 65 W and let PBO do some heavy lifting for a change / as an overclocker's toy.
le:the 1,two,three-4 by a hypothetical stretch core boosting sure is similar.
Posted on Reply
#99
stimpy88
dont whant to set it'Alternative scenario: keep the TDP at 65 W and let PBO do some heavy lifting for a change / as an overclocker's toy.
le:the 1,two,three-4 by a hypothetical stretch core boosting sure is similar.
Reviewers only use out-of-the-box settings, and this CPU looks bad because of that.

As I have been saying since Zen 4: AMD needs to stop this 3D cache grab and incorporate the extra L3 directly into the die.
This situation has only happened because of this greed.

AMD has outdone Intel at its own stupid game. Intel is going to give them a bloody nose, and they won't have a competing product for another six months, and then the cost of those parts will become an issue.

Zen 6 needs to bring an end to this greedy farce, or this will just happen again.
Posted on Reply
#100
A Computer Guy
TomgangJust makes me happy that i chose to go with zen 3.
The 5950X was just an amazing chip for its time. Sure, other CPUs are faster, but now you have to deal with big.LITTLE/hybrid designs, X3D, or much higher TDP 16-core desktop parts. It's been a dream working with the well-roundedness of a good X570 board offering x16 (or x8/x8, or x4/x4/x4/x4) + x4 + x1 with a 5950X, lots of USB 3 I/O, plenty of SATA, NVMe, and 128 GB of ECC. It's just aging incredibly well.
Posted on Reply