
AMD to Revise Specs of Ryzen 7 9700X to Increase TDP to 120W, to Beat 7800X3D

Joined
Jan 14, 2019
Messages
10,506 (5.26/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
I totally disagree because they also use the extra cache in some of their server chips, so obviously something other than games benefits.
Servers don't need high clock speeds, but general use / productivity home PCs do.

I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo.
I have a 7700X and a 7800X3D as well. It is placebo. Both chips are equally responsive in general use; the 7700X is maybe a tad more responsive due to its higher clock speed, although by an insignificant margin.

But more and more software will use this as time goes on, it's not 1980 anymore, and when you break it down, it's actually not much cache per core. You fall for the marketing trick of big numbers but forget it's shared between 8/16 cores. You also forget the fact that AMD cannot keep up with Intel without using the 3D cache band-aid.
Cache per core? All of the L3 can be used by any core, so technically, you still have 96 MB in a single-core workload.
It's not a marketing trick. You either have high voltage and high clock speeds, or more cache. There's no other way around it.
Who said AMD can't keep up? Who said a few percent difference matters? Are we even talking about the same topic? :wtf:

I get you on the thermals, but AMD should have taken Zen5 to 3nm and stopped using the bolt-on cache, and simply added it to the die. It's time for AMD to stop playing money grabbers and just get this done. Then they can use this bolt-on x3D cache for an even higher-end range of server chips, which they can charge even more crazy prices for. Zen 6 better go down this route.
It doesn't matter what node your chip is on. If you add +64 MB of cache, you basically double the die size, which means far fewer chips per wafer and a higher chance of a defect per die, and thus a significantly more expensive end product. Not to mention that in 2D you have longer interconnects, which adds latency; you probably also need a larger socket, and so on. You can't just bolt as much cache onto your CPU as you want.
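To put rough numbers on the "double the die, pay for it in yield" point, here's a quick back-of-the-envelope sketch. The die areas and defect density are illustrative assumptions, not AMD's actual figures; it uses the common dies-per-wafer approximation and a simple Poisson yield model.

```python
# Toy yield calculation: how doubling die area hits good dies per wafer.
# All numbers below are illustrative assumptions, not real AMD/TSMC figures.
import math

WAFER_DIAMETER_MM = 300        # standard 300 mm wafer
DEFECT_DENSITY_PER_CM2 = 0.1   # assumed defect density

def dies_per_wafer(die_area_mm2: float, diameter_mm: float = WAFER_DIAMETER_MM) -> int:
    """Common approximation: gross wafer area divided by die area, minus edge losses."""
    r = diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float = DEFECT_DENSITY_PER_CM2) -> float:
    """Poisson yield model: probability that a die has zero defects."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

for area in (70, 140):  # hypothetical CCD vs. the same CCD with the extra L3 folded in
    n = dies_per_wafer(area)
    y = poisson_yield(area)
    print(f"{area} mm^2: ~{n} dies/wafer, ~{y:.1%} defect-free, ~{int(n * y)} good dies")
```

With these made-up numbers, doubling the die area more than halves the good dies per wafer, which is exactly the cost the stacked cache die avoids.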
 
Joined
Feb 17, 2017
Messages
854 (0.32/day)
Location
Italy
Processor i7 2600K
Motherboard Asus P8Z68-V PRO/Gen 3
Cooling ZeroTherm FZ120
Memory G.Skill Ripjaws 4x4GB DDR3
Video Card(s) MSI GTX 1060 6G Gaming X
Storage Samsung 830 Pro 256GB + WD Caviar Blue 1TB
Display(s) Samsung PX2370 + Acer AL1717
Case Antec 1200 v1
Audio Device(s) aune x1s
Power Supply Enermax Modu87+ 800W
Mouse Logitech G403
Keyboard Qpad MK80
I really don't understand why people care. These CPUs are unlocked, you can configure them however you want. That's like caring about the out of the box brightness of your TV. Whatever


No, they really are not. In order to achieve the same performance as a Zen or 14th gen chip, they need substantially more cooling and power draw.

It's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
 
Joined
Jul 11, 2015
Messages
659 (0.20/day)
System Name Harm's Rig's
Processor 5950X /2700x / AMD 8370e 4500
Motherboard ASUS DARK HERO / ASRock B550 Phantom Gaming 4
Cooling ArcticLiquidFreezer III420 Push/Pull P14 max/Noctua NF-A14 i / Enermax LIQMAX III ARGB 360 AIO
Memory Patriot Viper Steel DDR4 16GB (4x 8GB) 4000M TRIDENT Z F-43600V15D-16GTZ /G.SKILL DDR4
Video Card(s) ZOTAC AMP EXTREME AIRO 4090 / 1080 Ti /290X CFX
Storage SAMSUNG 980 PRO SSD 1TB/ WD DARK 770 2TB , Sabrent NVMe 512GB / 1 SSD 250GB / 1 HHD 3 TB
Display(s) Thermal Grizzly WireView / TCL 646 55 TV / 50 Xfinity Hisense A6 XUMO TV
Case TT 37 VIEW 200MM'S-ARTIC P14MAX / NZXT Tempest custom
Audio Device(s) Sharp Aquos
Power Supply FSP Hydro PTM PRO 1200W ATX 3.0 PCI-E GEN-5 80 Plus Platinum - EVGA 1300G2/Corsair w750
Mouse G502
Keyboard G413
Got my 5950X at the get-go, never had issues. At stock it idles at 18 watts; with RAM at 1.45 V it jumps to 33 watts, on the ASUS Dark Hero X570 with DOCP, 2.94 V, and the ASUS water preset in BIOS.
 
Joined
Sep 4, 2022
Messages
223 (0.33/day)
AMD's naming is their choice but in my opinion:
9700X should be at most 105W
9800X may be 120W.
Yeah, when the 65 W TDP for the 9700X dropped, I asked where the middle ground was. I got chewed up for saying that mobile-level efficiency on a desktop CPU with an unlocked multiplier doesn't make sense to me; that's what the locked non-X CPUs are for if maximum efficiency is the goal. Now I'm asking again: where is the middle ground? Hopefully the consumer can choose between maximum efficiency and full-blown overclocking. Imagine if AMD was sandbagging the specs only for the overclocking community to discover that there's more performance left in the tank. I really hope that last part is true.

Interesting. Speaking of which, why do people keep saying the 7800X3D uses only 40-50 W? Mine often goes to 70 and even 88 W, especially during shader loading (in games) and video editing. Even during regular gaming, although that is indeed around 45-55 W.
Still significantly lower than the 7700X, although I would argue the 7800X3D is primarily a gaming chip, and compiling shaders isn't a significant share of the time spent with it.
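For anyone on Linux (like the Ubuntu system above) who wants to sanity-check these package power readings, here's a minimal sketch that simply prints whatever hwmon power sensors the loaded drivers expose. It assumes some driver - for example the out-of-tree zenpower module - publishes a power*_input file in microwatts per the kernel hwmon sysfs convention; a stock kernel with only k10temp may report nothing.

```python
# Print every hwmon power sensor currently exposed, converted to watts.
# Assumes a loaded driver (e.g. the out-of-tree zenpower module) provides
# power*_input files in microwatts; stock k10temp may not expose power at all.
import glob
import os
import time

def find_power_sensors():
    """Return (driver name, sysfs path) pairs for every hwmon power*_input file."""
    sensors = []
    for path in sorted(glob.glob("/sys/class/hwmon/hwmon*/power*_input")):
        name_file = os.path.join(os.path.dirname(path), "name")
        try:
            with open(name_file) as f:
                name = f.read().strip()
        except OSError:
            name = "unknown"
        sensors.append((name, path))
    return sensors

if __name__ == "__main__":
    sensors = find_power_sensors()
    if not sensors:
        print("No hwmon power sensors found - the loaded drivers may not report power.")
    for _ in range(5):  # a few one-second samples
        for name, path in sensors:
            with open(path) as f:
                print(f"{name}: {int(f.read().strip()) / 1e6:.1f} W")
        time.sleep(1)
```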
 
Joined
Jun 14, 2020
Messages
2,910 (1.96/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
It's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
You'd decrease it for the same reason you'd turn down your TV or your AC: you just don't like the way it's configured at stock.
 
Joined
May 22, 2024
Messages
172 (3.82/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
I totally disagree because they also use the extra cache in some of their server chips, so obviously something other than games benefits. I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo. But more and more software will use this as time goes on, it's not 1980 anymore, and when you break it down, it's actually not much cache per core. You fall for the marketing trick of big numbers but forget it's shared between 8/16 cores. You also forget the fact that AMD cannot keep up with Intel without using the 3D cache band-aid.

I get you on the thermals, but AMD should have taken Zen5 to 3nm and stopped using the bolt-on cache, and simply added it to the die. It's time for AMD to stop playing money grabbers and just get this done. Then they can use this bolt-on x3D cache for an even higher-end range of server chips, which they can charge even more crazy prices for. Zen 6 better go down this route.
A better question might be why even the next generation of AMD's mobile processors doesn't have at least 32 MB of L3 cache.

There's probably a limit in terms of scaling for the SRAM cells and access lines used in these caches. Regular 2D cache sizes haven't increased much in years: Penryn had 6 MB of L2 at 45 nm, Skylake had 6-8 MB of L3 at 14 nm, and even current higher-end Intel and AMD non-X3D offerings have barely more than 30 MB of L3 accessible per core. Arguably, Penryn was the X3D of its day, with more than 50% of the chip area being that cache, but I think the point still stands.
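On the "how much actually fits in L3" point, a crude pointer-chase makes the cliff visible. This is only a sketch: Python's interpreter overhead dominates the absolute numbers, so only the relative jump matters once the working set outgrows the L3, and the 96 MB figure is just the 7800X3D value mentioned earlier in the thread - adjust it for whatever chip you run it on.

```python
# Crude pointer-chase: follow one big random cycle through an int64 array and
# time the dependent loads. Interpreter overhead is large but roughly constant,
# so compare the sizes to each other, not to published latency figures.
import time
import numpy as np

def ns_per_access(num_elements: int, steps: int = 1_000_000) -> float:
    rng = np.random.default_rng(0)
    order = rng.permutation(num_elements)
    nxt = np.empty(num_elements, dtype=np.int64)
    nxt[order[:-1]] = order[1:]   # link the elements into a single random cycle
    nxt[order[-1]] = order[0]     # so the chase eventually touches the whole array
    i = 0
    start = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return (time.perf_counter() - start) / steps * 1e9

for mb in (8, 32, 96, 256):       # working sets around an assumed 96 MB L3
    n = mb * 1024 * 1024 // 8     # int64 = 8 bytes per element
    print(f"{mb:>3} MB working set: ~{ns_per_access(n):.0f} ns per access")
```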
 
Last edited:

The Shield

New Member
Joined
Jan 2, 2024
Messages
18 (0.10/day)
...but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming as if that were a war crime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully they'll leave the old PPT settings in as a preset option, a la Eco mode.
People are only asking for the X3D models to be launched at the same time as the normal ones, which would be the obvious strategy IF marketing bullshit stayed out of the door.
 
Joined
Feb 1, 2019
Messages
2,860 (1.44/day)
Location
UK, Leicester
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 3080 RTX FE 10G
Storage 1TB 980 PRO (OS, games), 2TB SN850X (games), 2TB DC P4600 (work), 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Asus Xonar D2X
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
So more power, more performance: AMD is going in this direction a bit now.
 
Joined
Jul 20, 2020
Messages
921 (0.64/day)
System Name Gamey #1 / #2
Processor Ryzen 7 5800X3D / Core i7-9700F
Motherboard Asrock B450M P4 / Asrock B360M P4
Cooling IDCool SE-226-XT / CM Hyper 212
Memory 32GB 3200 CL16 / 32GB 2666 CL14
Video Card(s) PC 6800 XT / Soyo RTX 2060 Super
Storage 4TB Team MP34 / 512G Tosh RD400+2TB WD3Dblu
Display(s) LG 32GK650F 1440p 144Hz VA
Case Corsair 4000Air / CM N200
Audio Device(s) Dragonfly Black
Power Supply EVGA 650 G3 / Corsair CX550M
Mouse JSCO JNL-101k Noiseless
Keyboard Steelseries Apex 3 TKL
Software Win 10, Throttlestop
I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo.

I have not noticed this. Went from an OCd 5600 to a 5800X3D and at the desktop it's the same experience, but in games the 1% lows in CPU-limited situations are very nicely improved. Even going from an OCd 2600 to a 5700X3D, the desktop experience was only subtly better, as any 4+ core CPU design from the last 10 years does more than a competent job managing Windows. While the desktop performance differences of my Haswell i7-4790 and Zen 4 Ryzen 7840 are noticeable, they're still in the same class of UI experience with 16 GB and a decent SSD.
 

SL2

Joined
Jan 27, 2006
Messages
1,996 (0.30/day)
A better question might be why even the next generation of AMD's mobile processors doesn't have at least 32 MB of L3 cache.
Dragon Range does have 32 MB (besides the optional 3D V-Cache). It's a mobile variant of Raphael, 6 to 16 cores, and is best suited for laptops with high-end GPUs.

You could argue that it's not mobile, but it really is.
 
Joined
May 22, 2024
Messages
172 (3.82/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
Dragon Range does have 32 MB (besides the optional 3D V-Cache). It's a mobile variant of Raphael, 6 to 16 cores, and is best suited for laptops with high-end GPUs.

You could argue that it's not mobile, but it really is.
My original point still stands, though.

If only they'd get X3D on mobile APUs. Though I suppose they would have, if they could.
 

SL2

Joined
Jan 27, 2006
Messages
1,996 (0.30/day)
If only they'd get X3D on mobile APUs. Though I suppose they would have, if they could.
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
 
Last edited:

ARF

Joined
Jan 28, 2020
Messages
4,260 (2.63/day)
Location
Ex-usa | slava the trolls
Ah, the good old "crank the power up to win in benchmarks" move. I would have thought AMD would be smarter than this, but apparently not, and they've resorted to cribbing from Intel's playbook. A mistake, IMO, but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming as if that were a war crime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully they'll leave the old PPT settings in as a preset option, a la Eco mode.

the 9950X 16-core, the 9900X 12-core, the 9700X 8-core, and the 9600X 6-core

This is quite bad news, both for consumers and for AMD, which will be forced to put very low price tags on these if it wants them to move off the shelves at all.
If you ask me, I see no incentive or reason to buy anything from this generation - the stagnation is simply too pronounced, and the core count deficit too severe.

AMD definitely needs a more innovative approach if they don't want to lose market share to Intel.

Ryzen 9 9950X 16-core
Ryzen 9 9900X 16-core
Ryzen 7 9700X 12-core
Ryzen 5 9600X 10-core


This or DOA.
 
Joined
Jan 14, 2019
Messages
10,506 (5.26/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
It's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
It's not a question of being a house heater. You don't need a bigger cooler to use your TV at max brightness.

You can lower your power limit to suit your cooling, or you can buy a bigger cooler. Or you can leave it as it is and accept that it might run into Tjmax occasionally. It's a matter of choice.
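One thing worth keeping in mind with this whole 65 W vs. 120 W discussion: the headline number is TDP, while the limit you actually raise or lower is PPT (socket power). On AM4/AM5 the default PPT is commonly cited as roughly 1.35x the advertised TDP; here's a quick sketch using the usually quoted pairs - treat them as reference values rather than official spec.

```python
# Commonly cited default TDP -> PPT pairs for AM5 (reference values, not official spec),
# plus the rough 1.35x rule of thumb for anything in between.
TDP_TO_PPT_W = {65: 88, 105: 142, 120: 162, 170: 230}

def approx_ppt(tdp_w: float) -> int:
    """Estimate the default package power limit (PPT) from an advertised TDP."""
    return round(tdp_w * 1.35)

for tdp, ppt in TDP_TO_PPT_W.items():
    print(f"TDP {tdp:>3} W -> PPT {ppt} W (rule of thumb: ~{approx_ppt(tdp)} W)")
```

So a 120 W TDP 9700X would mean a default socket limit of around 162 W, up from the 88 W it gets at 65 W TDP.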
 
Joined
May 22, 2024
Messages
172 (3.82/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-48 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Audio Device(s) Sound Blaster AE-7
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Acknowledged. It really only benefits applications working with datasets that are too big for the regular cache but can still fit in the expanded one, and that would otherwise be bottlenecked by RAM bandwidth or latency - very often games. I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.

A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
 

SL2

Joined
Jan 27, 2006
Messages
1,996 (0.30/day)
I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.
Yeah, but I'd like to see some benchmarks where the added cache makes sense. I guess maybe it does with a 4060, but probably not with a 1630 lol

The reason I'm asking is that the universal recommendation of throwing a 7800X3D at anything doesn't always seem worthwhile.
A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
The memory bus width is doubled and RAM speed is much higher for Strix Point; I guess that'll have to do for now. Also, it will benefit in many benchmarks.
 
Joined
Jan 14, 2019
Messages
10,506 (5.26/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
The extra cache is pointless at a hard GPU limit, or when 1% and 0.1% low FPS doesn't matter. I suppose it'll be useful for GPU upgrades - your system might last a bit longer before you have to swap your CPU.
 
Joined
Jul 20, 2020
Messages
921 (0.64/day)
System Name Gamey #1 / #2
Processor Ryzen 7 5800X3D / Core i7-9700F
Motherboard Asrock B450M P4 / Asrock B360M P4
Cooling IDCool SE-226-XT / CM Hyper 212
Memory 32GB 3200 CL16 / 32GB 2666 CL14
Video Card(s) PC 6800 XT / Soyo RTX 2060 Super
Storage 4TB Team MP34 / 512G Tosh RD400+2TB WD3Dblu
Display(s) LG 32GK650F 1440p 144Hz VA
Case Corsair 4000Air / CM N200
Audio Device(s) Dragonfly Black
Power Supply EVGA 650 G3 / Corsair CX550M
Mouse JSCO JNL-101k Noiseless
Keyboard Steelseries Apex 3 TKL
Software Win 10, Throttlestop
Acknowledged. It really only benefits applications working with datasets that are too big for the regular cache but can still fit in the expanded one, and that would otherwise be bottlenecked by RAM bandwidth or latency - very often games. I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.

A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.

An L4 cache shared with the iGPU can do a lot of good.

I started PC gaming with a NUC5i7: 384 cores and no L4 cache (Iris 6100). Later I upgraded to a NUC7i7: 384 cores and 64 MB of L4 cache (Iris Plus 650). 49% faster in Time Spy GFX, 73% faster in Fire Strike GFX, with similar improvements noticed in all games. The GPU cores had not changed substantially (compare scores from other parts with the same number of cores and cache), and system memory only went from 1866 to 2133 MHz between the two models, so that's not a huge difference.

Shared L4 for iGPU gaming could be a huge help.
 

ARF

Joined
Jan 28, 2020
Messages
4,260 (2.63/day)
Location
Ex-usa | slava the trolls
Every CPU down to Core i3 12th gen is good enough if you are playing at 2160p.


 
Joined
Jun 14, 2020
Messages
2,910 (1.96/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Mainly depends on your FPS target. If you target 200 FPS - which means you are going to lower settings to get there even with a mid-range card - then the X3D might make some difference. For most people, though, it's just an overly expensive CPU that offers no benefit because they don't have a 4090 and they don't play at 1080p low. A 7600 for half the price is usually the better choice.
 
Last edited:
Joined
Apr 19, 2018
Messages
1,105 (0.49/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
Every CPU down to Core i3 12th gen is good enough if you are playing at 2160p.

Until the 50x0 series is launched...
 

SL2

Joined
Jan 27, 2006
Messages
1,996 (0.30/day)
Every CPU down to Core i3 12th gen is good enough if you are playing at 2160p.
A better CPU gives you more headroom to increase FPS by lowering quality settings, which becomes more important if you have something slower than a 4090.

I suppose it'll be useful for GPU upgrades - your system might last a bit longer before you have to swap your CPU.
I agree when it comes to desktop, but the post you quoted was mainly about mobile APUs, where you can't change the GPU anyway.
 
Joined
Oct 8, 2015
Messages
729 (0.23/day)
Location
Earth's Troposphere
System Name 3 "rigs"-gaming/spare pc/cruncher
Processor R7-5800X3D/i7-7700K/R9-7950X
Motherboard Asus ROG Crosshair VI Extreme/Asus Ranger Z170/Asus ROG Crosshair X670E-GENE
Cooling Bitspower monoblock ,custom open loop,both passive and active/air tower cooler/air tower cooler
Memory 32GB DDR4/32GB DDR4/64GB DDR5
Video Card(s) Gigabyte RX6900XT Alphacooled/AMD RX5700XT 50th Aniv./SOC(onboard)
Storage mix of sata ssds/m.2 ssds/mix of sata ssds+an m.2 ssd
Display(s) Dell UltraSharp U2410 , HP 24x
Case mb box/Silverstone Raven RV-05/CoolerMaster Q300L
Audio Device(s) onboard/onboard/onboard
Power Supply 3 Seasonics, a DeltaElectronics, a FractalDesing
Mouse various/various/various
Keyboard various wired and wireless
VR HMD -
Software W10.someting or another,all 3
Alternative scenario: keep the TDP at 65 W and let PBO do some heavy lifting for a change / be the OCers' toy.
Late edit: one-, two-, three- or (by a hypothetical stretch) four-core boosting sure is similar.
 
Joined
Apr 19, 2018
Messages
1,105 (0.49/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
Alternative scenario: keep the TDP at 65 W and let PBO do some heavy lifting for a change / be the OCers' toy.
Late edit: one-, two-, three- or (by a hypothetical stretch) four-core boosting sure is similar.
Reviewers only use OOTB settings. And this CPU looks bad because of that.

As I have been saying since Zen 4 - AMD needs to stop this 3D cache grab and incorporate the extra L3 directly into the die.
This situation has only happened because of this greed.

AMD has outdone AMD at its own stupid game. Intel is going to give them a bloody nose, and they don't have a product that competes for another 6 months, and then the cost of those parts will become an issue.

Zen 6 needs to bring an end to this greedy farce, or this will just happen again.
 