
AMD Ryzen 7 9700X

OK then...
There are also the 8700G/8600G/8500G available for AM5 as well. You have the 7000X3D chips and then the entire 7000 X and non-X lineup to choose from. There are also plenty of AM5 motherboards in every price and format bracket. I am also sure there are EPYC chips for AM5 as well.
 
They really needed a new I/O die for Zen 5 with a faster IF. There's easily a double-digit performance gain to be had once they fix the back end and I/O. Here's hoping X870 can do DDR5-8000; I'm hearing there will be EXPO kits out for plug-and-play 8000 MT/s, but there will still be bottlenecks because the IF is stuck at 2000 MHz.
From what I've heard, that is what Zen 6 will bring, along with fixing the problems with dual CCDs. Strix Halo will get an all-new 3 nm I/O die, and that is why it's a year late.
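Rough back-of-the-envelope numbers on why the fabric is the limit. This is a sketch with my own assumptions (dual-channel DDR5, a 1:2 UCLK:MCLK ratio at these speeds, and the commonly cited 32 bytes/cycle IF read width per CCD), not anything from the review:

```python
# Why FCLK ~2000 MHz bottlenecks DDR5-8000 (illustrative only).
ddr5_mt_s = 8000                    # DDR5-8000 data rate (MT/s)
mclk_mhz = ddr5_mt_s / 2            # memory clock is half the data rate
uclk_mhz = mclk_mhz / 2             # controller typically runs 1:2 up here (assumption)
fclk_mhz = 2000                     # Infinity Fabric clock

dram_bw_gbs = ddr5_mt_s * 2 * 8 / 1000    # 2 channels x 8 bytes per transfer
if_read_bw_gbs = fclk_mhz * 32 / 1000     # assumed 32 B/cycle read per CCD

print(f"MCLK {mclk_mhz:.0f} MHz, UCLK {uclk_mhz:.0f} MHz, FCLK {fclk_mhz} MHz")
print(f"DRAM: {dram_bw_gbs:.0f} GB/s vs IF read per CCD: {if_read_bw_gbs:.0f} GB/s")
# ~128 GB/s of DRAM sitting behind a ~64 GB/s fabric read path per CCD
```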
 
There are also the 8700G/8600G/8500G available for AM5 as well. You have the 7000X3D chips and then the entire 7000 X and non-X lineup to choose from. There are also plenty of AM5 motherboards in every price and format bracket. I am also sure there are EPYC chips for AM5 as well.
I wouldn't say it's "plenty"

In Canada at least, the only widely available ~$100 CAD AM5 board is this MSI A620 board.

Everything else is $150+
 
I think it was a bad choice to drop Far Cry 6 for Starfield in your CPU review benchmarks. Far Cry 6 is well known for being CPU-limited, thus perfect for a CPU comparison, while Starfield doesn't care about the CPU and they all perform within a fraction of an FPS of each other, until you get to outdated ones like Zen 2 and Intel 11th gen.
 
@W1zzard AnandTech's Strix Point review showed core-to-core latency regressions for SMT threads. Can you do a quick test, just one game at one resolution, with SMT off for both the 9700X and 7700, please?
 
Far Cry 6 is well known for being CPU-limited, thus perfect for a CPU comparison
Far Cry 6 hogs the memory in a very specific way, which makes it very memory-limited. But it's getting old, and I'd rather have something newer that people can relate to. The Starfield engine is terrible in its own way, though. If they come out with a new game, I'll consider it of course for the Fall 2024 or Spring 2025 rebench.
 
"AMD's Ryzen 7 7700X is currently selling for $290, which is a pretty attractive offer, too. Not sure if it's worth saving another $10 to buy the 7700 non-X."

Oh I would say that it totally is, especially for a first build, because the R7-7700 comes with a Wraith Prism cooler. I can say from personal experience that the Wraith Prism is not only very effective, it's downright gorgeous. It makes your CPU look like a miniature nuclear reactor with fully programmable RGB effects. I've been cooling my R7-5800X3D with a Wraith Prism for almost two years now without any issues whatsoever, and I love how it looks.

The R7-7700 may only be $10 cheaper than the R7-7700X but the performance difference is nigh imperceptible and the value added by that beautiful Wraith Prism makes the R7-7700 more attractive to anyone who doesn't believe in needlessly wasting their money on an AIO for a non-Intel CPU.
 
Far Cry 6 hogs the memory in a very specific way, which makes it very memory-limited. But it's getting old, and I'd rather have something newer that people can relate to. The Starfield engine is terrible in its own way, though. If they come out with a new game, I'll consider it of course for the Fall 2024 or Spring 2025 rebench.
Thanks for the reply. I just mentioned that because FC6 is a good CPU benchmark and Starfield isn't. Same with Cyberpunk: they all perform about the same, so it seems a waste of time and page space. You could just say "we didn't bench these games because they all perform within an FPS of each other until you go way back to Zen 2".

While I'm at it, telling you how to do your job (I'm kidding, and I'm sure you welcome suggestions): it would be AWESOME if someone benched the games with something lower than a 4090, so all of us who don't have a $2,000 GPU can see how much CPU choice actually affects us. Something like a 4070 Ti/7900 XT that's closer to what most people have, that might still show a difference... Or even something in the $500 range. Just a thought - it drives me nuts that no one does this. The 4090 tests are great for showing the absolute theoretical differences, but I suspect that doesn't affect most gamers in the same way as the charts indicate.

Sorry if I sound critical; TechPowerUp's reviews are awesome and are truly the first I look for when a new component comes out.
 
something lower than a 4090
That's what I did in the past; people kept complaining that I was testing on slow hardware. Same with memory speeds.

I just mentioned that because FC6 is a good CPU bench test and Starfield isn't. Same with Cyberpunk, they all perform about the same, seems a waste of time and page space. You could just say "we didn't bench these games because they all perform within a FPS of each other until you go way back to Zen 2".
I agree, but the vast majority of people want to see recent titles, not good tests. This also brings up a more philosophical question: if many modern titles run well, why care about outdated titles that don't run well?
 
Just a thought - it drives me nuts that no one does this. The 4090 tests are great for showing the absolute theoretical differences, but I suspect that doesn't affect most gamers in the same way as the charts indicate.

Nobody does this, because you don't want bottlenecks when testing a single component.

To get the information you want, all you have to do is look at two separate benchmarks: a GPU benchmark, to see the maximum framerate your GPU can deliver, and a CPU benchmark, to see the maximum framerate your CPU can drive.
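Roughly, you can combine the two charts like this (a trivial sketch; the numbers are invented for illustration):

```python
# Combine a CPU chart and a GPU chart into an expected in-game framerate.
def expected_fps(cpu_limit_fps: float, gpu_limit_fps: float) -> float:
    """Real-world FPS is capped by whichever component is slower."""
    return min(cpu_limit_fps, gpu_limit_fps)

# e.g. the CPU review shows 180 FPS with the GPU bottleneck removed,
# and a GPU review of your card shows 110 FPS at your resolution:
print(expected_fps(180, 110))  # -> 110: you are GPU-bound, a CPU upgrade gains nothing
```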

Testing with a midrange GPU is pointless, because it doesn't tell you which component is the bottleneck (GPU utilization would have to be included). And it will vary too much between games: some will be bottlenecked by the CPU, some by the GPU, and you just won't know which one.

Hardware Unboxed did multiple videos on this topic, but many people still don't understand.
 
Seems like going from my 5800X to a 9700X might be worth it. If I want better, I'll need to wait for Nov/Dec, maybe Jan, for a 9800X3D. Not sure if it's worth it :o
 
This may come across as a bit of rambling but hear me out.

I can understand why AMD made the 9700X a lower-wattage part compared to the older 7700X, which was a lot more power-hungry. They wanted to show what an actual power-efficient chip looks like, as a way of saying "Hey, look what we can do with so little power while Intel chips need way more." I feel like AMD was thumbing their noses at Intel.

However, I can't help but feel like they've taken the "X" out of the 9700X model name. The "X", at least in my opinion, stands for extreme. People like us here at TechPowerUp build high-performance PCs with beefy cooling capabilities, so we're not worried about power usage and heat output (at least to some extent). I can't help but feel like this is really a 9700, not a 9700X type chip.

AMD really needs to make some kind of BIOS setting that takes the gloves off this chip.

I'm stepping off my soapbox now.
 
And it seems like I'm not
X for expensive
You're not wrong about that. When I bought my 7700X, I didn't have the benefit of the drastic price cuts that we're seeing now.
 
And it seems like I'm not

You're not wrong about that. When I bought my 7700X, I didn't have the benefit of the drastic price cuts that we're seeing now.
Same here for my 7800X3D. You guys in the U.S. are rather lucky; you get much better prices/price cuts compared to here in Gougelandastan (NZ), where we're lucky to see it on sale for 50 bucks off and have the shops call it a massive price drop, LOL.
 
Thanks for the review, W1z. Really impressive power consumption gains over the 7000 series. I've been buying AMD primarily for quite some time. They seem to be pushing their confidence in the market through pricing.

I would expect to see Intel become the value position, outside of energy efficiency.
 
Testing with a midrange GPU is pointless, because it doesn't tell you which component is the bottleneck (GPU utilization would have to be included). And it will vary too much between games: some will be bottlenecked by the CPU, some by the GPU, and you just won't know which one.
It's not pointless. It potentially tells people with a midrange GPU that it's pointless to upgrade their CPU. That's a couple of hundred dollars minimum worth of free advice. It actually has more value to more people than testing with a GPU only a few can afford or are willing to spend the money on. Even if it was testing 2-3 of the most CPU-sensitive games with something like, say, a 7800 XT or 4070 Super. I would put money on there being a lot more 4070s out there in real gamers' systems than 4090s. There are probably even more 4060s. I have a couple of 7900 XTX systems, so I'm kind of on the higher end, but I still wonder what the differences would be with my GPU.

I 100% understand that removing the GPU bottleneck to the extent possible gives more proper "scientific" results, but it also probably results in a whole lot of needless money spent on CPU upgrades, which could actually be the point (which I'm also not oblivious to). Sure, we want to know what it can do in a perfect scenario, but it would also be useful to know "what would the result be in MY system and what would the gain be if I upgraded to a new CPU (and probably MB+RAM, for about $700 for all 3)".

That's what I did in the past; people kept complaining that I was testing on slow hardware. Same with memory speeds.
You can't please all the people. I get it.
I agree, but the vast majority of people want to see recent titles, not good tests. This also brings up a more philosophical question: if many modern titles run well, why care about outdated titles that don't run well?
As games get more graphically complex AND better written for multithreading, it's going to get to the point where they all perform the same anyway, and we're back to everything always being GPU-bound. You already see this with games like Cyberpunk and Starfield, where even with a 4090 there's a fraction of an FPS difference, and you don't see the CPU becoming a limitation until you get down to the AMD APUs, Zen 2, and Intel 11th gen.
 
Wait.. what? Love? Did we read the same conclusion?
Well, OK, love is a strong word, but I did get a rather positive feel, especially compared to most of the others. I should've been more specific, because for some reason Tom's literally does love it (4.5/5 stars). You were far more ambivalent, but you did recognise that Zen 5 was less about a performance uplift (although there was a small uplift there) and more about an efficiency uplift.

In any case, the article still concluded with the "Recommended" award.

Also, your headline says "The Magic of Zen 5" while HU's article on TechSpot says "R5-9600X: Poor Value for Gamers", which does a lot to set the tone for the rest of the article. Also, HU gave both Zen 5 chips a score of 65/100. That's not a score they give to things they recommend, let alone highly recommend. So yeah, I got the idea that you think Zen 5 is pretty good overall.

Sure, I was exaggerating a bit (still not as bad as AMD's marketing department), but hey, it's the internet and everybody does that sometimes, eh? :)

It's all fine until you look at the new higher prices. The Ryzen 7600 was $250 CAD in Canada when I built a system for my roommate last month, the 9600X is $400 today. No.
Yeah, but that's not really a fair comparison, because the launch prices of Zen 4 were even worse, and people were like "I'm sticking to AM4." I myself was one of those people and decided to go from an R7-5700X to an R7-5800X3D (sold my 5700X pretty easily).

Comparing the price of a last-gen part from two years ago to the launch price of brand-new parts is never going to work in the new parts' favour. We'll see how the prices drop over time.

Now, if you'll excuse me, the Als are kicking the crap out of the Ticats and I'm missing it! ;)
 
I’m sticking with my 7700X for the time being. If my father decides to do an upgrade, I’ll give him my 7700X and either get a 7800X3D or 9800X3D.
 
Comparing the price of a last-gen part from two years ago to the launch price of brand-new parts is never going to work in the new parts' favour. We'll see how the prices drop over time.

Not true. New parts, even with higher prices, are usually worth buying because of new levels of performance. The perf/dollar might stay the same, but it shouldn't get worse.

Raising the price back up for a new launch without a sufficient performance increase is, on the other hand, unjustifiable.
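To put made-up numbers on it (these prices and scores are purely hypothetical, just to make the perf/dollar point concrete):

```python
# Perf-per-dollar with invented numbers.
old = {"price": 250, "perf": 100}   # last-gen part at its current street price
new = {"price": 400, "perf": 110}   # new launch with a modest uplift

for name, part in (("old", old), ("new", new)):
    print(f"{name}: {part['perf'] / part['price']:.3f} perf/$")
# old: 0.400, new: 0.275 -> the new part is worse value until its price drops
# or its performance lead grows
```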

Well, this time the emphasis was on efficiency, not performance. I agree wholeheartedly that AMD should've communicated this a lot better. Also, we're talking MSRP. I wouldn't be surprised if the actual prices are somewhat lower. FWIW people have the option of going 7000 series without missing out. More choice is always welcome.
But they didn't achieve amazing efficiency. Where is the review running both chips at the lowest stable voltage at different clock speeds? 4.0 GHz, 4.2 GHz, 4.4 GHz, etc.?

You can't just look at artificial segmentation and TDPs.

I don't see a chart... /:
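The kind of sweep being asked for would look something like this. All the helpers below are stand-ins for real instrumentation (BIOS overrides, power logging), not a real API:

```python
# Sketch of an iso-frequency efficiency sweep; every helper is a stand-in.
import random

def set_cpu(chip: str, freq_ghz: float) -> None:
    pass  # stand-in: lock the frequency and dial in the lowest stable voltage

def run_benchmark() -> float:
    return random.uniform(1800, 2200)  # stand-in: multithreaded benchmark score

def read_package_power() -> float:
    return random.uniform(60, 90)      # stand-in: average package power in watts

for chip in ("7700X", "9700X"):
    for freq in (4.0, 4.2, 4.4):
        set_cpu(chip, freq)
        print(f"{chip} @ {freq} GHz: {run_benchmark() / read_package_power():.1f} pts/W")
# Points/watt at matched clocks and voltages would show the real architectural
# efficiency gain, independent of TDP segmentation.
```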
 
That's what I did in the past; people kept complaining that I was testing on slow hardware. Same with memory speeds.


I agree, but the vast majority of people want to see recent titles, not good tests. This also brings up a more philosophical question: if many modern titles run well, why care about outdated titles that don't run well?
I do have a liking for old titles being in the mix, particularly ones that use older graphics APIs. Partially because I find myself playing older games a lot, but also because it can indicate how the CPU deals with different types of loads from older styles of development and older APIs.

But I guess as a reviewer you can never win: you add old games for me, then someone else will moan about it.

One thing that new games tend to do better than older games is utilise CPU resources: newer games are more likely to have more threads, and DX12 doesn't have the old overloaded-main-thread problem that DX9 had.
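Conceptually, the difference is something like this (a toy sketch in Python, not real graphics API code; the names and numbers are invented):

```python
# Toy illustration of the main-thread difference, not real D3D code.
from concurrent.futures import ThreadPoolExecutor

draw_calls = [f"object_{i}" for i in range(8_000)]

def record(chunk):
    # Stand-in for building a command list from a slice of the scene.
    return [f"cmd({c})" for c in chunk]

# "DX9 style": one thread records and submits everything.
dx9_commands = record(draw_calls)

# "DX12 style": workers record their own command lists in parallel,
# and the lists are submitted together afterwards.
chunks = [draw_calls[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    dx12_lists = list(pool.map(record, chunks))

print(len(dx9_commands), sum(len(l) for l in dx12_lists))  # same work, spread out
```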
 
I do have a liking for old titles being in the mix, particularly ones that use older graphics APIs. Partially because I find myself playing older games a lot, but also because it can indicate how the CPU deals with different types of loads from older styles of development and older APIs.

But I guess as a reviewer you can never win: you add old games for me, then someone else will moan about it.

One thing that new games tend to do better than older games is utilise CPU resources: newer games are more likely to have more threads, and DX12 doesn't have the old overloaded-main-thread problem that DX9 had.

The entire point of buying a high-end CPU for gaming is to raise the frame rate (and remove stutters and improve 1% lows) in low-frame-rate games. I am SO GLAD that W1zzard updated the testing suite, and frankly, IMO, I'd just select the 10 latest games from the last 2 years with the LOWEST frame rates and use those for testing, and nothing else. I don't see the point in testing CS:GO or Overwatch or other games that already run over 240 FPS.

Or perhaps he could release a "10 most difficult games to run" average as well: a second set of charts for 1080p and 1440p relative performance based only on the most demanding games with the lowest frame rates.
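Such a chart would be easy to derive from the existing data, if the relative-performance average is a geometric mean (as TPU's usually is). Something like this, with invented framerates:

```python
# Relative performance over only the N most CPU-demanding titles.
from statistics import geometric_mean

results = {  # game -> FPS for one CPU at 1080p (invented numbers)
    "Starfield": 98, "Cyberpunk 2077": 124, "Hogwarts Legacy": 105,
    "Counter-Strike 2": 380, "Baldur's Gate 3": 140, "Elden Ring": 160,
}

N = 3  # keep only the N lowest-FPS games
hardest = sorted(results.values())[:N]
print(f"geomean of the {N} most demanding titles: {geometric_mean(hardest):.1f} FPS")
```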
 
Nobody does this, because you don't want bottlenecks when testing a single component.
Not true.

Ancient Replays and Level1Tech did test with a 7900 XTX alongside the 4090 and obtained different and interesting results.

Same for doing benchmarks using Linux instead of Windows.

The point is, the magical 4090 is not the end-all, be-all.
 
The entire point of buying a high-end CPU for gaming is to raise the frame rate (and remove stutters and improve 1% lows) in low-frame-rate games. I am SO GLAD that W1zzard updated the testing suite, and frankly, IMO, I'd just select the 10 latest games from the last 2 years with the LOWEST frame rates and use those for testing, and nothing else. I don't see the point in testing CS:GO or Overwatch or other games that already run over 240 FPS.

Or perhaps he could release a "10 most difficult games to run" average as well: a second set of charts for 1080p and 1440p relative performance based only on the most demanding games with the lowest frame rates.
I am not talking about games that run at hundreds of FPS; those are pointless.

There are old games that even new CPUs struggle with because they're coded so inefficiently. As an example, Lightning Returns can only just about sustain 30 FPS on a 13700K and 7800X3D in late-game Ruffian, when loads of NPCs are spawned. The game is rapidly cycling assets and doing it all in a single CPU thread. It's still miles away from doing that at 60: it will still stutter with a 60 FPS cap on the best gaming CPUs of today, and previous flagships like the 9900K can't hit 20 FPS in the same area.
 