Sunday, February 4th 2024

AMD Readies X870E Chipset to Launch Alongside First Ryzen 9000 "Granite Ridge" CPUs

AMD is readying its new 800-series motherboard chipset to launch alongside the next-generation Ryzen 9000 series "Granite Ridge" desktop processors, which implement the "Zen 5" microarchitecture. The chipset family will be led by the AMD X870E, successor to the current X670E. Since AMD isn't changing the CPU socket (this is very much the same Socket AM5), the 800-series chipset will support not just "Granite Ridge" at launch, but also the Ryzen 7000 series "Raphael" and Ryzen 8000 series "Hawk Point." Moore's Law is Dead goes into the details of what sets the X870E apart from the current X670E, and it all has to do with USB4.

Apparently, motherboard manufacturers will be mandated to include 40 Gbps USB4 connectivity with the AMD X870E, which essentially makes the chipset a 3-chip solution: two Promontory 21 bridge chips and a discrete ASMedia ASM4242 USB4 host controller, although it's possible that AMD's QVL will allow other brands of USB4 controllers as they become available. The Ryzen 9000 series "Granite Ridge" are chiplet-based processors just like the Ryzen 7000 "Raphael," and while the 4 nm "Zen 5" CCDs are new, the 6 nm client I/O die (cIOD) is largely carried over from "Raphael," with a few updates to its memory controller. DDR5-6400 will be the new AMD-recommended "sweet spot" speed, although AMD might get its motherboard vendors to support DDR5-8000 EXPO profiles with an FCLK of 2400 MHz and a divider.
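For readers keeping track of the clock math, here is a rough sketch of how a DDR5 data rate relates to the memory and fabric clocks on AM5. The function name and the exact FCLK values are illustrative assumptions, not official AMD specifications:

```python
# Rough sketch of AM5 memory clock domains. DDR5's quoted speed is in
# megatransfers per second (MT/s); the actual memory clock (MCLK) is half
# that, since DDR makes two transfers per clock. Running a "divider" means
# the memory controller clock (UCLK) drops to half of MCLK (1:2 mode).

def am5_clocks(data_rate_mt: int, fclk_mhz: int, uclk_ratio: float = 1.0) -> dict:
    """Derive MCLK/UCLK (in MHz) from a DDR5 data rate; FCLK is set separately."""
    mclk = data_rate_mt / 2
    uclk = mclk * uclk_ratio  # 1.0 = 1:1 "in sync", 0.5 = 1:2 divider
    return {"MCLK": mclk, "UCLK": uclk, "FCLK": float(fclk_mhz)}

# DDR5-6400 at the recommended sweet spot, running 1:1 (FCLK value illustrative)
print(am5_clocks(6400, 2000))
# DDR5-8000 EXPO with FCLK 2400 MHz and a divider, as described above
print(am5_clocks(8000, 2400, uclk_ratio=0.5))
```

The point of the divider: DDR5-8000 pushes UCLK out of 1:1 sync with MCLK, which is the trade-off discussed in the comments below.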
The Ryzen 9000 series "Granite Ridge" will launch alongside a new wave of AMD X870E motherboards, although these processors will very much be supported on AMD 600-series chipset motherboards with BIOS updates. The vast majority of Socket AM5 motherboards feature USB BIOS Flashback, so you could even pick up a 600-series chipset motherboard with a Ryzen 9000 series processor in a combo. The company might expand the 800-series with other chipset models, such as the X870, B850, and the new entry-level B840.
Sources: Moore's Law is Dead (YouTube), Tweaktown

220 Comments on AMD Readies X870E Chipset to Launch Alongside First Ryzen 9000 "Granite Ridge" CPUs

#151
kapone32
dgianstefaniOK, set your memory to 4800 MT JEDEC and report back afterwards if you still believe this.

You're on a 144 Hz monitor, so your averages probably wouldn't be affected too much, but I'd wager almost anything you'd notice your minimum FPS dipping below 60 FPS quite frequently.

Regardless, RAM speed/latency objectively matters to gaming performance, even if you have an X3D chip, this is a statement of fact. Depending on your monitor refresh rate, your GPU power and the type of gamer you are, you may notice this more or less, but the underlying technical data is irrefutable.
I can't say I see that, and I run my RAM at 5200. Could be your 12 GB VRAM buffer affecting that, or maybe it's the extra CPU cores and VRAM I have that keep me from experiencing it.
#152
dgianstefani
TPU Proofreader
kapone32I can't say I see that, and I run my RAM at 5200. Could be your 12 GB VRAM buffer affecting that, or maybe it's the extra CPU cores and VRAM I have that keep me from experiencing it.
Completely missing the point as usual, and somehow conflating GPU bottlenecking with CPU bottlenecking, well done.
#153
gffermari
dgianstefaniRAM speed/latency objectively matters to gaming performance, even if you have an X3D chip, this is a statement of fact. Depending on your monitor refresh rate, your GPU power and the type of gamer you are, you may notice this more or less, but the underlying technical data is irrefutable.
+1

RAM speed and latency are among the pillars of high-end gaming, no matter whether you use Intel or AMD (X3D).
Yes, in slow-paced games you won't notice a thing, but the numbers tell the truth.

Also, the X3Ds can mask memory latency issues, so they're more difficult to notice while gaming.
Because the X3Ds improve the min FPS more than the avg FPS, the RAM issues hide behind that.
But still, even with an X3D CPU there is a difference in performance.
#154
kapone32
dgianstefaniCompletely missing the point as usual, and somehow conflating GPU bottlenecking with CPU bottlenecking, well done.
Of course you are right. Playing at 1440p 165 Hz with 12 GB is fine, but you are telling people that VRAM is not the culprit when both VRAM and system RAM can affect gaming performance where 1% lows are concerned: when the card runs short of VRAM, SAM spills data into system RAM, and that is when RAM latency matters. More CPU cache likewise means even less data is sent to system RAM. I guess I am dropping performance as well, since my RAM runs at 5200 MT/s all day; that is not too far from the 4800 you referenced.
#155
AusWolf
dgianstefaniOK, set your memory to 4800 MT JEDEC and report back afterwards if you still believe this.

You're on a 144 Hz monitor, so your averages probably wouldn't be affected too much, but I'd wager almost anything you'd notice your minimum FPS dipping below 60 FPS quite frequently.

Regardless, RAM speed/latency objectively matters to gaming performance, even if you have an X3D chip, this is a statement of fact. Depending on your monitor refresh rate, your GPU power and the type of gamer you are, you may notice this more or less, but the underlying technical data is irrefutable.
I am running my RAM at 4800 MT/s JEDEC (because my CPU needs a lot less SoC voltage this way), and my gaming experience is just fine. :)

Here is the underlying technical data:


My own take on it is that whether I have 218/166 avg/min FPS or 204/149 is an insignificant difference, as the amount of info reaching my eyes and brain is exactly the same.

Also, this test was done with a 4090 at 1080p. With my 7800 XT at 1440 UW, I am infinitely more GPU-limited, therefore, my RAM speed matters even less.
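For what it's worth, the relative gaps in those avg/min figures can be computed directly (a quick sketch using only the numbers quoted above):

```python
def pct_gain(slower: float, faster: float) -> float:
    """Percentage improvement going from the slower figure to the faster one."""
    return (faster - slower) / slower * 100

# avg and min FPS at 6000 MT vs 4800 MT JEDEC, per the figures above
print(round(pct_gain(204, 218), 1))  # average: 6.9 (% faster)
print(round(pct_gain(149, 166), 1))  # minimum: 11.4 (% faster)
```

The minimums move noticeably more than the averages, which is the crux of both sides of this argument.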
#156
dgianstefani
TPU Proofreader
AusWolfI am running my RAM at 4800 MT/s JEDEC (because my CPU needs a lot less SoC voltage this way), and my gaming experience is just fine. :)

Here is the underlying technical data:


My own take on it is that whether I have 218/166 avg/min FPS or 204/149 is an insignificant difference, as the amount of info reaching my eyes and brain is exactly the same.

Also, this test was done with a 4090 at 1080p. With my 7800 XT at 1440 UW, I am infinitely more GPU-limited, therefore, my RAM speed matters even less.
And here's some other testing.

In your own testing for a 7700X, the difference between 4800 and 6000 is 30 FPS.

Some games such as Tarkov, Factorio or Minecraft are intensely CPU bottlenecked almost at all times, and will have more significant differences regardless of GPU used.

Other games will be GPU bottlenecked at all times, mostly single player games.

Regardless of how significant you consider 10-30 FPS, or if you consider it to be the "same amount of info reaching your eyes and brain", it's a difference. Your earlier statement of "RAM speed and latency don't matter at all" is therefore subjective, not objective.




If you have low expectations of your hardware, that's fine. If you don't notice the difference between 120 and 150 FPS, that's fine, but let's not pretend that translates into "RAM speed doesn't matter".

Because people read technical threads like these, and misinformation is then propagated.

Even once you go past 6000 MT, the "sweet spot" (more like the spot you can realistically reach with AMD, since 6400 MT+ is pretty much unattainable without going out of sync and losing performance), you still see CPU and therefore FPS improvements from faster RAM. E.g. ~10 FPS just from 400 MT in the chart I linked. Intel CPUs running 8000+ MT operate on a whole other level compared to tests done at 6000 MT.
#157
kapone32
dgianstefaniAnd here's some other testing.

In your own testing for a 7700X, the difference between 4800 and 6000 is 30 FPS.

Some games such as Tarkov, Factorio or Minecraft are intensely CPU bottlenecked almost at all times, and will have more significant differences regardless of GPU used.

Other games will be GPU bottlenecked at all times, mostly single player games.

Regardless of how significant you consider 10-30 FPS, or if you consider it to be the "same amount of info reaching your eyes and brain", it's a difference. Your earlier statement of "RAM speed and latency don't matter at all" is therefore subjective, not objective.




If you have low expectations of your hardware, that's fine. If you don't notice the difference between 120 and 150 FPS, that's fine, but let's not pretend that translates into "RAM speed doesn't matter".

Because people read technical threads like these, and misinformation is then propagated.
I did not know that a 13900K comes with V-Cache; otherwise this chart means nothing. In case you did not know, the AMD overlay also shows 1% data, and the X3D is famous for holding its 1% numbers. The fact that you have a 7800X3D and did not use your own data speaks volumes.

Yep, you can definitely tell the difference between 130 and 100 FPS on a FreeSync panel... not.
#158
AusWolf
dgianstefani"RAM speed and latency don't matter at all" is therefore subjective, not objective.
If you read my earlier comment again, you'll see that this is exactly what I said. RAM speed doesn't matter to me. The information that reaches my brain is exactly the same.

If the difference above is night and day to you, fair enough. All I'm saying is, it means nothing to me. My play style is slow enough not to notice any difference above a certain FPS. Of course everybody is different.
#159
dgianstefani
TPU Proofreader
kapone32I did not know that a 13900K comes with V-Cache; otherwise this chart means nothing. In case you did not know, the AMD overlay also shows 1% data, and the X3D is famous for holding its 1% numbers. The fact that you have a 7800X3D and did not use your own data speaks volumes.

Yep, you can definitely tell the difference between 130 and 100 FPS on a FreeSync panel... not.
Noted. I now know that Kapone cannot tell the difference when looking at a 30% performance improvement. Thanks for the quote, will be useful for future reference when you start discussing performance.

I'm sure you do indeed have "the best AMD computer", but how could you tell? What if another PC was 29% faster?
kapone32The fact that you have a 7800X3D and did not use your own data speaks volumes.
It speaks to the fact that I don't need to waste time logging my own testing to disprove anything you say; benchmarks off TPU are more than sufficient.

Or the general web, as AusWolf showed with his data.
#160
Tigerfox
dgianstefaniOK, set your memory to 4800 MT JEDEC and report back afterwards if you still believe this.

You're on a 144 Hz monitor, so your averages probably wouldn't be affected too much, but I'd wager almost anything you'd notice your minimum FPS dipping below 60 FPS quite frequently.

Regardless, RAM speed/latency objectively matters to gaming performance, even if you have an X3D chip, this is a statement of fact. Depending on your monitor refresh rate, your GPU power and the type of gamer you are, you may notice this more or less, but the underlying technical data is irrefutable.
In theory, yes. In real life, for most people it doesn't matter. All the benchmarks above using a 4090 @ 1080p speak volumes.

I'm not talking JEDEC 4800. I'm talking 5600 or 6000 CL32/CL30 with EXPO. And I'm talking 1440p @ 144 Hz or even UHD. Even with a 4080, you will run into a GPU limit most of the time, and when you don't, there won't be any tangible difference between 5600, 6000 or 6400. I definitely won't consider anything above that for AM5, because I don't have the time I would need to invest into optimizing for so little gain.

If you want to achieve the highest possible framerate at 1080p because you think that makes for a better experience than higher resolutions, that might be something different, but even then I think you really have to have a 4080 or 4090 before any investment in fast RAM makes sense.
#161
dgianstefani
TPU Proofreader
TigerfoxIn theory, yes. In real life, for most people it doesn't matter.

I'm not talking JEDEC 4800. I'm talking 5600 or 6000 CL32/CL30 with EXPO. And I'm talking 1440p @ 144 Hz or even UHD. Even with a 4080, you will run into a GPU limit most of the time, and when you don't, there won't be any tangible difference between 5600, 6000 or 6400. I definitely won't consider anything above that for AM5, because I don't have the time I would need to invest into optimizing for so little gain.
What you consider "tangible" or "real life" are subjective analyses; the fact is that faster/lower-latency RAM affects performance, even on an X3D platform, unless you think an entire game's data fits in ~100 MB of cache. Considering this is a hardware enthusiast forum, I would expect less resistance to this statement.

The benchmarks use a 4090 at 1080p because that's an easy way to force a CPU bottleneck, instead of playing for an hour to get 10 minutes of usable data demonstrating the differences you want to focus on. Unless you think CPU/RAM benchmarking should use GPU-limited scenarios? This is the basis of scientific testing: you hold the other variables constant and test the one you're interested in.

1440p is considered a CPU-limited resolution these days, BTW. Especially with the (now cheap) popular 240 Hz monitors, or the emerging 360 Hz/480 Hz 1440p monitors. 4K is pretty much the only resolution where you're GPU-bound all the time, but even then you can see improved minimum FPS with better RAM. 144 Hz monitors have been around for more than a decade now; they're pretty entry-level.
#162
Tigerfox
dgianstefaniWhat you consider "tangible" or "real life" are subjective analyses; the fact is that faster/lower-latency RAM affects performance, even on an X3D platform, unless you think an entire game's data fits in ~100 MB of cache. Considering this is a hardware enthusiast forum, I would expect less resistance to this statement.
That's what I meant by "in theory". I don't doubt that it does. What I doubt is that RAM speed affects performance at 1440p or UHD to such an extent that it warrants investing the time and/or money needed, especially over something like 6000 CL30 or 6400 CL32 with EXPO, which isn't much more expensive than 5600 and is usable out of the box.

If you can't afford at least a 4080, worrying about high-end RAM is nonsense. Even then, you should much rather invest in a 4090. If you can't even afford a 7800X3D and a 4080, even more so.

What I do think helps performance a lot, but won't do myself for time reasons, is buying cheap RAM and OCing it to 6000/6400, or tightening bad timings to good ones.

So, for people using a 7800X3D/7950X3D or 13900K/14900K(S), a 4090 and a monitor supporting 240 Hz+ who want to get the framerate as high as possible, especially the lows, RAM OC might be important, but those people are few.
dgianstefaniThe benchmarks use a 4090 at 1080p because that's an easy way to force a CPU bottleneck, instead of playing for an hour to get 10 minutes of usable data demonstrating the differences you want to focus on. Unless you think CPU/RAM benchmarking should use GPU-limited scenarios? This is the basis of scientific testing: you hold the other variables constant and test the one you're interested in.

1440p is considered a CPU-limited resolution these days, BTW. Especially with the (now cheap) popular 240 Hz monitors, or the emerging 360 Hz/480 Hz 1440p monitors. 4K is pretty much the only resolution where you're GPU-bound all the time, but even then you can see improved minimum FPS with better RAM. 144 Hz monitors have been around for more than a decade now; they're pretty entry-level.
I have yet to see such a benchmark at 1440p. Then again, I just went from 1200p @ 60 Hz to 3440x1440 @ 144 Hz, and haven't had a chance to really play on it since, for time reasons. There isn't only gaming; I, for example, have to read a lot on my monitor, so I didn't want OLED. And there isn't much with IPS, 3440x1440, 10-bit, above 144 Hz.

But my point was: consider how many people can't afford a 4080 or even a 4070 Ti Super. How many have to make do with a 4060, 7600, 6600 XT or something like that, and a 7600, 5700X, 12400, etc. RAM speed is something they should worry about last.
#163
kapone32
dgianstefaniNoted. I now know that Kapone cannot tell the difference when looking at a 30% performance improvement. Thanks for the quote, will be useful for future reference when you start discussing performance.

I'm sure you do indeed have "the best AMD computer", but how could you tell? What if another PC was 29% faster?


It speaks to the fact that I don't need to waste time logging my own testing to disprove anything you say; benchmarks off TPU are more than sufficient.

Or the general web, as AusWolf showed with his data.
Yep, you are right again: a FreeSync panel that supports 45-165 Hz does make a difference when you are at 100 vs. 130 FPS. I guess you don't understand how FreeSync works. Thanks for confirming that.

Does the 13900K have the same 1% performance as X3D chips?
#164
dgianstefani
TPU Proofreader
TigerfoxThat's what I meant by "in theory". I don't doubt that it does. What I doubt is that RAM speed affects performance at 1440p or UHD to such an extent that it warrants investing the time and/or money needed, especially over something like 6000 CL30 or 6400 CL32 with EXPO, which isn't much more expensive than 5600 and is usable out of the box.

If you can't afford at least a 4080, worrying about high-end RAM is nonsense. Even then, you should much rather invest in a 4090. If you can't even afford a 7800X3D and a 4080, even more so.

What I do think helps performance a lot, but won't do myself for time reasons, is buying cheap RAM and OCing it to 6000/6400, or tightening bad timings to good ones.

So, for people using a 7800X3D/7950X3D or 13900K/14900K(S), a 4090 and a monitor supporting 240 Hz+ who want to get the framerate as high as possible, especially the lows, RAM OC might be important, but those people are few.
While some of what you're saying is true (but exaggerated; you don't need a 4090 and a 240 Hz+ panel to notice differences in RAM and therefore CPU performance), there are issues with the "20 FPS doesn't matter" attitude. Especially as this performance difference holds across all levels of hardware: moving your minimums from 100 to 130 is noticeable even if you're only using a 4070/7700-class GPU, which is easily capable of 120 FPS. In a CPU-limited scenario, the FPS is dictated by the CPU, so whether you are using a 4090 or a 4070, those minimum lows will be very similar, as long as the 4070 can reach a higher average than the CPU dictates, which it can.

For instance, ~20-25 FPS is the difference between a 4070 Ti and a 4080 (which used to be a $400 difference), yet I don't see anyone saying that a faster GPU won't make a difference. Yet when that 20-25 FPS comes from a RAM tune (free, or maybe $50 more if you want to buy a faster stock kit), suddenly it's imperceptible or "not tangible". A 7900 XTX vs. a 7900 XT is even less than 20 FPS; does that render AMD's flagship pointless? No.

#165
Tigerfox
dgianstefani[...] there are issues with the "20 FPS doesn't matter" attitude. Especially as this performance difference holds across all levels of hardware: moving your minimums from 100 to 130 is noticeable even if you're only using a 4070/7700-class GPU, which is easily capable of 120 FPS. In a CPU-limited scenario, the FPS is dictated by the CPU, so whether you are using a 4090 or a 4070, those minimum lows will be very similar, as long as the 4070 can reach a higher average than the CPU dictates, which it can.

For instance, ~20-25 FPS is the difference between a 4070 Ti and a 4080 (which used to be a $400 difference), yet I don't see anyone saying that a faster GPU won't make a difference. Yet when that 20-25 FPS comes from a RAM tune (free, or maybe $50 more if you want to buy a faster stock kit), suddenly it's imperceptible or "not tangible". A 7900 XTX vs. a 7900 XT is even less than 20 FPS; does that render AMD's flagship pointless? No.
You are still missing the point.

First, I have yet to see benchmarks at 1440p where RAM tuning above off-the-shelf DDR5-6000 CL30 on a 7800X3D makes even a double-digit percentage difference, in low FPS if you like. The benchmark you posted is, again, unrealistic: 1080p with cards designed for 1440p and UHD. Then, even if there were games where that was the case, it would have to be in an FPS range where I would see the difference. I don't play fast shooters, and I haven't had a chance to play on my 34" 144 Hz monitor yet, but I really doubt I would see a difference between ~120 FPS and 130 FPS.

On the other hand, if you really achieve a 10 FPS+ gain by tuning cheap RAM to 6000+ with low latency, of course that's good, but since up to that point the price difference is minimal in my country, my time is too precious for that. Gaining more than 10 FPS by tuning above DDR5-6000 CL30 with a 7800X3D at 1440p, that I really want to see.
#166
AusWolf
dgianstefaniNoted. I now know that Kapone cannot tell the difference when looking at a 30% performance improvement. Thanks for the quote, will be useful for future reference when you start discussing performance.

I'm sure you do indeed have "the best AMD computer", but how could you tell? What if another PC was 29% faster?
It's all relative. 30% over 50 FPS is noticeable. 30% over 200 FPS is not (unless the game you're playing is called FRAPS or Afterburner).

Edit: The only difference is that you don't get 30% extra with high graphical settings / resolutions, and with mid-range or lower GPUs.

Your own sensitivity matters a lot as well. I know someone who demands a constant 360 FPS on his 360 Hz monitor. As for me, anything above ~40 is smooth enough, especially with Freesync.
#167
dgianstefani
TPU Proofreader
TigerfoxFirst, I have yet to see benchmarks at 1440p where RAM tuning above off-the-shelf DDR5-6000 CL30 on a 7800X3D makes even a double-digit percentage difference, in low FPS if you like. The benchmark you posted is, again, unrealistic: 1080p with cards designed for 1440p and UHD. Then, even if there were games where that was the case, it would have to be in an FPS range where I would see the difference. I don't play fast shooters, and I haven't had a chance to play on my 34" 144 Hz monitor yet, but I really doubt I would see a difference between ~120 FPS and 130 FPS.
No one tests at 1440p, not because you wouldn't see a difference, but because it's a mixed benchmark: at 1440p sometimes you'll be CPU-limited, sometimes GPU-limited, and that isn't helpful when you're trying to evaluate the effect of RAM performance. At 4K, we know CPU and therefore RAM speed becomes less meaningful, because only high-end cards can push more than 120 FPS, but 1440p is significantly easier for the GPU to drive than 4K. To the point where you can reach up to 300 FPS without much trouble in your average game (600 FPS if you're talking esports titles). A 10-20% difference in FPS matters more at 100 FPS+ than it does at 50 FPS, as AusWolf has just said. Again, it's not the average FPS that is most significant here, although moving from 200 to 230 FPS is nice indeed if you're running a 240 Hz panel; it's the minimum FPS: a dip from 230 FPS to 150 is a lot less jarring than one from 230 to 120, just as a dip from 120 to 100 FPS is less jarring than one from 120 to 80 FPS.

The point I've been trying to make for the past 30 minutes, which seems to meet significant resistance (for some reason), is that you don't ever want to be CPU-bottlenecked, because that is what people notice as stuttering, or dips. It's irrelevant whether you're playing at 100 FPS or 200 FPS; that number suddenly halving, or going down by an appreciable amount because your CPU is struggling to keep up, is noticeable and immersion-breaking, and coincides with massively increased input lag. If you want to talk esports, "muscle memory" is tied to frame rates; you want consistency, not high averages.
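The dips being argued over are easier to reason about as frame times than as raw FPS deltas (a small sketch; the FPS pairs are the ones mentioned in this thread):

```python
def frame_time_ms(fps: float) -> float:
    """Convert an FPS figure to the time each frame is on screen, in ms."""
    return 1000.0 / fps

# A dip's harshness is the jump in frame time, not the FPS delta itself
for steady, dip in [(230, 150), (230, 120), (120, 100), (120, 80)]:
    jump = frame_time_ms(dip) - frame_time_ms(steady)
    print(f"{steady} -> {dip} FPS: +{jump:.2f} ms per frame")
```

This is why the same FPS drop feels much worse at lower frame rates: the frame-time jump grows as the base FPS falls.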
AusWolfIt's all relative. 30% over 50 FPS is noticeable. 30% over 200 FPS is not (unless the game you're playing is called FRAPS or Afterburner).

Edit: The only difference is that you don't get 30% extra with high graphical settings / resolutions, and with mid-range or lower GPUs.

Your own sensitivity matters a lot as well. I know someone who demands a constant 360 FPS on his 360 Hz monitor. As for me, anything above ~40 is smooth enough, especially with Freesync.
Like I said, subjective analysis is fine, but let's call it what it is.

Not much point buying a high refresh monitor if you barely ever hit or sustain that high FPS.
#168
Tigerfox
@dgianstefani : Even then, as I said, most people just don't have the GPU to avoid being GPU-bottlenecked at 1440p. I won't pay €1,000+ for a GPU even though I can afford it, because I can't spend enough time on gaming to justify that.
And I still don't believe in differences this huge between DDR5-6000 CL30 out of the box and tuning on a 7800X3D. If you do, please show me. But because I don't have enough time for gaming, I wouldn't invest the time I don't have in fine-tuning my RAM. Buying 6000 CL30 or 6400 CL32 with EXPO instead of 5x00 for €20-30 more is fine by me. But only because I can afford a 7800X3D. If I couldn't, I wouldn't waste money on RAM.
#169
AusWolf
dgianstefaniNo one tests at 1440p, not because you wouldn't see a difference, but because it's a mixed benchmark: at 1440p sometimes you'll be CPU-limited, sometimes GPU-limited, and that isn't helpful when you're trying to evaluate the effect of RAM performance. At 4K, we know CPU and therefore RAM speed becomes less meaningful, because only high-end cards can push more than 120 FPS, but 1440p is significantly easier for the GPU to drive than 4K.
That's why I'm saying that RAM speed doesn't matter to me, as I'm playing at 1440 UW with as high graphical details as my GPU allows. This way, I'm always GPU limited.

Artificially inducing a CPU-limited scenario purely for the sake of science is fine, but why should I give the results more credit than they're worth?
dgianstefaniA 10-20% difference in FPS matters more at 100 FPS+ than it does at 50 FPS, as AusWolf has just said.
I actually said the opposite. At low FPS, any small extra can help, but at high FPS, I couldn't care less if there's any difference.
dgianstefaniThe point I've been trying to make for the past 30 minutes, which seems to meet significant resistance (for some reason), is that you don't ever want to be CPU-bottlenecked, because that is what people notice as stuttering, or dips. It's irrelevant whether you're playing at 100 FPS or 200 FPS; that number suddenly halving, or going down by an appreciable amount because your CPU is struggling to keep up, is noticeable and immersion-breaking, and coincides with massively increased input lag. If you want to talk esports, "muscle memory" is tied to frame rates; you want consistency, not high averages.
I agree with this sentiment, but which situations your experience is limited by your CPU/RAM is highly dependent on your hardware and your sensitivity. I doubt that I could ever notice when a game running at 200 FPS dips into 100 for a microsecond, and I also doubt that any mid-range GPU is CPU limited at 1440p unless you pair it with a Celeron.
dgianstefaniLike I said, subjective analysis is fine, but let's call it what it is.

Not much point buying a high refresh monitor if you barely ever hit or sustain that high FPS.
Exactly - it's all subjective. That's my point all along. :)

The point is not necessarily the high refresh rate, but VRR, which eliminates any screen tearing at the appropriate performance levels.
#170
dgianstefani
TPU Proofreader
AusWolfThat's why I'm saying that RAM speed doesn't matter to me, as I'm playing at 1440 UW with as high graphical details as my GPU allows. This way, I'm always GPU limited.

Artificially inducing a CPU-limited scenario purely for the sake of science is fine, but why should I give the results more credit than they're worth?


I actually said the opposite. At low FPS, any small extra can help, but at high FPS, I couldn't care less if there's any difference.


I agree with this sentiment, but which situations your experience is limited by your CPU/RAM is highly dependent on your hardware and your sensitivity. I doubt that I could ever notice when a game running at 200 FPS dips into 100 for a microsecond, and I also doubt that any mid-range GPU is CPU limited at 1440p unless you pair it with a Celeron.


Exactly - it's all subjective. That's my point all along. :)

The point is not necessarily the high refresh rate, but VRR, which eliminates any screen tearing at the appropriate performance levels.
No. You're not.

I'm busy doing other things now, but good chat I guess.

Again with the exaggerations though.

Dips do not last "microseconds".

Screen tearing is an entirely different problem than stuttering or framerate dips. They may have similar causes, but they're different issues entirely.
#171
AusWolf
dgianstefaniNo. You're not.

I'm busy doing other things now, but good chat I guess.
Okay, I'm not. My gaming experience is still as solid as I could want, and I still don't see much, if any, difference between 6000 and 4800 MT/s RAM.
dgianstefaniScreen tearing is an entirely different problem than stuttering or framerate dips. They may have similar causes, but they're different issues entirely.
Of course they're different things. I just demonstrated that there can be multiple reasons for buying a high refresh rate monitor. The high refresh rate does not necessarily have to be the no.1 buying criterion.
#172
AsRock
TPU addict
dgianstefaniI mean, my current X670E mobo already has two USB4 ports, so hopefully there's also other improvements or the refresh is pretty boring.

Mandating it is good I suppose, but surely there's more interesting stuff to improve.

Thunderbolt 4? WiFi 7? Better memory traces?
Extra cost for sure, maybe another $60+ on top. Oooh, it has USB4, wow, big deal hahaha.
#173
kapone32
dgianstefaniWhile some of what you're saying is true (but exaggerated; you don't need a 4090 and a 240 Hz+ panel to notice differences in RAM and therefore CPU performance), there are issues with the "20 FPS doesn't matter" attitude. Especially as this performance difference holds across all levels of hardware: moving your minimums from 100 to 130 is noticeable even if you're only using a 4070/7700-class GPU, which is easily capable of 120 FPS. In a CPU-limited scenario, the FPS is dictated by the CPU, so whether you are using a 4090 or a 4070, those minimum lows will be very similar, as long as the 4070 can reach a higher average than the CPU dictates, which it can.

For instance, ~20-25 FPS is the difference between a 4070 Ti and a 4080 (which used to be a $400 difference), yet I don't see anyone saying that a faster GPU won't make a difference. Yet when that 20-25 FPS comes from a RAM tune (free, or maybe $50 more if you want to buy a faster stock kit), suddenly it's imperceptible or "not tangible". A 7900 XTX vs. a 7900 XT is even less than 20 FPS; does that render AMD's flagship pointless? No.

What you do not understand is that FreeSync has made that moot within its supported range. It is almost impossible to discern between 120 and 140 FPS unless an FPS counter is used. The reason X3D chips feel so fast is that they mitigate the RAM theory you are trying to promote. Of course, if you were talking about APUs that have no VRAM buffer, then your argument comes into focus. It would also seem that you are still using AM4 as a basis; on that I agree that AM4 CPUs felt faster with faster RAM and tight timings, to get the last 5-8 FPS. I also have the best RAM you can buy for AM4 in the Team Xtreem kit, which costs 3 times what the G.Skill 3600 CL18 kit costs, and do you know where it made a difference? It did not. You see, I bought that RAM with an X3D chip, and it basically defeated the notion. I cannot believe that you have an X3D chip and do not understand that.
#174
Tigerfox
@dgianstefani : Just out of interest, I looked up recent benchmarks on the benefits of fast RAM on Ryzen 7000 and found this. While DDR5-7400 has been possible since AGESA 1.0.0.7b, it performs even slightly worse than DDR5-6200 because of 1:2 mode. Same for the 7950X at 1440p and the 7800X3D at 1080p. So what's your argument again? There is no performance to be gained from faster RAM beyond that point. You can buy RAM fast enough to max out Ryzen 7000 for just slightly more than cheap RAM. The only thing you can do is buy cheap RAM and tune it to 6400 CL30/32 yourself, if you have time but no money.
#175
dgianstefani
TPU Proofreader
Tigerfox@dgianstefani : Just out of interest, I looked up recent benchmarks on the benefits of fast RAM on Ryzen 7000 and found this. While DDR5-7400 has been possible since AGESA 1.0.0.7b, it performs even slightly worse than DDR5-6200 because of 1:2 mode. Same for the 7950X at 1440p and the 7800X3D at 1080p. So what's your argument again? There is no performance to be gained from faster RAM beyond that point. You can buy RAM fast enough to max out Ryzen 7000 for just slightly more than cheap RAM. The only thing you can do is buy cheap RAM and tune it to 6400 CL30/32 yourself, if you have time but no money.
A tuned 6200 setup with subtimings is faster than EXPO 6000 bumped to 6400 (which is unlikely to be achievable anyway): something like 55 ns versus around 60 ns.

tRFC and other timings make more of a difference if you can't go past 6400 without going out of sync, as with Zen. It's another reason the platform isn't the magic bullet people think it is. Intel scales past 8000 MT; Zen will do 6200, or 6400 if you're very lucky.

The performance difference between 6000 EXPO and 6200 tuned is about 8% in my testing averages. But in the best case it moved me from a 190 FPS minimum to my minimum FPS not deviating from my frame lock, so 237 FPS; that's one game, though.
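Those latency figures line up with the reported average uplift (a quick check using only the numbers in this post):

```python
def pct_reduction(baseline: float, tuned: float) -> float:
    """Percentage reduction from the baseline figure to the tuned one."""
    return (baseline - tuned) / baseline * 100

# ~60 ns at EXPO 6000 vs ~55 ns tuned 6200, per the post above
print(round(pct_reduction(60, 55), 1))  # 8.3 (% lower memory latency)
```

That ~8% latency reduction is in the same ballpark as the ~8% average gaming difference reported, though the two won't always track one-to-one.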