
Intel Sunny Cove Successor Significantly Bigger: Jim Keller

Joined
Feb 20, 2019
Messages
8,284 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though...
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
RAM speed matters little to nothing for gaming. Gaming is not bottlenecked by memory bandwidth, and "faster" memory really only improves bandwidth, not latency.

While Skylake has improved clocks a lot over Sandy Bridge, especially with "aggressive" boosting, the CPU front-end improvements have also helped a lot. It's important to remember that IPC is a measure of "arbitrary" workloads, and many things affect IPC. One of the reasons why Intel still has an edge in gaming is a stronger front-end, while AMD has higher peak ALU/FPU throughput in some cases; both of these affect IPC, but only the first really affects gaming.

Like Danbert and Midland, I strongly disagree with you on this. You're comparing Sandy (DDR3-1333MHz max spec) with modern platforms that manage a minimum of ~2.5x the bandwidth and significantly lower latency at the same time.

Typically, insufficient RAM bandwidth is one of the leading contributors to low minimum framerates, and there is no shortage of articles and videos going back a decade or so that make this painfully obvious. I must have watched and read over a hundred mainstream videos on this topic alone.

Who cares about average framerates when their 1% low and 0.1% low framerates are absolutely tanking performance when it matters?
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Like Danbert and Midland, I strongly disagree with you on this. You're comparing Sandy (DDR3-1333MHz max spec) with modern platforms that manage a minimum of ~2.5x the bandwidth and significantly lower latency at the same time.
Perhaps I could have been a little more precise. Yes, 1333 MHz vs. 2666 MHz in today's games would impact performance somewhat. Around 2133-2400 MHz it starts to flatten out, and beyond 2666 MHz there are few significant differences (for Intel CPUs). The point I didn't get across well enough is that memory speed matters much less than people think, and even the performance impact between 1333 MHz and 2666 MHz is usually only a few percent.

But no, memory latency hasn't changed in the last 10+ years.

Who cares about average framerates when their 1% low and 0.1% low framerates are absolutely tanking performance when it matters?
I agree that consistency is much more important than average frame rate, no issue there.
 
Joined
Feb 3, 2017
Messages
3,757 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
How much memory speed matters depends on the application or game. At least among games there are definitely examples that rely on memory bandwidth and get decent improvements from faster memory, far more than a few percent.
 
Joined
Feb 20, 2019
Messages
8,284 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
But no, memory latency hasn't changed in the last 10+ years.

DDR3-1333 CL8 is 12.0ns of CAS latency, and that was the high-end stuff back in 2009 with a hefty premium.
DDR4-3200 CL14 is 8.75ns of CAS latency, and that's today's cheap stuff. You can buy 4600 CL17 if you want to pay the premium tax!

So yeah, we're probably looking at roughly a 35% improvement in latency alone if you take a mid-range kit from now and one from 10 years ago and compare the raw latency in nanoseconds. Saying memory latency hasn't changed in a decade is an insult to all the work and progress Samsung, Hynix, and Micron have made over the last few generations.
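If you want to sanity-check that arithmetic, here's a quick sketch (plain Python; only the kit specs above are assumed, and the formula is just CAS cycles divided by the actual I/O clock, which is half the MT/s figure):

```python
# True CAS (first-word) latency in nanoseconds for a DDR kit.
def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    io_clock_mhz = data_rate_mts / 2   # DDR transfers twice per I/O clock
    return cl * 1000 / io_clock_mhz    # cycles x ns-per-cycle

for name, rate, cl in [("DDR3-1333 CL8", 1333, 8),
                       ("DDR4-3200 CL14", 3200, 14),
                       ("DDR4-4600 CL17", 4600, 17)]:
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns")
# DDR3-1333 CL8:  12.00 ns
# DDR4-3200 CL14:  8.75 ns  (~27% less time than DDR3-1333 CL8)
# DDR4-4600 CL17:  7.39 ns
```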

Ryzen loves faster RAM.
Intel loves faster RAM.

Even in 2017 on an i7-7700K, 3200 MHz seemed to be the sweet spot, with several mainstream titles doing up to 25% worse when dropping from 3200 to 2666. As far as most reviewers and YouTubers are concerned, using less than DDR4-3200 is a bad idea these days because you're losing out on CPU performance. It doesn't matter whether that's latency or bandwidth; the results are just worse with slower RAM.

 
DDR3-1333 CL8 is 12.0ns of CAS latency, and that was the high-end stuff back in 2009 with a hefty premium.
DDR4-3200 CL14 is 8.75ns of CAS latency, and that's today's cheap stuff. You can buy 4600 CL17 if you want to pay the premium tax!

So yeah, we're probably looking at roughly a 35% improvement in latency alone if you take a mid-range kit from now and one from 10 years ago and compare the raw latency in nanoseconds. Saying memory latency hasn't changed in a decade is an insult to all the work and progress Samsung, Hynix, and Micron have made over the last few generations.
I'm sorry, but you are completely wrong when it comes to latency.
You can read about memory latency here.
Access time of an arbitrary address in DRAM is about ~50ns; with memory controller overhead it's about ~70-90ns, as you can see here. So when you compare memory kits whose CAS latency is ~3ns quicker, the total access isn't ~35% quicker, more like ~4% quicker (and that's assuming you are able to actually run it at the best-case non-JEDEC speeds they put in the spec sheet).
As I said, DRAM latency hasn't changed much since the old Pentiums, when it was already around ~80ns.
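To put rough numbers on it, a back-of-the-envelope sketch (the ~80ns total and the CAS figures are the ballpark values from this thread, not measurements):

```python
# Rough model: total load-to-use latency is dominated by the DRAM array
# access plus memory-controller overhead, not by the CAS difference.
TOTAL_ACCESS_NS = 80.0      # middle of the ~70-90 ns range quoted above

cas_old_ns = 12.0           # DDR3-1333 CL8
cas_new_ns = 8.75           # DDR4-3200 CL14
delta_ns = cas_old_ns - cas_new_ns

print(f"CAS delta: {delta_ns:.2f} ns")                             # 3.25 ns
print(f"Share of total access: {delta_ns / TOTAL_ACCESS_NS:.1%}")  # 4.1%
```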

Ryzen loves faster RAM.
Intel loves faster RAM.
Zen/Zen2 benefits from faster memory frequencies because other internal clocks and timings are tied to the memory clock, so the benefit is only indirect.
As you increase the memory speed, you generally only get more bandwidth, as the CAS latency (in cycles) has to increase along with the clock due to the inherent latencies of DRAM.
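As a rough illustration of that scaling (a sketch assuming the DRAM cells are pinned at ~13.75ns, a typical figure for mainstream kits; the resulting CL numbers are approximate):

```python
# If the cells themselves need ~13.75 ns, the CL number has to grow in
# step with the I/O clock just to express the same absolute latency.
CELL_LATENCY_NS = 13.75

for rate in (1333, 2133, 3200, 4800):
    cl = round(CELL_LATENCY_NS * rate / 2 / 1000)  # ns x clocks-per-ns
    print(f"DDR-{rate}: CL{cl}")
# DDR-1333: CL9, DDR-2133: CL15, DDR-3200: CL22, DDR-4800: CL33
# The CL number more than triples while the latency in ns stays flat.
```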
And if bandwidth were a bottleneck for gaming, gamers would all buy HEDT CPUs with quad/hex/octa channel memory; that's the easiest way to get a lot of memory bandwidth.
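For scale, theoretical peak bandwidth is just channels × 8 bytes × data rate (back-of-the-envelope; real-world efficiency is lower):

```python
# Theoretical peak DRAM bandwidth: 8 bytes per transfer, per channel.
def peak_bandwidth_gbs(data_rate_mts: float, channels: int) -> float:
    return data_rate_mts * 8 * channels / 1000

for ch in (2, 4, 8):
    print(f"{ch}-channel DDR4-3200: {peak_bandwidth_gbs(3200, ch):.1f} GB/s")
# 2-channel (mainstream):  51.2 GB/s
# 4-channel (HEDT):       102.4 GB/s
# 8-channel (server):     204.8 GB/s
```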
Also, the preliminary specs for the upcoming DDR5 expect higher latencies than DDR4.
 
I'm sorry, but you are completely wrong when it comes to latency.
You can read about memory latency here.
Access time of an arbitrary address in DRAM is about ~50ns; with memory controller overhead it's about ~70-90ns, as you can see here. So when you compare memory kits whose CAS latency is ~3ns quicker, the total access isn't ~35% quicker, more like ~4% quicker (and that's assuming you are able to actually run it at the best-case non-JEDEC speeds they put in the spec sheet).
As I said, DRAM latency hasn't changed much since the old Pentiums, when it was already around ~80ns.
Fair enough. I was just going on CAS latency × clock period.

In terms of memory improvements though, I don't think it even matters that overall memory latency hasn't changed much. People can argue hypothetical theory all day, but in the real world, in real applications and real games, a significant portion of performance can be attributed to faster RAM, with a simple litmus test of "put in slower RAM and watch the performance vanish".
 
DDR3-1333 CL8 is 12.0ns of CAS latency, and that was the high-end stuff back in 2009 with a hefty premium.
DDR4-3200 CL14 is 8.75ns of CAS latency, and that's today's cheap stuff. You can buy 4600 CL17 if you want to pay the premium tax!
This is not a good comparison. 2009 was early in the DDR3 life cycle, and while a common module was DDR3-1333, there were definitely DDR3-1600 and faster modules available; by the end of DDR3 you could get DDR3-1866 or DDR3-2133. DDR4 is much closer to the end of its life cycle today. Look back to a similar place in its life cycle as your DDR3 example and an average module was DDR4-2400 :)

DDR4-3200 CL14 is most definitely not cheap stuff. DDR4-3200 CL16 costs almost half of what CL14 costs.
And if bandwidth were a bottleneck for gaming, gamers would all buy HEDT CPUs with quad/hex/octa channel memory; that's the easiest way to get a lot of memory bandwidth.
Gamers did buy HEDT CPUs for a while, and there are clear benefits even in gaming from more memory bandwidth. However, today's HEDT CPUs are overkill in terms of core count and/or their inter-core communication is not organized for low latency, and this has a clear negative effect on gaming. Both Intel's mesh CPUs and Threadrippers have the same problem here. Zen2-based Threadrippers might bring a change to this, hopefully.
Also, the preliminary specs for the upcoming DDR5 expect higher latencies than DDR4.
As was the case with DDR2, DDR3 and DDR4 ;)
 
This is not a good comparison. 2009 was early in the DDR3 life cycle, and while a common module was DDR3-1333, there were definitely DDR3-1600 and faster modules available; by the end of DDR3 you could get DDR3-1866 or DDR3-2133. DDR4 is much closer to the end of its life cycle today. Look back to a similar place in its life cycle as your DDR3 example and an average module was DDR4-2400 :)
I only picked those values because they are the Intel and AMD memory specs of Sandy Bridge and Zen 2, respectively, to indicate the ~10-year difference (it's 8 years, 9 months, actually).

As for pricing, I suspect that's just regional variation. 3200 14-16-16 is guaranteed on Samsung B-die. It shouldn't be expensive, and I certainly haven't paid much of a premium for it in the last quarter. You do have to read between the lines a bit, because JEDEC-spec 3200 (PC4-25600) ranges from CL22 to CL20 and it's practically impossible to even buy RAM that slow. Even the lowest-grade Micron A-die or Samsung B-die will likely meet AMD's recommendation of 3200 CL14. It's not as if the XMP info in the SPD is particularly helpful either, because it's not optimised for Ryzen, and even across different generations of Intel DDR4-compatible CPUs there's a lot of variation in RAM tolerance and IMC quality.

What I tried to say in my previous post is that none of these minor latency or timing details really matter. Take any modern processor on today's mainstream DDR4 (somewhere around 3200) and you are going to see a drastic performance loss if you swap in much slower RAM, which brings us back to the original point of this discussion: the significant IPC gains made in the last decade are at least partly attributable to improvements in DRAM speed, and not something Intel can claim as true IPC gains. We all know that Haswell benefitted from a RAM overclock, despite the spec officially being capped at 1600MHz.
 
It is not so much the difference in years as the difference in where the RAM type is in its life cycle. DDR3 came out in 2007, and 2009 was pretty early in its life cycle. For DDR4, which came out in 2014, the same timeframe would be somewhere around 2016. CPU IMC specs, especially Intel's, are probably a bad baseline.

You are also talking about memory overclocking here. Yeah, B-dies do 3200 CL14, but out-of-the-box spec module prices are a different thing. Looking at prices in Europe, 2x8GB CL16 will cost about 70€ and CL14 about 130€. The same relative difference applies to other sizes as well. The JEDEC spec is a bit of a different thing, because it states 1.2V as the working voltage, and that low voltage is what gives you 3200 CL18-22.

How much of the speedup is attributable to faster RAM is not a question with a single answer. There are benchmarks that really don't care; Cinebench (especially R15) is a very good example.
 
Joined
Jan 6, 2013
Messages
350 (0.08/day)
Why are they (Intel) only now rejoining the innovative part of the scene? Ever since "Skylake", it's as if Intel was in retirement cashing pension coupons or, viewed another way, on a multi-year sabbatical.
So, in another parallel universe, I am safekeeping my 2 entry-level Skylake CPUs, one being a Pentium spec whilst the other is a Celeron spec, to donate to museums one day, all the while telling my children/grandchildren stories of how the x86 CPU "wars" were "fought" and reminiscing about the eras when Intel was top dog in most of them, until "Ivy-Lake".

"Good night and faa iuu"
I guess you don't read the news. They had MASSIVE issues with 10nm. That is why.
 