Intel Core i3-12300

IGP performance is primarily latency-sensitive and, all else being equal, bandwidth-sensitive.

So DDR5-6400 CL32 has the same latency as DDR4-3200 CL16. In a like-for-like IGP test you'd expect them both to perform similarly at 720p, but in games where the IGP has enough performance to push 1080p and beyond, the DDR5 option would scale less poorly.
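
For anyone who wants to check that equivalence, here's a minimal sketch of the usual first-word-latency approximation (CL divided by the memory clock, i.e. half the transfer rate; it ignores secondary timings and memory-controller overhead):

```python
# First-word latency approximation: CAS latency (clocks) divided by the memory
# clock in MHz (which is half the DDR transfer rate). Result in nanoseconds.
def first_word_latency_ns(transfer_rate_mts: float, cas_latency: float) -> float:
    memory_clock_mhz = transfer_rate_mts / 2
    return cas_latency / memory_clock_mhz * 1000

print(first_word_latency_ns(6400, 32))  # DDR5-6400 CL32 -> 10.0 ns
print(first_word_latency_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns
```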

Given that even the UHD 750 is too slow to provide meaningful framerates at 720p in many current AAA titles, scaling up to higher resolutions isn't really even relevant, so the discussion of DDR4 vs DDR5 isn't as important as "Hey Intel, where the hell are the other 64 EU in my IGP?"
In this test with a 5600G (Doom Eternal, 1080p low, CL16):


  • 2666 - 38.9 fps
  • 3200 - 41.2 fps
  • 3600 - 42.9 fps
  • 4000 - 45.0 fps
And CL 14/16/18:

* CL14 - 42.3 fps
* CL16 - 41.2 fps
* CL18 - 40.9 fps

So much more effect from speed than latency.
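
Rough percentages from those numbers (just a quick sketch over the figures above; I'm assuming the CL sweep was run at 3200, since the CL16 result matches the 3200 entry):

```python
# 5600G, Doom Eternal 1080p low - scaling deltas from the figures above.
speed_fps = {2666: 38.9, 3200: 41.2, 3600: 42.9, 4000: 45.0}  # CL16, varying speed
cl_fps = {14: 42.3, 16: 41.2, 18: 40.9}                       # assumed 3200, varying CL

speed_gain = (speed_fps[4000] / speed_fps[2666] - 1) * 100    # ~15.7% for +50% data rate
cl_gain = (cl_fps[14] / cl_fps[18] - 1) * 100                 # ~3.4% from CL18 down to CL14
print(f"2666 -> 4000 MT/s: +{speed_gain:.1f}%")
print(f"CL18 -> CL14:      +{cl_gain:.1f}%")
```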
 
IGPs don't care about latency; IGPs need bandwidth :roll:

@Chrispy_
Bandwidth is all you need on the IGP, not latency.

A10-7870K, DDR3-1600:

* CL7 - 30 FPS
* CL9 - 29.96 FPS
* CL11 - 28.87 FPS

DDR3 speed scaling:

* 1600 CL7 - 30 FPS
* 1866 CL11 - 34.3 FPS
* 2133 CL14 - 36.7 FPS
* 2400 CL16 (never saw that shit timing, just to compare) - 38.9 FPS
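
For context, here's roughly what those DDR3 speeds work out to in peak bandwidth (a sketch assuming a dual-channel, 128-bit configuration, which is the normal setup for an A10-7870K board):

```python
# Peak DDR bandwidth: transfer rate (MT/s) x bus width in bytes.
# Dual channel = 2 x 64-bit = 16 bytes per transfer, result in GB/s.
def peak_bandwidth_gbs(transfer_rate_mts: float, channels: int = 2) -> float:
    return transfer_rate_mts * channels * 8 / 1000

for speed in (1600, 1866, 2133, 2400):
    print(f"DDR3-{speed}: {peak_bandwidth_gbs(speed):.1f} GB/s")
# 1600 -> 25.6, 1866 -> 29.9, 2133 -> 34.1, 2400 -> 38.4 GB/s,
# which roughly tracks the FPS gains above (30 -> 38.9 FPS).
```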
 
Yeah, RT is meant for the GPU, but enabling it taxes the CPU as well. It's not a GPU-only thing.

Ask my aging 4790K why enabling it makes it hit 90%+ while GPU utilization drops along with my frames. I assume it's the BVH structure.
Nope, all handled on the RTX card. The CPU load should definitely be proportional to your framerate in any direct comparison of RTX off vs RTX on. If you're also using DLSS and your framerate is going up, then the CPU usage should also go up.

I'm not sure what your issue is, but it's not the normal behaviour. There are a few reports of people on older CPUs complaining about similar things on Nvidia's own forums, but it's neither intended nor expected. It could be PCIe bandwidth related; there were a couple of hints that updating the motherboard BIOS fixes that. It may also be specific to one game; not all RTX implementations are bug-free on all platforms.

In this test with a 5600G (Doom Eternal, 1080p low, CL16):


  • 2666 - 38.9 fps
  • 3200 - 41.2 fps
  • 3600 - 42.9 fps
  • 4000 - 45.0 fps
And CL 14/16/18:

* CL14 - 42.3 fps
* CL16 - 41.2 fps
* CL18 - 40.9 fps

So much more effect from speed than latency.
For AMD IGPs, sure. That's because the AMD IGPs are fast enough that they lack bandwidth. Don't take my comment out of context though - it was a specific reply to DDR5 on the UHD 730 which is too slow to need all of the DDR4 bandwidth. Sure, if it was faster it would be able to take advantage of more bandwidth but the reality is that 32EU just plain sucks.

You don't have to guess or extrapolate; there are already several sites and channels that have investigated DDR4 vs DDR5 IGP scaling for Alder Lake and shown no improvement at all. For the 96EU laptop Alder Lake models coming soon, I'm sure we'll see different results, like we're accustomed to with the more competent AMD IGPs.

Gamers Nexus did a pretty solid investigation showing that the UHD 770 gains absolutely nothing from moving from DDR4-3200 CL14 to DDR5-5200 CL38. Although it's within 5%, the DDR4 IGP performance is better in all but one of the games tested, and that's likely down to the lower latency of DDR4.
 
For AMD IGPs, sure. That's because the AMD IGPs are fast enough that they lack bandwidth. Don't take my comment out of context though - it was a specific reply to DDR5 on the UHD 730 which is too slow to need all of the DDR4 bandwidth. Sure, if it was faster it would be able to take advantage of more bandwidth but the reality is that 32EU just plain sucks.

You don't have to guess or extrapolate; there are already several sites and channels that have investigated DDR4 vs DDR5 IGP scaling for Alder Lake and shown no improvement at all. For the 96EU laptop Alder Lake models coming soon, I'm sure we'll see different results, like we're accustomed to with the more competent AMD IGPs.

Wait, you said latency is more important than bandwidth.

Do you have any site or channel that shows that specifically, for Alder Lake?
 
Wait, you said latency is more important than bandwidth.

Do you have any site or channel that shows that specifically, for Alder Lake?
Yeah, I just edited my post but GN is one that looked specifically at IGP testing on Alder Lake.

And to clarify, I did not say that latency is universally more important than bandwidth. Everyone knows that bandwidth is very important; there are mountains of APU benchmarks going back to Llano's launch 11 years ago. Where the GPU or IGP is powerful enough, bandwidth needs scale faster than latency needs.

We're just talking about (and I was quoted replying to) a specific UHD 730 question. That is the only context where latency is more important than bandwidth because the UHD 730 IGP is too shit to use more bandwidth.
 
Yeah, but there were people literally switching to Ryzen from i7s (Skylake and Devil's Canyon) and then complaining about getting lower fps in certain games. I've seen a couple of posts on r/intel with people questioning what was going on. You can use the Wayback Machine and just check r/amd for how crazy people were on launch day; some of the shit that was being said is hilarious to read now. Like future BIOS updates that would give a performance boost in games... Delusion at its finest.
I feel the first two generations of Ryzen are good if you are using them for applications that support multithreaded workloads. After all, you got an affordable 6c/12t and a reasonably good-value 8c/16t processor, while Intel was mainly limited to 4c/8t. Back then, most games didn't support more than 4 cores, since Intel had decided that retail users only needed 4 cores. In addition, single-core performance was lower (coupled with lower clock speeds) on Zen 1 and Zen+ chips compared to Skylake.

IGPs don't care about latency; IGPs need bandwidth :roll:

@Chrispy_
Bandwidth is all you need on the IGP, not latency.

A10-7870K, DDR3-1600:

* CL7 - 30 FPS
* CL9 - 29.96 FPS
* CL11 - 28.87 FPS

DDR3 speed scaling:

* 1600 CL7 - 30 FPS
* 1866 CL11 - 34.3 FPS
* 2133 CL14 - 36.7 FPS
* 2400 CL16 (never saw that shit timing, just to compare) - 38.9 FPS
This matches my own testing from when I was using a Ryzen 5 3400G. Reducing RAM latency barely moved performance, if at all. Increasing bandwidth gave the biggest improvement, because the chip is bandwidth-starved, especially when both the CPU and GPU need to access the RAM. That's why, by virtue of the extra bandwidth DDR5 offers, I think we should see significant improvements in iGPU performance. The UHD 730 is not great due to its limited 32 EUs, but it is still Xe graphics, so with the increase in bandwidth we should also see a good jump in performance if you measure in percentage terms. Don't bother measuring the difference in FPS, because the FPS will be low and even 2 to 5 FPS can be a fairly big improvement for an iGPU.
 
Why is everyone talking about AMD IGPs in an Intel thread?
 
Yeah, I just edited my post but GN is one that looked specifically at IGP testing on Alder Lake.

And to clarify, I did not say that latency is universally more important than bandwidth. Everyone knows that bandwidth is very important; there are mountains of APU benchmarks going back to Llano's launch 11 years ago. Where the GPU or IGP is powerful enough, bandwidth needs scale faster than latency needs.

We're just talking about (and I was quoted replying to) a specific UHD 730 question. That is the only context where latency is more important than bandwidth because the UHD 730 IGP is too shit to use more bandwidth.
Unless I've missed a slide somewhere in that video, that's DDR4 vs DDR5, rather than something about latency?
 
Unless I've missed a slide somewhere in that video, that's DDR4 vs DDR5, rather than something about latency?
DDR5 latency is higher than DDR4 latency and the UHD 730 can't use the extra bandwidth, so in the context of DDR4-3200 CL14 vs DDR5-5200 CL38, you're effectively seeing the results of two different absolute latencies (8.75 ns vs 14.6 ns). DDR4 has a minor advantage in most of those tests, likely because of the reduced latency.
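
Putting numbers on that trade-off (a minimal sketch assuming dual-channel and the same first-word-latency approximation as earlier; the latency figures are where the 8.75 ns and 14.6 ns values come from, and the bandwidth column is the part the UHD 730 can't exploit):

```python
# DDR4-3200 CL14 vs DDR5-5200 CL38: first-word latency and peak dual-channel bandwidth.
def latency_ns(mts: float, cl: float) -> float:
    return cl / (mts / 2) * 1000   # CL clocks / memory clock (MHz) -> ns

def bandwidth_gbs(mts: float) -> float:
    return mts * 16 / 1000         # dual channel, 128-bit bus = 16 bytes per transfer

for name, mts, cl in [("DDR4-3200 CL14", 3200, 14), ("DDR5-5200 CL38", 5200, 38)]:
    print(f"{name}: {latency_ns(mts, cl):.2f} ns, {bandwidth_gbs(mts):.1f} GB/s")
# DDR4-3200 CL14:  8.75 ns, 51.2 GB/s
# DDR5-5200 CL38: 14.62 ns, 83.2 GB/s
```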

If you're asking "has anyone published a controlled test comparing different DDR4 latencies without changing bandwidth for Alder Lake IGPs" then no, not that I've found; That's both extremely specific and pointless because the actual performance of the UHD 730 is dog shit no matter how you look at it and I doubt anyone is going to bother to do in-depth testing of something that scrapes the bottom of the barrel so hard :)

Intel put the UHD 730 into the i3 to make it display web browser pages and spreadsheets. It plays games from a decade ago and it's not much use for anything newer unless it's extremely casual.
 
Why is everyone talking about AMD IGPs in an Intel thread?
Very good question. But many thanks to Intel for not ignoring the low-end crowd. Thumbs up for them, and thumbs down and a pretty loud boo to another company that deserted the base that kept them afloat.
 
I think if they had made it a K SKU it would've made a lot more sense at $160; it might even have ended up a fun little chip to tweak, and they still could have had a non-K variant for OEMs... Feels like a long time since the i3-9350K came out, even though it was only two and a half years ago.
Don't worry too much about MSRP. If it's wrong (as this one seems to be), it will be corrected at retail.
 
Very good question. But much thanks to intel for not ignoring the low end crowd. Thumbs up for them and thumbs down and pretty loud boo to another company that deserted their own base that kept them afloat.
10400F has been the budget champion for about 18 months now, primarily because it's "good enough" and is widely available on a cheap platform. Alder Lake i3 is strong competition for that but 8T CPUs might age like milk as game developers start to target XBSX+PS5 over last-gen consoles.

Zen3 budget options are AWOL and likely will be until TSMC isn't constrained.
 
Nope, all handled on the RTX card. The CPU load should definitely be proportional to your framerate in any direct comparison of RTX off vs RTX on. If you're also using DLSS and your framerate is going up, then the CPU usage should also go up.

I'm not sure what your issue is, but it's not the normal behaviour. There are a few reports of people on older CPUs complaining about similar things on Nvidia's own forums, but it's neither intended nor expected. It could be PCIe bandwidth related; there were a couple of hints that updating the motherboard BIOS fixes that. It may also be specific to one game; not all RTX implementations are bug-free on all platforms.
It's not a bug and it's not my BIOS, lol. It's not specific to one game. Definitely not a PCIe problem, lol. You act like I just have zero clue what I'm talking about.
Go learn about BVH.
 
I'm familiar with BVH. Our company spends a lot of time and money rendering cinematics, and we had been working with Intel, Nvidia and Chaos Group for years before real-time RT was even a thing.

Nvidia performs BVH builds in hardware via async compute, and that's their recommendation to all developers for DXR implementations. The only API that allows an RTX card to offload BVH computation to the CPU is Vulkan, and even there it's an optionally supported feature, not something that is recommended.


I'm not saying I don't believe you or that you don't know what you're talking about, just that BVH setup is unlikely to be why enabling RTX increases CPU usage on your older quad-core.
The honest answer is that I don't know why your CPU usage is higher with RTX enabled. It's not the expected result, and it doesn't really match the countless side-by-side RTX on/off comparisons on YouTube with FRAPS running to show CPU usage.
 
I'm familiar with BVH. Our company spends a lot of time and money rendering cinematics, and we had been working with Intel, Nvidia and Chaos Group for years before real-time RT was even a thing.

Nvidia performs BVH builds in hardware via async compute, and that's their recommendation to all developers for DXR implementations. The only API that allows an RTX card to offload BVH computation to the CPU is Vulkan, and even there it's an optionally supported feature, not something that is recommended.


I'm not saying I don't believe you or that you don't know what you're talking about, just that BVH setup is unlikely to be why enabling RTX increases CPU usage on your older quad-core.
The honest answer is that I don't know why your CPU usage is higher with RTX enabled. It's not the expected result, and it doesn't really match the countless side-by-side RTX on/off comparisons on YouTube with FRAPS running to show CPU usage.
Most of those benches are on much newer CPUs. Not DDR3.
 
10400F has been the budget champion for about 18 months now, primarily because it's "good enough" and is widely available on a cheap platform. Alder Lake i3 is strong competition for that but 8T CPUs might age like milk as game developers start to target XBSX+PS5 over last-gen consoles.

Zen3 budget options are AWOL and likely will be until TSMC isn't constrained.
Here in Asia the 10100F is much more popular, because in reality it is good enough (4C/8T) and significantly cheaper. The 10400F is now super questionable with the existence of the 12100F, PCIe 4.0, etc.
 
10400F has been the budget champion for about 18 months now, primarily because it's "good enough" and is widely available on a cheap platform. Alder Lake i3 is strong competition for that but 8T CPUs might age like milk as game developers start to target XBSX+PS5 over last-gen consoles.
Except i3 Alder is close to i5 10/11 gen even in typical multithreaded loads, so if it ages like milk, so will those older i5s.
 
Except i3 Alder is close to i5 10/11 gen even in typical multithreaded loads, so if it ages like milk, so will those older i5s.
Perhaps. I don't have an accurate method of predicting the future, but 4C/4T became inadequate, then 6C/6T.

4C/8T is okay for now but the writing is on the wall. Slow 6C/12T parts (e.g. the Ryzen 5 1600) have been ageing more gracefully than fast 4C/8T parts (e.g. the 7700K), but I'll agree that the difference between the two isn't that significant at the moment.
 
Zen3 budget options are AWOL and likely will be until TSMC isn't constrained.
Go home with your Zen 3 for about, what... ah, the entry point for Zen 3 is about €246.

LOL, I can get a whole Intel system for that price, like the new Alder Lake i3.

What bullshit bingo; a 7700K is better than the garbage 1600 :roll:
 
Lol, exactly. I actually really wanted Ryzen, but with the prices and availability I had... lmao, get lost team red; actually provide some chips at reasonable prices like Intel did. Even if the motherboards are pricier, that's still way cheaper than what I would pay for something like, what, a Ryzen 5 3600, which is maybe kind of comparable to the i3-12100F?

Perhaps. I don't have an accurate method of predicting the future, but 4C/4T became inadequate, then 6C/6T.

4C/8T is okay for now but the writing is on the wall. Slow 6C/12T parts (e.g. the Ryzen 5 1600) have been ageing more gracefully than fast 4C/8T parts (e.g. the 7700K), but I'll agree that the difference between the two isn't that significant at the moment.
Well, we had a sort of similar scenario back in the LGA775 days, with people wondering whether to go E8400 or Q6600 (dual vs quad) for similar prices.

We know how it played out. A few years down the road, games refused to even launch on dual cores. That'd be a big win for the Q6600... except it wasn't exactly adequate for those games either.

And for professional apps, well, the Q6600 had the lead there from the start. Except by that time much better chips had come out too...

It's always the matter of "whether it does the things YOU need it to do".
 
I don't have the money atm, but if I want a 6-core setup I pay:

* 9600KF + board and RAM = €200
* 10400F + board and RAM = €240
* 11400F + board and RAM = €260
* 12400F + board and RAM = €286

If I want a Ryzen (or, as we say around here now: Scheißen, i.e. "shitting"):
€246 just for a 3600X tray.

AMD did a good thing, they really pushed CPUs forward, but now they are really scheiße (shit) :laugh:
 
And as this test shows, the 3600X's average performance is... about that of this i3, lol, so the i5-12400F is out of its reach.
 
And as this test shows, the 3600X's average performance is... about that of this i3, lol, so the i5-12400F is out of its reach.
If we compare not Cinebench or games but tools like SolidWorks or Siemens NX (I like that one), a 3600X with 6 cores and 12 threads is on par with a 9100F with 4 cores and no HT.

Siemens NX is a battlefield of drivers and support, not high thread counts; not even a garbage high-thread-count CPU helps there. ;)
But in the end, a 3600X performs at the level of an i3-9100F :laugh:

Siemens NX (CPU-limited):
* 1600 ~ 42 FPS
* 3600X ~ 56 FPS
* 9100F ~ 60 FPS
* 5800X ~ 71 FPS
* 11400F ~ 83 FPS
 
The i3 has come a long way, cheaper and faster than the i7 of 8 years ago.


I'm not sure what people expect from an i3 anymore.
I think there is a lot of 'you need an i5' rhetoric.
Like we had

* i3 = 2/4, i5 = 2/4 (1st gen)
* i3 = 2/4, i5 = 4/4 (2nd-7th gen)
* i3 = 4/4, i5 = 6/6 (8th-9th gen)
* i3 = 4/8, i5 = 6/12 (10th gen-)

So for the longest time it was 2 vs 4 cores, and then it was 'not enough threads'. Now that we have 8 threads, the idea that having only 4 cores instead of 6 is a terrible mistake isn't obvious in the way that 2 vs 4 always was; especially given that if we do need many cores (most people don't), then we should just get an i5-K or an i7.
 