newtekie1
Semi-Retired Folder
- Joined: Nov 22, 2005
- Messages: 28,473 (4.08/day)
- Location: Indiana, USA
| Component | Specification |
|---|---|
| Processor | Intel Core i7 10850K @ 5.2GHz |
| Motherboard | AsRock Z470 Taichi |
| Cooling | Corsair H115i Pro w/ Noctua NF-A14 Fans |
| Memory | 32GB DDR4-3600 |
| Video Card(s) | RTX 2070 Super |
| Storage | 500GB SX8200 Pro + 8TB with 1TB SSD Cache |
| Display(s) | Acer Nitro VG280K 4K 28" |
| Case | Fractal Design Define S |
| Audio Device(s) | Onboard is good enough for me |
| Power Supply | eVGA SuperNOVA 1000W G3 |
| Software | Windows 10 Pro x64 |
I don't think that's accurate. The 2500K falls behind the i3s in a lot of tests, which (ignoring core clocks) are about as strong as the current Pentiums. It has to be overclocked to support the 1080 Ti.
No, it doesn't even have to be overclocked to support the 1080 Ti. In any modern game, at the settings and resolutions that need a 1080 Ti, the 2500K will not be the bottleneck. The 1080 Ti will be; that is why we upgrade our GPUs far more often than our CPUs. I can't think of a single recently released game for which this isn't true.
What he's asking for is a low-res, CPU-limited test, because the CPU that gets the worse result there will be the one that starts to bottleneck future GPUs first. It's a relevant test for people who plan to keep their CPU longer than their GPU (almost everyone).
In theory, yes. In real-world use, almost never.
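To put rough numbers on why both posts can be right, here's a back-of-the-envelope sketch (my own illustration, not benchmark data from this thread): frame rate is roughly 1000 / max(CPU ms per frame, GPU ms per frame), the CPU cost is mostly resolution-independent, and the GPU cost grows with resolution. Every frame-time figure below is invented purely for the example.

```python
# Back-of-the-envelope bottleneck model (illustrative only).
# All per-frame times are made-up example numbers, not measurements.

def fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """Frame rate is limited by whichever stage takes longer per frame."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Hypothetical per-frame costs: CPU work barely changes with resolution,
# GPU work scales up with it.
cpu_ms = 8.0                                        # e.g. an older quad-core
gpu_ms = {"720p": 3.0, "1440p": 9.0, "4K": 20.0}    # e.g. a very fast GPU

for res, g in gpu_ms.items():
    limiter = "CPU" if cpu_ms > g else "GPU"
    print(f"{res}: ~{fps(cpu_ms, g):.0f} FPS, {limiter}-limited")

# At 4K the GPU dominates, so a faster CPU changes little (the "it doesn't
# need an overclock" point). At 720p the CPU dominates, which is why a
# low-res test exposes the CPU that will hold back a future, faster GPU
# first (the "low-res test is relevant" point).
```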