- Joined: May 2, 2017
- Messages: 7,762 (2.80/day)
- Location: Back in Norway
System Name | Hotbox |
---|---|
Processor | AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6) |
Motherboard | ASRock Phantom Gaming B550 ITX/ax |
Cooling | LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14 |
Memory | 32GB G.Skill FlareX 3200c14 @3800c15 |
Video Card(s) | PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W |
Storage | 2TB Adata SX8200 Pro |
Display(s) | Dell U2711 main, AOC 24P2C secondary |
Case | SSUPD Meshlicious |
Audio Device(s) | Optoma Nuforce μDAC 3 |
Power Supply | Corsair SF750 Platinum |
Mouse | Logitech G603 |
Keyboard | Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps |
Software | Windows 10 Pro |
> I don't know what the heck Anand is doing wrong, but PL2 includes everything: cores, cache and the rest of the chip.

Running everything at stock, as configured by Intel and the motherboard maker. They've clarified their testing methodology at length previously, and reiterated it at the launch of ADL due to its abandonment of tau and boost power limits for K SKUs.
> I don't think I said lying, but yes, I called it dishonest. In my book, misleading makes you dishonest.

Then all marketing is dishonest - which isn't wrong IMO, but then you're missing the point. There's nothing especially dishonest about this marketing compared to the competition - it's pretty much on par with everyone else. As such, calling them out on it specifically is selective.
"Whether or not they can feed those cores" =About the 3090 vs 3070, I'm not sure its so clear cut. It always depends on the workload and whether or not they can feed those cores. Im pretty sure that a power limited 3090 at 720p gaming will perform worse than a 3070. .
So yes, you're right on that point, but at that point the GPU isn't the determinant of efficiency, the external bottleneck is. Which renders talking about GPU efficiency meaningless.unless specifically bottlenecked elsewhere
And no, a 3090 power limited to the same level as a 3070 will not in general perform worse at 720p. It might in some games - certain applications don't scale equally well in parallel, or hit execution ceilings that need higher clocks or IPC to be overcome - but in general, the 3090 will always be faster in GPU-bound workloads. It could theoretically be VRAM limited, but with 24GB that is quite unlikely, and no games come close to saturating a PCIe 4.0 x16 connection, so that isn't a differentiating factor either. And given how voltage/frequency curves work - the power cost for increasing clocks increases as you push clocks higher - a lower clocked, wider GPU will almost always be more efficient than a smaller, higher clocked GPU at the same power levels.
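To make the V/f-curve point concrete, here's a toy model - all numbers (unit counts, clocks, the voltage/frequency slope) are hypothetical, chosen only to illustrate the shape of the relationship, not measured from any real card:

```python
# Toy model of GPU dynamic power: P ~ units * V^2 * f, with voltage rising
# roughly linearly with clock speed. All numbers here are made up for
# illustration -- the point is the shape of the curve, not the exact values.

def voltage(freq_ghz, v_at_1ghz=0.70, v_per_ghz=0.25):
    # Hypothetical linear V/f curve: higher clocks need higher voltage.
    return v_at_1ghz + v_per_ghz * (freq_ghz - 1.0)

def power(units, freq_ghz):
    # Dynamic power scales with unit count, V^2 and frequency.
    return units * voltage(freq_ghz) ** 2 * freq_ghz

def throughput(units, freq_ghz):
    # Idealized, perfectly parallel workload: width and clocks scale equally.
    return units * freq_ghz

wide = (104, 1.4)    # "wide and slow": many SMs at a modest clock (hypothetical)
narrow = (46, 1.9)   # "narrow and fast": fewer SMs pushed harder (hypothetical)

for name, (units, f) in [("wide/slow", wide), ("narrow/fast", narrow)]:
    print(f"{name}: perf/W = {throughput(units, f) / power(units, f):.2f}")
# The wide, lower-clocked config comes out ahead on perf/W, because the
# narrow one pays the V^2 penalty for its higher clocks.
```

The V² term is what does the damage: doubling throughput by doubling clocks costs far more power than doubling it by doubling width.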
> I'm sure the 3970X is over twice as fast as the M1 Ultra. Going by CBR23 results, it scores 47+k. So what do you mean, same ballpark?

Cinebench seems to be somewhat of an outlier in terms of M1 performance (which might be down to how well it scales with SMT - that's a big part of AMD's historical advantage over Intel in that comparison, at least, as their SMT is better than Intel's). Not that GeekBench is a representative benchmark either, but in that, the M1U scores ~24000 compared to 26500 for a 3975WX system. That 3975WX system is running quite slow RAM (DDR4-2400), but with 2x the channels of a 3970X it should still overall be faster in anything affected by memory bandwidth. Still, that's a ballpark result IMO, even if it is still a clear victory for the TR chip. "A bit faster", like I said above, fits pretty well.
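For what it's worth, the gap in those GeekBench numbers is easy to put a figure on (multi-core scores as quoted above):

```python
# Multi-core GeekBench scores as quoted above.
m1_ultra = 24000
tr_3975wx = 26500

lead = tr_3975wx / m1_ultra - 1
print(f"3975WX lead over M1 Ultra: {lead:.0%}")  # ~10% -- "a bit faster", not 2x
```

A ~10% lead is a different claim from "over twice as fast"; the 2x figure only appears if you compare Cinebench specifically.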