I don't think the person you quoted mentioned CPUs whatsoever... AMD and Nvidia have some idea, but ARM/Qualcomm is light years ahead of them.
PS. Scaling GPU CU counts isn't as bad as scaling CPU core counts, because GPU operations are simpler and more easily parallelized across many CUs.
The variety of tasks assigned to a CPU makes it harder to scale and reduces efficiency, because the CPU also has to handle a lot of single-threaded work.
But my interest in this discussion is in GPUs, not CPUs.
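The point above about single-threaded work limiting CPU scaling is essentially Amdahl's law. A minimal Python sketch of the idea follows; the serial fractions (0.1% for a GPU-like workload, 5% for a CPU-like one) are illustrative assumptions, not measured numbers:

```python
def amdahl_speedup(serial_fraction: float, n_units: int) -> float:
    """Ideal speedup on n_units parallel units when serial_fraction
    of the work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

for units in (8, 64, 4096):
    # GPU-like workload: almost perfectly parallel (0.1% serial)
    gpu_like = amdahl_speedup(0.001, units)
    # CPU-like workload: noticeable serial portion (5% serial)
    cpu_like = amdahl_speedup(0.05, units)
    print(f"{units:5d} units: GPU-like x{gpu_like:.1f}, CPU-like x{cpu_like:.1f}")
```

With a 5% serial fraction, speedup caps at 20x no matter how many cores you add, while the near-fully-parallel workload keeps scaling, which is why piling on CUs pays off in a way that piling on CPU cores does not.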
And, again, if ARM and Qualcomm could scale their GPUs up to much larger sizes without sacrificing efficiency, why haven't they done so? That would allow them entry into huge and very lucrative markets like consoles, gaming PCs, etc. Of course none of these come close to the sales volumes of smartphones, but smartphones also have near zero margins.
You're assuming they have some kind of magical technology that simply doesn't exist, as you're not taking into account the inherent efficiency that comes from designing for a small maximum size and overall limited layout. Smaller designs will always be more efficient than larger designs. Period. There's nothing saying that any current mobile GPU maker could match AMD or Nvidia at the 150-250W range, except maybe Apple. But given the drastic differences between mobile GPUs in power delivery, size and thus internal interconnects, VRAM interfaces and bus widths, thread/workload allocation, driver complexity, etc., etc., etc., there's no way of knowing until one of them tries.