- Joined
- Jan 31, 2017
- Messages
- 50 (0.02/day)
Dishnetwork is doing a great job!
> Nvidia still thinking monolithic is a good idea. Saddening

It kinda is necessary for graphics. Dividing the workload between two dies is very tough at the performance level we're at these days. It's also why SLI/CrossFire died as the cards got more powerful. Nvidia would definitely kill SLI for the money, but I'm pretty sure AMD (the underdog) wouldn't have killed CrossFire unless they couldn't get it to work well enough to be worth it.
> It kinda is necessary for graphics. Dividing the workload between 2 dies is very tough at the performance level we have these days…

MI250X and CDNA say hi
> MI250X and CDNA say hi

Multi-GPU is alive and thriving for server workloads. It is dead in consumer gaming hardware.
> MCM in consumer gaming hardware is inevitable

Of this I have no doubt. I just wanted to convey that there is a very good reason GPUs are monolithic while CPUs have been able to easily follow a multi-die strategy as far back as the first Core 2 Quads. Think about a GPU generating an image: a light source in one corner of the screen can cast a shadow in the opposite corner, so that workload can't be divided easily. And these days games use temporal techniques like TAA, which rely on previously generated frames to create new frames faster, so that workload can't be divided easily either. The net result is that you can't divide the workload spatially, because that's how light works, and you can't divide it temporally, because of techniques like TAA. That's why consumer GPUs found it impossible to keep supporting multi-GPU as they evolved to their current state.
> Of this I have no doubt. I just wanted to convey that there is a very good reason GPUs are monolithic while CPUs have been able to easily follow a multi-die strategy…

Oh yeah, of course. When consumer GPUs will take that leap is unknown, but it will happen, and I'm interested to see it. As for Hopper, staying monolithic on the server side is saddening.
I am not saying multi die GPU in consumer graphics is impossible. But it will take a lot more time and effort than we expect to get there.
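The spatial-division argument above can be sketched with a toy example. This is illustrative Python only, not any real graphics API; the 1-D "screen", the occluder set, and the `lit()` helper are all invented for the illustration. The point it shows: a pixel in the left half of the frame can be shadowed by geometry that sits entirely in the right half, so naively giving each die half the screen still requires each die to know about the whole scene.

```python
# Toy 1-D renderer: pixels along a line, one light, one occluder.
# Not a real graphics API; names are made up for this sketch.

WIDTH = 8
occluders = {6}   # a "wall" at x=6, in the RIGHT half of the screen
light_x = 7       # light source at the far right edge

def lit(pixel_x):
    """A pixel is lit only if no occluder sits between it and the light."""
    lo, hi = sorted((pixel_x, light_x))
    return not any(lo < o < hi for o in occluders)

# Pretend "GPU 0" renders the left half and "GPU 1" the right half.
left = [lit(x) for x in range(0, WIDTH // 2)]
right = [lit(x) for x in range(WIDTH // 2, WIDTH)]

# Every left-half pixel is in shadow, but the occluder causing that
# shadow lives in the right half's territory. GPU 0 cannot compute
# its half correctly from its half of the scene alone.
print(left)   # [False, False, False, False]
print(right)  # [False, False, True, True]
```

The same coupling shows up temporally: TAA-style techniques read the previous frame, so you can't hand odd and even frames to different dies without shipping frame history between them.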
| System Name | Tiny the White Yeti |
|---|---|
| Processor | 7800X3D |
| Motherboard | MSI MAG Mortar B650M WiFi |
| Cooling | CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3 |
| Memory | 32GB Corsair Vengeance 30CL6000 |
| Video Card(s) | ASRock RX 7900 XT Phantom Gaming |
| Storage | Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB |
| Display(s) | Gigabyte G34QWC (3440x1440) |
| Case | Lian Li A3 mATX White |
| Audio Device(s) | Harman Kardon AVR137 + 2.1 |
| Power Supply | EVGA Supernova G2 750W |
| Mouse | SteelSeries Aerox 5 |
| Keyboard | Lenovo ThinkPad TrackPoint II |
| VR HMD | HD 420 - Green Edition ;) |
| Software | W11 IoT Enterprise LTSC |
| Benchmark Scores | Over 9000 |
We'll see, I'm not gonna make assumptions like that until I see it with my own eyes. I learned my lesson about immediately assuming stuff in the tech space.
MCM in consumer gaming hardware is inevitable; the 3090 is the biggest example of that. Look at how inefficient that monolithic design is, and at how difficult it is to get clock increases when every bump you give it has to be applied to 10k+ cores on the same die. Whether or not it works well remains to be seen.