Joined: Aug 4, 2020 · Messages: 1,624 (1.01/day) · Location: ::1
> wtb xbox w/ unlocked uefi bootloader

Ah, you want an Xbox!
GIMME
NAO
System Name | WS#1337 |
---|---|
Processor | Ryzen 7 5700X3D |
Motherboard | ASUS X570-PLUS TUF Gaming |
Cooling | Xigmatek Scylla 240mm AIO |
Memory | 64GB DDR4-3600(4x16) |
Video Card(s) | MSI RTX 3070 Gaming X Trio |
Storage | ADATA Legend 2TB |
Display(s) | Samsung Viewfinity Ultra S6 (34" UW) |
Case | ghetto CM Cosmos RC-1000 |
Audio Device(s) | ALC1220 |
Power Supply | SeaSonic SSR-550FX (80+ GOLD) |
Mouse | Logitech G603 |
Keyboard | Modecom Volcano Blade (Kailh choc LP) |
VR HMD | Google Daydream View headset (aka fancy cardboard) |
Software | Windows 11, Ubuntu 24.04 LTS |
> Current APUs are actually pretty decent, though obviously you can't expect 1080p high or ultra in anything demanding. I recently saw a "leak" (to be taken with the requisite amount of salt, obviously) of a possible Rembrandt/6000-series AMD APU (8c8t Zen3, unknown no. of RDNA2 CUs) scoring 2700 in Time Spy, placing it ~200 points ahead of a 1050 Ti and ~600 points behind a 1650. Now, a 1650 is hardly a powerful GPU, but an APU beating a 1050 Ti is still pretty good, and a significant improvement over current APUs. Looking forward to seeing what DDR5 and RDNA2 bring to APUs.

That's a good indication of purposeful stagnation. It's almost 2022, and the most we get is a promise of 1050 Ti-level performance, while over 3 years ago we had glimpses of what was possible with the i7-8809G or the custom Zen+ chip in the Subor Z+. Both offered GTX 1060/RX 470-level performance, which is still good enough for 1080p gaming at adequate quality settings. With advances in graphics, and with super-resolution in the picture, it's more than feasible, and there's huge demand for it on the market.
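For context, here's that comparison as quick arithmetic. The scores are the unverified leak figures quoted above, so treat the output as illustrative only:

```python
# Relative positioning from the quoted (unverified) Time Spy scores.
scores = {
    "leaked Rembrandt APU": 2700,
    "GTX 1050 Ti": 2700 - 200,   # quoted as ~200 points behind the APU
    "GTX 1650": 2700 + 600,      # quoted as ~600 points ahead of the APU
}

apu = scores["leaked Rembrandt APU"]
for gpu, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{gpu}: {s} ({s / apu:.0%} of the APU's score)")
```

In other words, the leaked part would sit roughly a tenth above a 1050 Ti and a fifth below a 1650 on this one benchmark.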
System Name | Bragging Rights |
---|---|
Processor | Atom Z3735F 1.33GHz |
Motherboard | It has no markings but it's green |
Cooling | No, it's a 2.2W processor |
Memory | 2GB DDR3L-1333 |
Video Card(s) | Gen7 Intel HD (4EU @ 311MHz) |
Storage | 32GB eMMC and 128GB Sandisk Extreme U3 |
Display(s) | 10" IPS 1280x800 60Hz |
Case | Veddha T2 |
Audio Device(s) | Apparently, yes |
Power Supply | Samsung 18W 5V fast-charger |
Mouse | MX Anywhere 2 |
Keyboard | Logitech MX Keys (not Cherry MX at all) |
VR HMD | Samsung Odyssey, not that I'd plug it into this though... |
Software | W10 21H1, barely |
Benchmark Scores | I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000. |
> You're not entirely wrong, but you can't discount the effect of slow GDDR on CPU performance either. CPUs need low latency more than bandwidth, so GDDR for a CPU is just begging for a performance loss. IMO that's as much at fault for the poor CPU performance as the smaller caches - remember, the mobile/desktop APUs have similarly small caches and still perform well.

Ah yes, I'd completely forgotten about that - in addition to the hamstrung PCIe bandwidth, TDP limitation, and clock speed disadvantage, the high-latency GDDR wouldn't be helping, and I'm going to assume that the silicon's IMC only supports GDDR6, so there's no option to use DDR4.
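The latency-vs-bandwidth point can be illustrated with a toy sketch: a dependent pointer chase (every load waits on the previous one, like latency-bound CPU code) versus an independent sequential pass (the streaming pattern GPUs are built around). Python interpreter overhead dominates the absolute numbers, so this is only an analogy for the two access patterns, not a memory benchmark:

```python
# Toy illustration: dependent "pointer chase" (latency-sensitive, CPU-like)
# vs. independent sequential streaming (bandwidth-friendly, GPU-like).
import random
import time

N = 1_000_000
perm = list(range(N))
random.shuffle(perm)

# Latency-bound: each index load depends on the result of the previous one,
# so nothing can be overlapped or prefetched ahead of time.
i = 0
t0 = time.perf_counter()
for _ in range(N):
    i = perm[i]
chase = time.perf_counter() - t0

# Throughput-bound: every access is independent and sequential,
# so the memory system can stream data at full bandwidth.
t0 = time.perf_counter()
total = sum(perm)
stream = time.perf_counter() - t0

print(f"pointer chase: {chase:.3f}s, sequential stream: {stream:.3f}s")
```

The dependent chain is the structure that makes a CPU wait out the full DRAM round-trip on every miss, which is why high-latency GDDR hurts CPU workloads even when its bandwidth is enormous.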
System Name | Hotbox |
---|---|
Processor | AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6) |
Motherboard | ASRock Phantom Gaming B550 ITX/ax |
Cooling | LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14 |
Memory | 32GB G.Skill FlareX 3200c14 @3800c15 |
Video Card(s) | PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W |
Storage | 2TB Adata SX8200 Pro |
Display(s) | Dell U2711 main, AOC 24P2C secondary |
Case | SSUPD Meshlicious |
Audio Device(s) | Optoma Nuforce μDAC 3 |
Power Supply | Corsair SF750 Platinum |
Mouse | Logitech G603 |
Keyboard | Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps |
Software | Windows 10 Pro |
> That's a good indication of purposeful stagnation. It's almost 2022, and the most we get is a promise of 1050 Ti-level performance, while over 3 years ago we had glimpses of what was possible with the i7-8809G or the custom Zen+ chip in the Subor Z+. Both offered GTX 1060/RX 470-level performance, which is still good enough for 1080p gaming at adequate quality settings. With advances in graphics, and with super-resolution in the picture, it's more than feasible, and there's huge demand for it on the market.

Is it indicative of purposeful stagnation, though? The Subor Z+ has a custom motherboard with onboard GDDR5, and KBL-G has HBM. Custom DIY desktop motherboards with onboard RAM will never, ever be a sane business decision, and thus won't happen. Even with the advent of DDR5, there are limits to how many CUs you can feed over a 128-bit interface. AMD clearly found that 8 high-clocked Vega CUs was as high as there was any point in going with DDR4 (which is supported by how well these APUs scale with RAM overclocking). That's with DDR4-3200 as a baseline. Assuming a move to JEDEC DDR5, DDR5-4800 (a likely baseline for laptops and OEM systems for the first generation at least) would represent exactly the same bandwidth per CU per clock for a 12CU APU as DDR4-3200 does for an 8CU one. I have no idea how the change from Vega to RDNA2 will affect that (if anything, I expect the increased architectural efficiency to increase DRAM bandwidth pressure), but assuming no change in that or clock speeds, 12CUs is about as high as they can reasonably go without worsening the DRAM bottleneck.
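That bandwidth-per-CU equivalence checks out with quick arithmetic. A minimal sketch, assuming a dual-channel 128-bit interface and nominal (peak) transfer rates:

```python
# Back-of-the-envelope DRAM bandwidth per iGPU CU.
# Dual channel, 64 bits (8 bytes) per channel; MT/s x bytes/transfer = GB/s.
def dram_bw_gbps(mt_per_s, channels=2, bytes_per_channel=8):
    return mt_per_s * channels * bytes_per_channel / 1000

ddr4_3200 = dram_bw_gbps(3200)   # 51.2 GB/s total
ddr5_4800 = dram_bw_gbps(4800)   # 76.8 GB/s total

print(ddr4_3200 / 8)    # GB/s per CU for an 8CU DDR4-3200 APU
print(ddr5_4800 / 12)   # GB/s per CU for a 12CU DDR5-4800 APU
```

Both work out to 6.4 GB/s per CU. Real-world figures differ (DDR5 splits each DIMM into two 32-bit subchannels, and sustained bandwidth is below peak), but the ratio is the point: a 1.5x data-rate bump feeds exactly 1.5x the CUs at the same per-CU bandwidth.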
Now, I would absolutely love to see a 20+CU APU, and I hope we will in the not-too-distant future, but given the die space requirements, that would necessitate AMD moving to a two-die or MCM strategy - and so far their APUs have all been single monolithic dies, with no MCM designs, likely for cost reasons. A new monolithic die would be both large and relatively niche, so it would also be expensive, and if it's still held back by RAM bandwidth (disproportionately so compared to lower-CU-count APUs), it might simply not make sense unless they also add something like HBM, which would drive up prices even further.
> How feasible would it be to merge the IOD with the IGPD? I.e., there'd be compute dies much like the current non-IGP Ryzens, then an IO/IGP die, potentially w/ stripped-down IO. Then you could also make parts w/ the same core config and better IO w/ a pure IOD, or ones w/ the IO/IGP die.
> Not sure how feasible/economical that'd be tho.

That's an excellent question. There are pretty persistent rumors of future AMD CPUs with iGPUs, which would most likely indicate a small (4CU?) GPU on the IOD. Two challenges with this:
> I've been shouting for more CUs in AMD's APUs for years now, and the naysayers have always said it's pointless because DDR4 bandwidth was the limiting factor. DDR5 is here, so let's see at least 12CU solutions as the default in silicon, please - the extra four CUs really don't use much die space compared to the cache and the cores.

Yeah, that's what that leak I linked to indicates. Fingers crossed! I would love my next laptop to be a 12CU+LPDDR5(X?) beast.
> As for larger CU solutions, perhaps MCM is the way to go - rather than modifying the Zen3/Zen4 CCD die, why not make an I/O die with an additional 16CU or 24CU IGP on it? It's one step closer to the all-important IMC, and it's likely cheaper than designing a large monolithic die. Ultra-mobile parts can still get by with the essential IGP on their monolithic die, as process node and performance/watt are still the most vital aspects there, but desktops and >25W mobile parts could easily go MCM.

That's my hope. MCM packaging for mobile is tricky due to Z-height (OEMs need their thin laptops!), but hopefully they can get that down to a reasonable level. An IOD that large in a desktop socket will be problematic though - you can't make L-shaped chips, and they need to fit at least one CCD, so there are pretty strict limits to how much larger they can go compared to current IODs. There is room to grow, but not that much - so the main question becomes how much they can shrink the current IOD design if they move to a denser node (considering that IO doesn't scale well), and how much they can cut before people start complaining.
> If memory serves, Zen 4 IODs will be made on 7nm. The only reason they're still recycling the current anemic 14nm IODs is that they still have an agreement w/ GloFo to purchase a certain allotment of wafers (or something xd).
> There's also a real incentive to do this (mobile mostly), because the current 14nm IOD gobbles power like mad. Even when it's not really doing anything, it consumes like 20W, which is fairly insane.

I wonder how much that will improve, seeing how I/O is notorious for not scaling particularly well with node changes - it improves, but nowhere near as well as logic does. Also, the main power draw in the IOD is sadly likely IF, as it needs to keep 1-2 very high speed links active the entire time the CPU is running, which definitely doesn't come for free.
> The Subor Z+ has a custom motherboard with onboard GDDR5

Dude, not sure if you've noticed, but on today's menu is a custom board with soldered GDDR6.
> Is it indicative of purposeful stagnation, though?

Of course. We have 3 generations and 5 lineups of Zen APUs using the same Vega graphics (which ironically shrank along with the process node). C'mon, we have octa-core ULP chips that outperform 90W desktop CPUs from 4 years ago, we have DDR4 that's nearly twice as fast as back then (and DDR5 hitting the shelves as we speak), but the good ol' Vega iGPUs just keep shrinking and shrinking, compensating solely with higher clocks. And Navi iGPUs are still nowhere to be seen. AMD will jump straight to Navi2 sometime this century, but until then the best we've got is bragging that our Vega 7/8 beats the living s#$t out of a GT1030 D4.
> Oh, btw, MCM at 25W might be rather tricky - Infinity Fabric consumes quite a bit of power even at idle, and even on 1-CCD CPUs - easily a noticeable amount for a mobile chip, and to a degree that might be a problem for idle power (which needs to go into the 1W range for a competitive modern mobile CPU/APU design). They could likely lower this if they used some kind of EMIB-like packaging for the interconnect rather than going through the substrate, or clocked it much lower, but it's not quite as simple as copying current designs, sadly.

I said >25W, so basically that means the 45W H-series chips and above for DTR laptops, but yeah, I'd forgotten that MCM has higher idle power draw - probably the reason AMD has gone monolithic for all their mobile-focused APUs. I guess that means 12CU is as much as we can hope for then, though with the increased clocks the RDNA2 architecture can achieve, coupled with the improved IPC, it should at least be equivalent to a Vega 20, even if AMD doesn't actually make a 20CU part.
> Dude, not sure if you've noticed, but on today's menu is a custom board with soldered GDDR6.

...and? Let's see: it's using an SoC that literally doesn't have a DDR4 controller (or any other DRAM controller except GDDR6) and thus can't use anything other than GDDR6; it's an SoC that wouldn't work in any existing socket, whether desktop or mobile BGA, and thus necessitates a custom board; it's a board that isn't sold to the general public and doesn't have broad distribution (OEMs and gray-market sellers only); and it's a product that exists only because AMD is making tens of millions of console SoCs, and even at the low, low defect rates of TSMC 7nm there are bound to be heaps of chips that fail to meet spec. So: this is a product born out of either opportunity ("free chips! What do we do with them?") or necessity ("ugh, we have thousands of these dud console chips, we need to put them to use somehow"). Either way, it's not a product that fits into any other product stack, and it's a product with very low costs for AMD (as Sony/MS have already bought the wafers; AMD can likely buy back the defects dirt cheap, or have keeping them stipulated in the contracts).
> Of course. We have 3 generations and 5 lineups of Zen APUs using the same Vega graphics (which ironically shrank along with the process node). C'mon, we have octa-core ULP chips that outperform 90W desktop CPUs from 4 years ago, we have DDR4 that's nearly twice as fast as back then (and DDR5 hitting the shelves as we speak), but the good ol' Vega iGPUs just keep shrinking and shrinking, compensating solely with higher clocks. And Navi iGPUs are still nowhere to be seen. AMD will jump straight to Navi2 sometime this century, but until then the best we've got is bragging that our Vega 7/8 beats the living s#$t out of a GT1030 D4.

That's not a valid comparison. CPUs are, in the vast majority of consumer tasks, in no way limited by DRAM bandwidth. So the growth you're describing in CPUs is what happens when nothing else is holding you back and you have major architectural improvements. GPUs, on the other hand, are essentially always DRAM bandwidth limited. Hence there's little point in sticking a bigger iGPU on something if you can't keep it fed. And yeah, I also think it's a bit odd that they're still using Vega, but given how it performs, I don't mind. It was probably cheaper and easier, and it saved them the headache of either designing a tiny RDNA1 chip or really rushing an RDNA2 design for current APUs. It's probably down to cost savings (or at least investing less) in the end, but I doubt there is anything tangibly better they could have given us up until now. Seeing Vega live on in APUs until 2021 is indeed odd, but it performs fine for what it is, and without DDR5, RDNA(2) likely wouldn't have given us much more performance anyhow.
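To put rough numbers on that trade-off: peak bandwidth strongly favors the console-style GDDR6 setup, which is exactly why it suits a GPU-heavy SoC even though (as discussed above) its latency characteristics hurt the CPU side. A minimal sketch using typical, assumed figures - a 256-bit GDDR6 bus at 14 GT/s versus dual-channel DDR4-3200:

```python
# Peak DRAM bandwidth: bus width (bits) x data rate (GT/s) / 8 bits-per-byte -> GB/s.
def peak_bw_gbps(bus_bits, gtps):
    return bus_bits * gtps / 8

print(peak_bw_gbps(256, 14))   # 256-bit GDDR6 @ 14 GT/s (assumed console-like config)
print(peak_bw_gbps(128, 3.2))  # dual-channel DDR4-3200 (128-bit, 3.2 GT/s)
```

That's 448 GB/s versus 51.2 GB/s - nearly a 9x gap in favor of GDDR6, which a bandwidth-hungry iGPU can actually use, while the CPU cores mostly just eat the extra latency.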