
AMD is Allegedly Preparing Navi 31 GPU with Dual 80 CU Chiplet Design

Joined
Nov 21, 2010
Messages
2,360 (0.45/day)
Location
Right where I want to be
System Name Miami
Processor Ryzen 3800X
Motherboard Asus Crosshair VII Formula
Cooling Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover
Memory F4-3600C16Q-32GTZNC
Video Card(s) XFX 6900 XT Speedster 0
Storage 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD
Display(s) DELL AW3420DW / HP ZR24w
Case Lian Li O11 Dynamic XL
Audio Device(s) EVGA Nu Audio
Power Supply Seasonic Prime Gold 1000W+750W
Mouse Corsair Scimitar/Glorious Model O-
Keyboard Corsair K95 Platinum
Software Windows 10 Pro
Yeah like the PIII-1000 paper launch, Radeon X800 paper launch, GeForce 6800U, X1900, 8800 GTX, GTX 480... We've never had paper launches before.

tbf, with a paper launch there's no need to sweat it: the part is coming eventually. Now it's "How much time and effort could I put towards beating the scalpers and miners? Can I even beat the scalpers/miners?"
 
Joined
Sep 10, 2015
Messages
542 (0.16/day)
System Name My Addiction
Processor AMD Ryzen 7950X3D
Motherboard ASRock B650E PG-ITX WiFi
Cooling Alphacool Core Ocean T38 AIO 240mm
Memory G.Skill 32GB 6000MHz
Video Card(s) Sapphire Pulse 7900XTX
Storage Some SSDs
Display(s) 42" Samsung TV + 22" Dell monitor vertically
Case Lian Li A4-H2O
Audio Device(s) Denon + Bose
Power Supply Corsair SF750
Mouse Logitech
Keyboard Glorious
VR HMD None
Software Win 10
Benchmark Scores None taken
Nothing "paper" about it, besides, perhaps, MSRP price, but I was told gamers were fine with it, 2080Ti, cough.

Well, if you remember Jensen's speech at launch, gamers weren't fine with it. That's why he had to specifically address GTX 10-series owners, saying "Now is the time to buy new GPUs." Screwed them again, but it ultimately screwed the company too, because only a small minority got the cards: those who either managed to get one from NVIDIA at MSRP, or who had already bought an RTX 2080 Ti and didn't care anyway...
 
Joined
Jul 9, 2015
Messages
3,549 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Those numbers are extremely bad given we are months out from launch. In fact they are downright pathetic. That definitely says paper launch to me.
Your statement makes ZERO SENSE.
You have NO IDEA how many cards said retailer sells normally.
Oh, yeah, actually it's roughly THE number they are selling now.
Oh, and almost none of those cards are sold out either; they're all yours, just go and buy one.

There is nothing "paper" in this launch at this point, it's just it came with price gauging not only on the green side (which people are used to) but also on red side.
 
Joined
Sep 1, 2020
Messages
2,618 (1.58/day)
Location
Bulgaria
You'd be surprised what governments pay for the right leading edge technology
Show me, please, especially for hardware. What leading-edge technology makes the government hardware in use (running DOS, 95/98, 2000, XP... :D) different from and better than the hardware (supercomputers or other devices) in private hands? I hope this is not another conspiracy theory. If it is, just stop!
 
Joined
Mar 21, 2020
Messages
77 (0.04/day)
First AMD put two RX 5700 XTs' worth of cores into one GPU and the RX 6000 series was born; now they want to push four 5700 XTs' worth into one GPU for the RX 7000 series.
Is that really how it is, that nothing genuinely new is coming out of AMD engineering? Come on, enough already.

Anyway, looking at the future, a 300 W TDP will soon be nothing; we'll treat 400 W GPU TDPs as 'normal'.

I think it's the wrong way to go, but with today's engineering and process tech it looks unavoidable.

Maybe once we get to around a 3 nm process, staying under a 300 W TDP might happen, at least for a short time.

My opinion is that 300 W TDP should be the limit. Soon we'll need a 1000 W PSU for a normal gaming rig, and that's not good, I mean these days.
 
Joined
Jul 13, 2016
Messages
3,543 (1.12/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Oh, and almost none of those cards are sold out either; they're all yours, just go and buy one.

1. They are factually sold out at every US retailer, so yes, they are sold out by definition.
2. Being available from 3rd parties doesn't change the fact of their sold-out status at official retailers.
3. Advising people to buy from scalpers, paying 2-3 times the price of an already expensive product, isn't a solution. When people say you can buy something, it's implied that said product has general availability and comes from a reputable source. I didn't go around telling my friends they could have bought the 3900X at launch, because it was very unlikely they could in fact do so at a reasonable cost. Your comment is the picture-perfect internet comment: "But you can buy it!!!" Yeah, if you ignore that you're paying 2 to 3 times the already high price, losing the warranty, and buying only from a 3rd party. Context is important, and it's precisely there that your point falls flat on its face. The internet is the only place you can make an argument stripped of such context, because if you told your other friends with computers that they could buy the card, and then later added "but at 2-3x the price..." when they found out that all retailers are sold out, they wouldn't be your friends for very long. If it's not something you'd say to people you know, why are you trolling us on the forums with this poor logic?


Your statement makes ZERO SENSE.
You have NO IDEA how many cards said retailer sells normally.
Oh, yeah, actually it's roughly THE number they are selling now.

I do have evidence of absence, which is a legitimate form of evidence.

On the other hand, you are submitting a claim with zero evidence. You claim that stock numbers are normal but provide no definition of "normal" nor any data to back it up.

Lay off the weed, buddy; you can't even put together a coherent reply, let alone debate a topic.
 
Joined
Apr 30, 2011
Messages
2,758 (0.54/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
First AMD put two RX 5700 XTs' worth of cores into one GPU and the RX 6000 series was born; now they want to push four 5700 XTs' worth into one GPU for the RX 7000 series.
Is that really how it is, that nothing genuinely new is coming out of AMD engineering? Come on, enough already.

Anyway, looking at the future, a 300 W TDP will soon be nothing; we'll treat 400 W GPU TDPs as 'normal'.

I think it's the wrong way to go, but with today's engineering and process tech it looks unavoidable.

Maybe once we get to around a 3 nm process, staying under a 300 W TDP might happen, at least for a short time.

My opinion is that 300 W TDP should be the limit. Soon we'll need a 1000 W PSU for a normal gaming rig, and that's not good, I mean these days.
Since the 6900 XT, with twice the shaders of the 5700 XT, made on the same node, clocked higher and with double the VRAM, consumes only 300 W, which is just ~36% more than the 5700 XT's power draw, imho if Navi 31 is made on a 5 nm node and with HBM it will easily consume closer to 350 W than to 400 W.
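Rough napkin math for that kind of scaling estimate (a minimal sketch; the board-power figures are approximate reference values and the 5 nm / HBM factors are pure guesses, nothing AMD has confirmed for Navi 31):

Code:
# Napkin math only -- every number below is an assumption, not an official spec.
tbp_5700xt = 225                            # W, approx. board power of the 5700 XT
tbp_6900xt = 300                            # W, double the CUs on the same 7 nm node
per_doubling = tbp_6900xt / tbp_5700xt      # ~1.33x power for 2x the CUs

node_gain_5nm = 0.80                        # guess: ~20% less power at iso-performance on 5 nm
hbm_saving = 0.95                           # guess: a few percent saved by moving to HBM

navi31_estimate = tbp_6900xt * per_doubling * node_gain_5nm * hbm_saving
print(f"Dual 80 CU Navi 31 estimate: ~{navi31_estimate:.0f} W")   # prints ~304 W

Treat it as an illustration of the reasoning, not a prediction; nudging any of the guessed factors easily moves the result toward the 350 W mark.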
 
Joined
Jul 9, 2015
Messages
3,549 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
They are factually sold out at every US retailer
However, they have been available FOR WEEKS AND WEEKS at a German retailer.
Puzzling, eh?

And what about the PS5 and XSeX, by the way? Nowhere to be found last time I checked; were those also "paper launched"?
2. Being available from 3rd parties doesn't change the fact of their sold-out status at official retailers.
There is nothing "unofficial" about Mindfactory.
In fact, what is unusual for the German market is buying from the amd.com site.

3. Advising people to buy from scalpers, paying 2-3 times the price of an already expensive product, isn't a solution.
a) The MSRP of AIB cards was higher upfront.
b) It is up to 60% over MSRP, not 2-3 times.
c) There seem to be plenty of buyers at that price; once the market runs out of those people (though why would it, given how many bought a 2080 Ti for $200 more than the official MSRP), prices might go down.

I do have evidence of absence, which is a legitimate form of evidence.
BS.
Your statement would only make sense if sales were lower than before, something you have NO IDEA about, yet you make the claim anyway. That describes the whole "paper launch" BS from the whiny camp.

Since the 6900 XT, with twice the shaders of the 5700 XT, made on the same node, clocked higher and with double the VRAM, consumes only 300 W, which is just ~36% more than the 5700 XT's power draw, imho if Navi 31 is made on a 5 nm node and with HBM it will easily consume closer to 350 W than to 400 W.
It's not the same 7nm node though.

I doubt that AMD would use expensive HBM again, given how 6900XT manages to beat 3090 despite using notably slower VRAM on a narrower bus.
 
Joined
Jul 16, 2014
Messages
8,237 (2.12/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Artic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse Steeseries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
Why do people keep mumbling "paper launch" when cards have been available in online stores (e.g. Mindfactory in Germany) for WEEKS?
Now, yeah, pricing is way above MSRP, but so was the 2080 Ti's price up until Ampere came.
Oh wait, I thought this was Intel PR for its 10 nm, my bad....


:D :roll: :D
 
Joined
May 2, 2017
Messages
7,762 (2.70/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
And what about the PS5 and XSeX, by the way? Nowhere to be found last time I checked; were those also "paper launched"?
Arguably, yes. Though of course there's also the issue there of no interesting titles available either, so ... more like a "don't worry about availability, it will be better once there is actually a point to buying these."

As for German stocks, Mindfactory doesn't have a single 3080 SKU in stock, but they do have some 3070s ... at prices significantly higher than the 3080's MSRP. They also have some 3090s at near €500 premiums. Caseking has a single €2250 3090 SKU in stock, no 3080s, no 3070s and no 3060 Tis. Anywhere else you had in mind?
It's probably not even an if, but a question of the intended market. My best assumption is that AMD is planning to use the less perfect monolithic chip yields for MCM, aimed at serious compute workloads that aren't latency-critical (or are less so) and are more about parallel computational throughput, while they improve that tech over time. Similar to EPYC, that reduces the cost of the more premium, perfect single-chiplet yields. If it's used for gaming at all, it'll likely start with niche enthusiasts initially and be catered more toward niche markets where the latency concerns are less damning, like 4K and 8K, because frame rates are lower and GPU rendering demands are much higher. The latency penalty is going to be more pronounced at lower resolutions and high refresh rates, so they'd target that market last.

The lucrative markets that actually require GPUs for work, as opposed to fun and games, will help subsidize MCM GPU tech for gaming in future iterations, with latency improvements down the road, because it's sure to be refined over time. Luckily, what AMD has learned from Ryzen and RDNA2 will help a lot in practice. I don't think this tech will have as many problems as Ryzen had with its CCX issues, because some of that has already been ironed out with Ryzen, and Infinity Cache is a big innovation in itself. AMD is gearing up for major innovation in the coming years, as is the industry as a whole.



You'd be surprised what governments pay for the right leading edge technology.
This rumor specifically says RDNA, so this isn't headed for compute workloads - that's what CDNA is for. RDNA's biggest improvements came in gaming performance/TFLOP after all, which doesn't quite make sense for compute workloads. CDNA has significantly better compute performance per watt and per die area than RDNA - otherwise AMD wouldn't have gone to the trouble of branching off the two architectures, after all. Of course RDNA 2 delivered massive perf/W gains, but I would be quite shocked if none of those carried over to newer revisions of CDNA as well. I would indeed be surprised if we saw MCM GPUs with RDNA before CDNA - that kind of tech seems like a shoo-in for compute and HPC - but that's not what this rumor alleges. Of course it's entirely possible that there are MCM CDNA GPUs in the pipeline before this - enterprise GPUs don't tend to get much press attention.
 
Joined
Mar 21, 2016
Messages
2,627 (0.80/day)
To be fair, if it works well enough in terms of latency, even in its infancy, they'll use it for gaming. Frankly, it only needs to improve a bit over how CF/SLI has traditionally performed and enough people will be interested for AMD to refine it further over time. I still think the lion's share of the potential and profit for MCM GPUs, especially initially, doesn't exactly reside with gaming.

Also, what's to say this isn't the intended revision of CDNA initially? Just being based on RDNA doesn't mean a lot once you venture down the MCM GPU design path, where you could potentially double, triple, or quadruple performance with chiplets, at least when done well. From a time-to-market standpoint, not redesigning another CDNA-specific chip and using MCM in its place is probably better anyway, for now at least. It's hard to say what AMD's plans are for which market; until more details are known it's too hard to tell definitively at this stage. There's a good chance you're right about the RDNA/CDNA aspect, though, and if that is the case, AMD must feel confident enough about the MCM approach and its potential.

To be fair, I can't see it being bad myself. Even if it's basically CF/SLI with some Infinity Cache between the chiplets, that's still good progress over the previous situation. Just having a single PCB can cut power a lot for a two-chip GPU versus two discrete cards, and if they can make the chips sync with less hardware overhead penalty, that's going to be big progress. AMD's choice of words is something to think about too: "orchestration" gives the impression of a rather seamlessly synchronized workload. Let's hope it's more than a clever name that's more suggestive than functionally substantive.
 
Joined
Jul 9, 2015
Messages
3,549 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Arguably, yes. Though of course there's also the issue there of no interesting titles available either, so ... more like a "don't worry about availability, it will be better once there is actually a point to buying these."
3.4 million PS5 units were sold in the first 4 weeks.
"Paper launch" is a nonsensical take on how the availability of new, anticipated products works.

Mindfactory doesn't have a single 3080 SKU in stock
We were talking about AMD.
NV is clearly discouraged from producing the 3080 (why bother with GA102 when the GA104-based 3070 sells for nearly as much).
I think the last time Mindfactory had several hundred of those, it took about 15 hours to sell them at somewhat above 1k Euros.
 
Joined
Sep 8, 2018
Messages
51 (0.02/day)
I just feel sorry for people who didn't pick up a modern GPU last generation. They're completely boned right now.

It's understandable, too, because Turing was a complete rip-off for its first year on the market, and then when Navi arrived on the scene a year late to the party it was underwhelming AF; the 5700 wasn't even 15% faster than the Vega 56, and even at MSRP it actually offered lower performance/$. Street pricing on the Vega 56 by that point was way lower than the 5700 series ever reached.

The only "good" thing that happened that entire generation is that AMD rejoining the GPU market after a year of total nothingness brought TU104 down to under $500 in the 2070 Super. That didn't make it good value, but at least it restored price/performance to somewhere that wasn't obscene. Unfortunately Nvidia stopped manufacturing Turing around 9 months after the Super series launched, so if you didn't grab one during that short window you're SOL right now.

The Vega 56 is actually kind of amazing: you can flash a 64 BIOS onto it and it performs at about 95% of a Vega 64 in games, which itself performs between a 1080 and a 1080 Ti.

For the price it was hard to do better on perf/$ until this gen... the 3060/3060 Ti, once their prices drop, are a no-brainer in this area.
 
Joined
Nov 15, 2016
Messages
454 (0.15/day)
System Name Sillicon Nightmares
Processor Intel i7 9700KF 5ghz (5.1ghz 4 core load, no avx offset), 4.7ghz ring, 1.412vcore 1.3vcio 1.264vcsa
Motherboard Asus Z390 Strix F
Cooling DEEPCOOL Gamer Storm CAPTAIN 360
Memory 2x8GB G.Skill Trident Z RGB (B-Die) 3600 14-14-14-28 1t, tRFC 220 tREFI 65535, tFAW 16, 1.545vddq
Video Card(s) ASUS GTX 1060 Strix 6GB XOC, Core: 2202-2240, Vcore: 1.075v, Mem: 9818mhz (Sillicon Lottery Jackpot)
Storage Samsung 840 EVO 1TB SSD, WD Blue 1TB, Seagate 3TB, Samsung 970 Evo Plus 512GB
Display(s) BenQ XL2430 1080p 144HZ + (2) Samsung SyncMaster 913v 1280x1024 75HZ + A Shitty TV For Movies
Case Deepcool Genome ROG Edition
Audio Device(s) Bunta Sniff Speakers From The Tip Edition With Extra Kenwoods
Power Supply Corsair AX860i/Cable Mod Cables
Mouse Logitech G602 Spilled Beer Edition
Keyboard Dell KB4021
Software Windows 10 x64
Benchmark Scores 13543 Firestrike (3dmark.com/fs/22336777) 601 points CPU-Z ST 37.4ns AIDA Memory
cOmpUtE MonSteR
is slower than 3090 in fp32
 
Joined
May 3, 2018
Messages
2,881 (1.15/day)
So in Australia I've seen the 6900 XT being advertised for $2K, so I would expect this ridiculous card to be over $4K, as we'll no doubt see AMD also increase prices again with next-gen CPUs and GPUs.

I'd rather they increased the bus width, as it's patently obvious that the reason Navi 21 falls behind at 4K is the 256-bit bus. It was always a stupid decision for the high-end cards, despite what Infinity Cache achieves. The same will happen with the 6700 XT being only 192-bit.
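For reference, the raw bandwidth arithmetic behind that point (a quick sketch; the 6700 XT configuration was still only rumoured at the time, so treat it as an assumption):

Code:
# Raw GDDR6 bandwidth in GB/s = bus width (bits) / 8 * data rate (Gbps).
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 16))   # Navi 21 / 6900 XT: 512 GB/s
print(bandwidth_gbs(320, 19))   # RTX 3080 (GDDR6X): 760 GB/s
print(bandwidth_gbs(192, 16))   # rumoured 192-bit 6700 XT: 384 GB/s

Infinity Cache hides part of that gap at lower resolutions, but its hit rate drops at 4K, which is where the raw numbers start to show, matching the behaviour described above.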
 
Joined
Apr 8, 2012
Messages
270 (0.06/day)
Location
Canada
System Name custom
Processor intel i7 9700
Motherboard asrock taichi z370
Cooling EK-AIO 360 D-RGB
Memory 24G Kingston HyperX Fury 2666mhz
Video Card(s) GTX 2080 Ti FE
Storage SSD 960GB crucial + 2 Crucial 500go SSD + 2TO crucial M2
Display(s) BENQ XL2420T
Case Lian-li o11 dynamic der8auer Edition
Audio Device(s) Asus Xonar Essence STX
Power Supply corsair ax1200i
Mouse MX518 legendary edition
Keyboard gigabyte Aivia Osmium
VR HMD PSVR2
Software windows 11
one word: driver
 
Joined
May 2, 2017
Messages
7,762 (2.70/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Joined
Jan 8, 2009
Messages
551 (0.09/day)
System Name AMD RyZen PC
Processor AMD RyZen 5950x
Motherboard ASUS Crosshair VIII Hero 570x WIFI
Cooling Custom Loop
Memory 64GB G.Skill Trident Z DDR4 3200 MHz 14C x4
Video Card(s) Evga 3080 TI
Storage Seagate 8TB + 3TB + 4TB + 2TB external + 512 Samsung 980
Display(s) LG 4K 144Hz 27GN950-B
Case Thermaltake CA-1F8-00M1WN-02 Core X71 Tempered Glass Edition Black
Audio Device(s) XI-FI 8.1
Power Supply EVGA 700W
Mouse Microsoft
Keyboard Microsoft
Software Windows 10 x64 Pro
If they can release it in the 2nd half, that would be awesome
 
Joined
May 2, 2017
Messages
7,762 (2.70/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
3.4 million PS5 units were sold in the first 4 weeks.
"Paper launch" is a nonsensical take on how the availability of new, anticipated products works.


We were talking about AMD.
NV is clearly discouraged from producing the 3080 (why bother with GA102 when the GA104-based 3070 sells for nearly as much).
I think the last time Mindfactory had several hundred of those, it took about 15 hours to sell them at somewhat above 1k Euros.
Console APUs have been in mass production since before summer 2020; stocks have had a lot more time to build up than GPUs. Them still being out of stock speaks of gross underestimations of demand there too. At least there are no miners using consoles ...

As for speaking of AMD vs. Nvidia, I still can't find any noticeable stock there either, so...

Also, what they are encouraged/discouraged to produce is meaningless - if a GPU hits shelves in Europe/the US today, that means its silicon started its journey through the fab 3-4 months ago at the very least. If Nvidia made adjustments immediately after launch to which die each wafer is made into, we'd be seeing those adjustments about now, if not a bit into the future. And there's no way they made any such adjustments that early. So your quasi-conspiracy theory has pretty wobbly legs to stand on at best.
 
Joined
Jan 14, 2019
Messages
15,041 (6.68/day)
Location
Midlands, UK
System Name My second and third PCs are Intel + Nvidia
Processor AMD Ryzen 7 7800X3D
Motherboard MSi Pro B650M-A Wifi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000 CL36
Video Card(s) PowerColor Reaper Radeon RX 9070 XT
Storage 2 TB Corsair MP600 GS, 4 TB Seagate Barracuda
Display(s) Dell S3422DWG 34" 1440 UW 144 Hz
Case Kolink Citadel Mesh
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply 750 W Seasonic Prime GX
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE Plasma
Based on what evidence exactly? I wouldn't expect a single-PCB RDNA3 twin-chip design with a CU count like the RX 6900 XT's to draw more power than two discrete RDNA2 RX 6900 XTs. There should be power-optimized refinements from RDNA2 to RDNA3, and just because it's using a pair of chips with the same CU count doesn't mean that on a single PCB they will be clocked the same; even if they were, and AMD optimized various aspects of the design for performance and efficiency, it would still draw less power. Even if AMD optimized nothing at all, a single-board design is going to be more efficient in practice. The more challenging aspect would be cooling it, but they'll probably make it water-cooled, or at least use a 3-slot cooler rather than a 2-slot design. I don't know if they'd go with a 4-slot cooler; they certainly could, but if they don't increase clock speeds while making RDNA3 more power efficient, they might not need to. Just put a vapor chamber on both sides of the PCB, with four U-shaped heat pipes spaced out along the PCB's length and two blower fans on each side of the PCB at the rear, exhausting all the heat. That would push plenty of heat outside the case. The heat pipes themselves could be filled with a bit of liquid metal, if they aren't already done that way these days.
Why would I need evidence to support my speculation that is based on a news article that is also based on speculation? I am not stating truths here, merely guessing based on current GPU trends.

Clocking different cores of a CPU differently makes a lot of sense, as they're usually working on different tasks. Clocking two chiplets of the same GPU differently sounds like a terrible idea to me. Mismatched SLI never worked, and there are plenty of videos on YouTube of mismatched Crossfire setups performing worse than single cards in some games. Chiplets are on the same die, have the same memory allocation, are connected to the same display; they essentially perform together as one chip. They can't be clocked differently, otherwise the shader processors of current GPUs could be clocked differently too. But again, I'm merely speculating and we'll see what the future brings.

Let's just hope those people who commented about RDNA 3's 5 nm process being a lot more energy efficient than 7 nm are right. :) I have no desire for 300 W+ cards either PSU or cooling-wise.
 
Joined
May 2, 2017
Messages
7,762 (2.70/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Why would I need evidence to support my speculation that is based on a news article that is also based on speculation? I am not stating truths here, merely guessing based on current GPU trends.

Clocking different cores of a CPU differently makes a lot of sense, as they're usually working on different tasks. Clocking two chiplets of the same GPU differently sounds like a terrible idea to me. Mismatched SLI never worked, and there are plenty of videos on YouTube of mismatched Crossfire setups performing worse than single cards in some games. Chiplets are on the same die, have the same memory allocation, are connected to the same display; they essentially perform together as one chip. They can't be clocked differently, otherwise the shader processors of current GPUs could be clocked differently too. But again, I'm merely speculating and we'll see what the future brings.

Let's just hope those people who commented about RDNA 3's 5 nm process being a lot more energy efficient than 7 nm are right. :) I have no desire for 300 W+ cards either PSU or cooling-wise.
I think you misread what they said about clocks: I don't think they meant we might see two dice on the same board with different clocks, but that we shouldn't expect to see dice clocked as high when we have two on one board as when there's only one (like the 6900 XT). Given that power draw increases roughly by the square of voltage, dropping voltage even 100mV can make a huge difference, so a small-ish downclock can definitely make a dual-die solution stay at or below 300W. You could probably run 2x80CU RDNA2 7nm at ~1800-2000MHz at 300W if you tune the voltages ever so slightly.
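A quick illustration of that voltage-squared point (purely hypothetical clock, voltage, and per-die power figures; none of these are measured 6900 XT values):

Code:
# Dynamic power scales roughly with frequency * voltage^2, so a modest
# downclock plus undervolt lets two dice fit close to a single-die power budget.
ref_power = 255      # W, assumed GPU-core share of a 300 W single-die board
ref_clock = 2250     # MHz, assumed single-die boost clock
ref_volt = 1.175     # V, assumed stock voltage

new_clock = 1800     # MHz, the "small-ish downclock"
new_volt = 0.950     # V, roughly what such a clock might need

per_die = ref_power * (new_clock / ref_clock) * (new_volt / ref_volt) ** 2
print(f"per die: ~{per_die:.0f} W, two dice: ~{2 * per_die:.0f} W plus VRAM/board power")

That lands around 265-270 W for the two dice; add a few tens of watts for GDDR6 and board losses and you end up near the 300 W figure mentioned above.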

Oh, and chiplets are not on the same die. That's kind of the definition of MCM/chiplet architectures: multiple chips/chiplets/dice (those terms are nearly, though not quite, interchangeable) on the same substrate or package.
 
Joined
Mar 21, 2016
Messages
2,627 (0.80/day)
I think you misread what they said about clocks: I don't think they meant we might see two dice on the same board with different clocks, but that we shouldn't expect to see dice clocked as high when we have two on one board as when there's only one (like the 6900 XT). Given that power draw increases roughly by the square of voltage, dropping voltage even 100mV can make a huge difference, so a small-ish downclock can definitely make a dual-die solution stay at or below 300W. You could probably run 2x80CU RDNA2 7nm at ~1800-2000MHz at 300W if you tune the voltages ever so slightly.

Oh, and chiplets are not on the same die. That's kind of the definition of MCM/chiplet architectures: multiple chips/chiplets/dice (those terms are nearly, though not quite, interchangeable) on the same substrate or package.
This is what I was implying: different chiplet clock speeds between single monolithic and MCM GPU designs. The MCM GPUs could use lower-grade chips, but at lower voltages and with less heat to dissipate. Alternatively, they might be better chips that simply spread heat better across the heat spreader, though that's the less likely of the two. AMD could maximize profit by combining lower-grade chiplets in MCM parts and saving the best-binned chips for monolithic single-chiplet GPUs. I imagine eventually they will all be MCM GPUs, or all but the highest-binned monolithic chips will be. Outside of the gaming market, though, I think AMD will pair up premium, well-binned chips for the other, more lucrative markets.

Let's just hope those people who commented about RDNA 3's 5 nm process being a lot more energy efficient than 7 nm are right. :) I have no desire for 300 W+ cards either PSU or cooling-wise.
It'll definitely offer the possibility of reduced power consumption even with higher performance, but how much comes down to AMD's design decisions about the ideal trade-off between the two. It'll probably be similar to the 12 nm to 7 nm transition, but maybe a bit less pronounced. It's tough to say, though: if it lets them physically arrange and squeeze more chips underneath a heat spreader, that changes things a lot, and power could end up higher as a byproduct.

As far as the "based on what evidence exactly" thing goes, I just meant: what makes you feel that way, aside from a gut feeling? I personally think it can make things cheaper, because they can pair up (or combine more of) lower-quality, smaller, more imperfect chips that together require less voltage and put out less heat than a monolithic chip of similar circuitry would. It also lets them dissipate heat better across the heat spreader, which of course is an advantage. It has the potential to offer more performance, better efficiency, and lower cost, plus it makes binning more flexible as well. It's actually what AMD needs to better compete with Nvidia, who often trounce them on SKU offerings within a GPU architectural generation. Nvidia routinely has way more chips diced and binned into more GPU brackets, covering a wider swath of performance. They have a tendency to squeeze AMD on both sides of competing GPU offerings, since they've got a bigger R&D budget and aren't competing against Intel simultaneously to the same degree.
 