
Sapphire Reps Leak Juicy Details on AMD Radeon Navi

Joined
Sep 17, 2014
Messages
22,431 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Aren't you all correct here about AIOs?

Yes, it's possible to air cool 300W just fine in a PCIe slot form factor.
Yes, WC might do it with less surface area and a lower maximum temp under load, at a higher build cost and a higher failure rate.

The reason you see WC on a 1070 is that AIOs are all the rage for some, and the midrange can be sold at a premium. They sell. The reason you saw it on the Fury X is that AMD considered it the best option given the tradeoffs versus an air cooler (bulk- and weight-wise, even despite the higher cost) - so that does indicate that WC was related to high power consumption. Then, with the Radeon VII, they showed us they can do the air option just fine as well with a large triple-fan setup - and it's likely cost played a major role there, because the margin on that card isn't great.

The reason we don't see WC on the low end is that there is no way you will sell budget cards at a premium price.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
More like, that's what you need to believe.
Because surely you have noticed there's a dearth of water cooled RX 560 or GT 1030 cards.
Not to sound overly snide, but you do know there are wattages in between 50 and 275, right? As mentioned above, there have been water cooled GTX 1070 cards (150W), and there are plenty of water cooled RTX 2070 cards (175W). In other words, partner cards with AIOs are in no way necessarily proof of high power consumption, just that the card is in a high enough price bracket where "premium cooling" allows AIB partners to demand premium pricing.

And as @Vayra86 pointed out above: low-end cards don't sell if they're too expensive. Sticking a $70 AIO on a $200 RX 580 doesn't make sense, but it does on a $500 RTX 2070, even though they're roughly the same wattage, as the cost of the cooler then represents a much smaller percentage of the total price, and that market segment is generally more open to "premium cooling".
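To put numbers on that, here's a quick back-of-the-envelope sketch using the prices from this thread (the $70 AIO figure is from the post above; the helper name is illustrative, not from any real pricing data):

```python
# Back-of-the-envelope: cooler cost as a share of the total card price.
def cooler_share(card_price: float, cooler_price: float) -> float:
    """Cooler's share of the combined price, in percent."""
    return 100.0 * cooler_price / (card_price + cooler_price)

AIO_COST = 70.0  # hypothetical $70 AIO, per the post above
for name, price in [("RX 580", 200.0), ("RTX 2070", 500.0)]:
    share = cooler_share(price, AIO_COST)
    print(f"{name}: ${price:.0f} card + ${AIO_COST:.0f} AIO -> {share:.0f}% of the total")
# RX 580: cooler is ~26% of the total price; RTX 2070: ~12%.
```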

So there is a God-given rule that they need to name it in a specific way? GCN 5 is drastically different from GCN 1 in pretty much every way; they are worlds apart both in feature set and in microarchitectural differences that change the clocks/power etc. It's a label that they may choose to keep using or not; it doesn't mean anything in particular if they do.
No, but it does make sense to keep the same name when you haven't changed the fundamentals of a chip architecture - changing it would be very confusing for everyone involved, particularly the people writing drivers for the hardware. And as pointed out above, GCN has not been fundamentally changed since its inception: it has been iterated upon, tweaked and upgraded, expanded, had features added - but the base architecture is still roughly the same and works within the same framework. Unlike, say, Nvidia's transition from Kepler to Maxwell, where driver compatibility fell off a cliff due to major architectural differences.
 
Joined
May 8, 2018
Messages
1,568 (0.66/day)
Location
London, UK
Navi is not Vega. Polaris is a good example of how efficient AMD can be when they want to; Vega, in my book, is terrible on power consumption, while Polaris is great, and Navi should follow Polaris.
 
Joined
Sep 9, 2013
Messages
535 (0.13/day)
System Name Can I run it
Processor delidded i9-10900KF @ 5.1Ghz SVID best case scenario +LLC5+Supercool direct die waterblock
Motherboard ASUS Maximus XII Apex 2801 BIOS
Cooling Main = GTS 360 GTX 240, EK PE 360,XSPC EX 360,2x EK-XRES 100 Revo D5 PWM, 12x T30, AC High Flow Next
Memory 2x16GB TridentZ 3600@4600 16-16-16-36@1.61V+EK Monarch, Separate loop with GTS 120&Freezemod DDC
Video Card(s) Gigabyte RTX 3080 Ti Gaming OC @ 0.762V 1785Mhz core 20.8Gbps mem + Barrow full cover waterblock
Storage Transcend PCIE 220S 1TB (main), WD Blue 3D NAND 250GB for OC testing, Seagate Barracuda 4TB
Display(s) Samsung Odyssey OLED G9 49" 5120x1440 240Hz calibrated by X-Rite i1 Display Pro Plus
Case Thermaltake View 71
Audio Device(s) Q Acoustics M20 HD speakers with Q Acoustics QB12 subwoofer
Power Supply Thermaltake PF3 1200W 80+ Platinum
Mouse Logitech G Pro Wireless
Keyboard Logitech G913 (GL Linear)
Software Windows 11
AdoredTV falls flat on his face again.
 

bug

Joined
May 22, 2015
Messages
13,755 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Not to sound overly snide, but you do know there are wattages in between 50 and 275, right? As mentioned above, there have been water cooled GTX 1070 cards (150W), and there are plenty of water cooled RTX 2070 cards (175W). In other words, partner cards with AIOs are in no way necessarily proof of high power consumption, just that the card is in a high enough price bracket where "premium cooling" allows AIB partners to demand premium pricing.

I'm guessing we could spin this a million different ways.
What I know right now is:
1. Cards with water cooling sport above-average power draw.
2. Until now, GCN didn't do TBR (tile-based rasterization), so its power draw was well above Nvidia's.
3. The first glimpse we have at Navi is apparently water cooled.

People keep hoping for a Zen moment for GPUs; I keep seeing Bulldozer iterations...
 
Joined
May 21, 2019
Messages
12 (0.01/day)
Uhm, no. It's a core architecture, which AMD has iterated on since they abandoned the previous TeraScale architecture. There are many variants, but they share a core framework and a lot of functionality. No GCN variant is fundamentally different from any other - just improved upon, added features to, etc. That's why AMD's early GCN cards have had such excellent longevity.

It's more of an ISA than a "core architecture". GCN is a quad-SIMD design issuing one instruction across the set, usually tasked in 64-thread groups. AMD's "next-gen" architecture still looks similar to GCN and even executes similarly to the current ISA, but has moved to VLIW2 (Super SIMD) and has drastically reworked CU clusters and caches. It probably won't be called GCN though, simply because AMD wants to retire that nomenclature. Vega was the largest change to GCN to date. Previously, ROPs used their own local cache, but the new tiling rasterizers need the ROPs connected to L2 to keep track of occluded primitives within pixels, to cull them and reuse data for immediate-mode tiling (hybrid raster). 2xFP16 is also useful in certain scenarios.
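For readers unfamiliar with that layout, here's a minimal sketch of the peak-throughput math it implies, assuming the standard four SIMD16 units per CU (constants per public GCN documentation; variable names are mine, not AMD's):

```python
# Peak-throughput math for one GCN compute unit: four 16-lane SIMD units,
# with a 64-thread wavefront executing on a single SIMD16 over four cycles.
SIMDS_PER_CU = 4
LANES_PER_SIMD = 16
WAVEFRONT = 64

cycles_per_wave = WAVEFRONT // LANES_PER_SIMD           # 4 cycles per instruction
flops_per_cu_clock = SIMDS_PER_CU * LANES_PER_SIMD * 2  # 128 (x2 for FMA)

# Sanity check against a real part: Vega 64, 64 CUs at ~1.546 GHz boost
print(64 * flops_per_cu_clock * 1.546)  # ~12665 GFLOPS, the usual quoted figure
```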

Vega and Turing both have new small geometry shaders that replace certain tessellation stages. In Vega, they're called primitive shaders, and in Turing, simply, mesh shaders. AMD is waiting for standardization in major APIs, while Nvidia seems fine with using a proprietary API extension to call them. Both types will further speed small geometry creation to enhance game realism, while AMD can also use them to speed geometry culling using their shader arrays to help their geometry front-ends.

Nvidia's basic GPC design (mini-GPUs within an interconnect fabric) dates back to Fermi; Kepler fixed many of Fermi's shortcomings, but Maxwell was the one to really propel it forward in perf/watt, and not just by moving to immediate-mode tiling rasterizers. Nvidia has also iterated on their GPC architecture, but in a much more aggressive manner (it helps to have a large R&D budget). Turing is still a VLIW2 GPC design*, using up to 6 GPCs in TU102. 7nm can extend that up to 8 GPCs when Nvidia moves to Ampere, but with RT taking priority now, Nvidia may just dedicate more die space to accelerating BVH traversal and intersection, trying to reduce ray tracing's very random hits to VRAM, and of course making hybrid rendering, as a whole, more efficient and performant.

But, both AMD's GCN (2011) and Nvidia's GPC (2010) designs have been around for quite some time.

* Turing has to execute 2 SMs concurrently due to INT32 taking up 64 of the 128 cores within an SM. So, using 2 SMs, 128 FP32/CUDA cores are tasked (a warp is still 32 threads), similarly to Pascal and prior, thereby retaining compatibility.
 
Joined
Oct 1, 2006
Messages
4,931 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Navi is not Vega. Polaris is a good example of how efficient AMD can be when they want to; Vega, in my book, is terrible on power consumption, while Polaris is great, and Navi should follow Polaris.
Really?
https://www.techpowerup.com/reviews/EVGA/GTX_1650_SC_Ultra_Black/28.html
Vega is more efficient than Polaris.
Polaris is a prime example of AMD trying to clock a GPU way past its efficiency curve.
The original RX 400 series was okay on performance per watt, but after the 1060 released, AMD tried to get that little bit of extra performance for a rather large TDP increase with their RX 580 refresh.
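To illustrate why pushing past the curve costs so much, here's a toy perf/W model under textbook CMOS assumptions; the voltages below are made up for illustration, and only the boost clocks (RX 480 ~1266 MHz, RX 580 ~1340 MHz) are real:

```python
# Toy model of the "efficiency curve": dynamic power scales roughly with
# C * V^2 * f, while performance scales roughly with f alone, so perf/W
# collapses to ~1/V^2 - higher clocks that need more voltage hurt badly.
def perf_per_watt(freq_ghz: float, volts: float) -> float:
    perf = freq_ghz                  # performance ~ clock
    power = volts ** 2 * freq_ghz    # classic dynamic-power approximation
    return perf / power              # reduces to 1 / V^2

sweet_spot = perf_per_watt(1.266, 1.00)  # hypothetical RX 480-like point
pushed     = perf_per_watt(1.340, 1.15)  # ~6% more clock at (assumed) higher voltage
print(f"{(1 - pushed / sweet_spot) * 100:.0f}% worse perf/W")  # ~24% worse
```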
 
Last edited:
Joined
Aug 6, 2017
Messages
7,412 (2.78/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
You are assuming that Navi as an architecture is slower than Turing on the basis of Vega being slower than Turing?
Lol, of course it's slower if the full chip (56 CUs, is it?) is targeting TU106.
 
Last edited:
Joined
Jan 24, 2011
Messages
180 (0.04/day)
Polaris is a prime example of AMD trying to clock a GPU way past its efficiency curve.
The original RX 400 series was decent on performance per watt, but after the 1060 released, AMD tried to get that little bit of extra performance for a rather large TDP increase with their RX 580 refresh.
And Vega is not clocked past its efficiency curve?
 
Joined
Feb 19, 2019
Messages
324 (0.15/day)
Vega is efficient, it's just not as fast as NV's parts, so they had to compensate with clock speed and thus moved out of the efficiency curve - the same issue as Intel's parts pushing clocks toward 5GHz at a "95W" TDP with an actual power draw of 150W+.
 
Joined
Feb 6, 2018
Messages
74 (0.03/day)
Processor i7 2600
Motherboard ASUS H67 rev. B3
Cooling EVO 212
Memory Kingston 8GB
Video Card(s) MSi 7870 GHz Edt.
Storage EVO 850 250GB + WD Black 1TB
Display(s) Dell U2412M
Case ASUS Essentio CG8250
Power Supply Delta 550W
Mouse Logitech
Keyboard Logitech
Software Windows 7
As much as I love AMD, this Navi looks like a POS.
1. Power-hungry
2. Sound-hungry
3. Perf.-hungry?!
 
Joined
Oct 1, 2006
Messages
4,931 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
And Vega is not clocked past its efficiency curve?
Vega is as well, but Vega was designed to reach a higher clock speed than Polaris in the first place.
Therefore it (at least Vega 56) wasn't as far off the efficiency curve as Polaris ended up.
But you do see the same crazy power draw happening with the AIO version of Vega 64, where performance per watt dropped off a cliff.
 
Last edited:

bug

Joined
May 22, 2015
Messages
13,755 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
It's more of an ISA than a "core architecture". GCN is a quad-SIMD design issuing one instruction across the set, usually tasked in 64-thread groups. AMD's "next-gen" architecture still looks similar to GCN and even executes similarly to the current ISA, but has moved to VLIW2 (Super SIMD) and has drastically reworked CU clusters and caches. It probably won't be called GCN though, simply because AMD wants to retire that nomenclature. Vega was the largest change to GCN to date. Previously, ROPs used their own local cache, but the new tiling rasterizers need the ROPs connected to L2 to keep track of occluded primitives within pixels, to cull them and reuse data for immediate-mode tiling (hybrid raster). 2xFP16 is also useful in certain scenarios.

Vega and Turing both have new small geometry shaders that replace certain tessellation stages. In Vega, they're called primitive shaders, and in Turing, simply, mesh shaders. AMD is waiting for standardization in major APIs, while Nvidia seems fine with using a proprietary API extension to call them. Both types will further speed small geometry creation to enhance game realism, while AMD can also use them to speed geometry culling using their shader arrays to help their geometry front-ends.

Nvidia's basic GPC design (mini-GPUs within an interconnect fabric) dates back to Fermi; Kepler fixed many of Fermi's shortcomings, but Maxwell was the one to really propel it forward in perf/watt, and not just by moving to immediate-mode tiling rasterizers. Nvidia has also iterated on their GPC architecture, but in a much more aggressive manner (it helps to have a large R&D budget). Turing is still a VLIW2 GPC design*, using up to 6 GPCs in TU102. 7nm can extend that up to 8 GPCs when Nvidia moves to Ampere, but with RT taking priority now, Nvidia may just dedicate more die space to accelerating BVH traversal and intersection, trying to reduce ray tracing's very random hits to VRAM, and of course making hybrid rendering, as a whole, more efficient and performant.

But, both AMD's GCN (2011) and Nvidia's GPC (2010) designs have been around for quite some time.

* Turing has to execute 2 SMs concurrently due to INT32 taking up 64 of the 128 cores within an SM. So, using 2 SMs, 128 FP32/CUDA cores are tasked (a warp is still 32 threads), similarly to Pascal and prior, thereby retaining compatibility.
Hey, welcome to TPU.
Just so you know, informed, to-the-point posts are not the norm here. But this being your first, I won't report it ;)
 
Joined
Jan 24, 2011
Messages
180 (0.04/day)
Vega is as well, but Vega was designed to reach a higher clock speed than Polaris in the first place.
Therefore it wasn't as far off the efficiency curve as Polaris ended up.
And do you know Polaris's or Vega's actual efficiency curve? You can't really say it was at the clocks the RX 470 or RX 480 had, because if you underclocked those chips, they would most likely have a better performance/power ratio than at their default clocks. Then I could also claim they are past their efficiency curve at their default clocks.

BTW, comparing Polaris to Vega is unfair to begin with. Vega has more efficient HBM2 memory, but is also a more powerful GPU. Vega 64 (4096 SP, 256 TMU, 64 ROPs) has 10215-12665 GFLOPS vs the RX 570 (2048 SP, 128 TMU, 32 ROPs), which has 4784-5095 GFLOPS. Vega 64 is 114-149% more powerful on paper, but in reality it's only 97.5% faster than the RX 570 at 4K resolution.
If we really wanted to compare which one is more efficient, we would need a 32-36 CU version of Vega without HBM2.
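Those paper figures check out arithmetically; a rough sketch, assuming the usual GFLOPS = SPs × 2 × clock formula and base-to-boost clock ranges (helper name is mine):

```python
# Check the "on paper vs in reality" numbers quoted above.
def gflops(sp: int, clock_ghz: float) -> float:
    return sp * 2 * clock_ghz  # 2 FLOPs per SP per clock (FMA)

vega64 = (gflops(4096, 1.247), gflops(4096, 1.546))  # ~10215 .. ~12665 GFLOPS
rx570  = (gflops(2048, 1.168), gflops(2048, 1.244))  # ~4784 .. ~5095 GFLOPS

low  = vega64[0] / rx570[0] * 100 - 100   # ~ +114%
high = vega64[1] / rx570[1] * 100 - 100   # ~ +149%
print(f"on paper: +{low:.0f}% to +{high:.0f}%, measured at 4K: +97.5%")
```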
 
Joined
Aug 6, 2017
Messages
7,412 (2.78/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
The $399 Navi "Pro" is probably being designed with a performance target somewhere between the RTX 2060 and RTX 2070, so you typically pay $50 more than you would for an RTX 2060, for noticeably higher performance.




stronger than 2070

I hope no one here has short-term memory loss and believes what reps say.



[attached benchmark chart - updated May 19]



This has GDDR6, fewer CUs, lower clocks, and worse performance per CU than the Radeon VII, which beats the 2070 by 6%.
Stronger than the 2070? Yeah, right.

Not as much now it seems
Come on, let's not pretend that 90% of such channels cater to anything more than one or the other fanbase exclusively. "This video is nothing new from what you've already seen a hundred times" doesn't earn clicks. Look at the PCGH test above, or the one that computerbase.de recently updated too.
Worthless videos. But you go ahead and believe what they tell you. And don't forget to like and subscribe :)

They're gonna have to throw in one hell of a game bundle for people to defend this.
 
Last edited:
Joined
Oct 1, 2006
Messages
4,931 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
BTW, comparing Polaris to Vega is unfair to begin with. Vega has more efficient HBM2 memory, but is also a more powerful GPU. Vega 64 (4096 SP, 256 TMU, 64 ROPs) has 10215-12665 GFLOPS vs the RX 570 (2048 SP, 128 TMU, 32 ROPs), which has 4784-5095 GFLOPS. Vega 64 is 114-149% more powerful on paper, but in reality it's only 97.5% faster than the RX 570 at 4K resolution.
If we really wanted to compare which one is more efficient, we would need a 32-36 CU version of Vega without HBM2.
One thing you left out of that comparison is the geometry performance of Vega vs Polaris.
The four Shader Engine / Geometry Engine limit has become the Achilles' heel of GCN thus far.
 
Last edited:
Joined
Jan 24, 2011
Messages
180 (0.04/day)
One thing you left out of that comparison is the geometry performance of Vega vs Polaris.
The four Shader Engine / Geometry Engine limit has become the Achilles' heel of GCN thus far.
I just wanted to point out that big Vega loses a lot of performance, which in turn causes it to have a worse performance/W ratio than if it were smaller.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
As I've been saying for a while now, AMD got stuck with a rather serious problem when they maxed out the CU config of GCN with Fiji - it was competitive at the time, but left zero room to grow by adding CUs, so further improvements required pushing clocks past their sweet spot (in the meantime, Nvidia has increased their CUDA core count by a whopping 55% at the high end). Which gave us Vega. Not a bad arch update or bad GPUs overall, but they delivered a rather poor efficiency improvement considering the move from 28nm to 14nm - again, because GCN stopped AMD from adding more CUs, forcing them to squeeze clocks as high as possible out of the chips. Not to mention that this made it look like they were chasing Nvidia's clock speeds for no good reason, while both failing to match them and losing efficiency. A bit of a pile-up of bad consequences of an inherent architectural trait, sadly.

I would imagine an 80-CU Vega at ~1200MHz would perform amazingly, and do a decent job at perf/W too. If AMD had matched Nvidia's core count increase since 2015 (980 Ti/Fury X), we'd now have a 100CU/6400SP Vega card - and it's not hard to imagine that it would compete quite well with Nvidia's top-end cards even at low clocks and on 14nm. The die would be large, just like the Fury X's, so a compromise around 80-90 CUs and clocks in the 1300-1450MHz range might be better, but all in all, AMD is bottlenecked by being incapable of widening their chip designs, and this is what has truly been holding them back since 2015. Fingers crossed that the NG arch takes this into account by allowing ~unlimited core count scaling.
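A quick sanity check of that hypothetical, using the post's own numbers (the Fury X config is real; the scaled-up Vega is, of course, hypothetical):

```python
# Scale Fiji's shader count by Nvidia's ~55% high-end core growth since 2015.
FURY_X_SPS = 4096      # Fiji / Fury X: 64 CUs x 64 SPs per CU
NVIDIA_GROWTH = 1.55   # ~55% more CUDA cores at the high end since then

sps = FURY_X_SPS * NVIDIA_GROWTH
cus = sps / 64         # GCN packs 64 SPs per CU
print(f"{cus:.0f} CUs / {sps:.0f} SPs")  # ~99 CUs / ~6349 SPs, i.e. "100CU/6400SP"
```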
 
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It wasn't until very recently that AMD could make a 64 CU GPU under 500 mm². People weren't exactly thrilled with Vega as it was; making it even more expensive would have served no purpose. AMD's performance problems can't and shouldn't be solved by adding more CUs. Besides, I don't even think they could have made such a GPU feasible on GloFo's 14nm node, and TSMC's 7nm probably doesn't allow for huge dies at the moment either.
 
Joined
Jun 10, 2014
Messages
2,985 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
As I've been saying for a while now, AMD got stuck with a rather serious problem when they maxed out the CU config of GCN with Fiji
I've seen this claim over and over again, but has it been explicitly stated by AMD that GCN can't do more than 64 CUs/4096 SPs?
To my knowledge there is no architectural reason why it wouldn't be possible, but there is a very good reason why they don't do it: adding e.g. 50% more SPs would increase energy consumption by ~50% but only increase performance by ~20-30% at best, because a GPU with more clusters needs more powerful scheduling, and to maintain higher efficiency than its predecessor it would require more than 50% better scheduling. The problem for GCN has always been management of resources, and this is the reason why GCN has fallen behind Nvidia. GCN has plenty of computational power, just not the means to harness it.
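To make that perf/W argument concrete, a minimal sketch; the +50% power and ~+25% performance figures are the post's rough estimates (mid-point of its 20-30% range), not measurements from any real card:

```python
# Sketch of the scaling argument above: a wider GCN chip gains power
# faster than it gains performance, so perf/W goes down, not up.
base_perf, base_power = 1.00, 1.00   # today's 64 CU part, normalized
wide_perf, wide_power = 1.25, 1.50   # +50% SPs: +50% power, ~+25% perf

print(base_perf / base_power)  # 1.00 perf/W baseline
print(wide_perf / wide_power)  # ~0.83 perf/W: the wider chip is less efficient
```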
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I've seen this claim over and over again, but has it been explicitly stated by AMD that GCN can't do more than 64 CUs/4096 SPs?
To my knowledge there is no architectural reason why it wouldn't be possible, but there is a very good reason why they don't do it: adding e.g. 50% more SPs would increase energy consumption by ~50% but only increase performance by ~20-30% at best, because a GPU with more clusters needs more powerful scheduling, and to maintain higher efficiency than its predecessor it would require more than 50% better scheduling. The problem for GCN has always been management of resources, and this is the reason why GCN has fallen behind Nvidia. GCN has plenty of computational power, just not the means to harness it.
They haven't confirmed it, no (why would they? That'd be pretty much the same as saying "we can't compete in the high end until our next arch, no matter what!" - and that's bad business strategy), but three subsequent generations with roughly the same specs save for clocks, cache and other minor tweaks (in terms of real-world performance) do tell us something. What you're saying is not an argument against scaling out the die - after all, pushing clocks puts just as much demand on scheduling as making a wider design does. It might of course be that AMD would gain more by "rebalancing" their architecture, increasing the number of components other than SPs alone, but that's beside the point.

Also, Nvidia has demonstrated pretty well that your scaling numbers are on the pessimistic side. With a similar node shrink (28nm to 12nm) and two small-to-medium architecture updates, they've increased the CUDA core count by 55%, increased clocks by ~60% (at least, depending on whether you look at real-world boost or not), and kept power draw at the same level. Of course there are highly complex technical reasons why this works for Nvidia and not AMD, but claiming that AMD has deliberately chosen not to increase their CU count while their main competitor has increased theirs by 55% - and at the same time run off with the high-end GPU segment - sounds a bit like wishful thinking.
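For reference, the ~55% core-count figure in this exchange checks out against public flagship specs; a quick sketch:

```python
# Nvidia high-end CUDA core growth, 2015 -> 2018 (public specs).
GTX_980_TI = 2816    # Maxwell flagship CUDA cores, 2015
RTX_2080_TI = 4352   # Turing flagship CUDA cores, 2018

growth = RTX_2080_TI / GTX_980_TI * 100 - 100
print(f"+{growth:.0f}% CUDA cores at the high end")  # ~ +55%
```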

With all this being said, my Fury X is getting long enough in the tooth that I might still get one of these if they match or beat the 2070. But I'd really like for Arcturus(?) to arrive sooner rather than later.
 
Joined
Jun 10, 2014
Messages
2,985 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
…but claiming that AMD has deliberately chosen not to increase their CU count while their main competitor has increased theirs by 55% - and at the same time run off with the high-end GPU segment - sounds a bit like wishful thinking.
The fact remains that AMD has plenty of computational performance, while Nvidia manages to squeeze more gaming performance out of less theoretical performance, because AMD chose to focus on "brute force" performance rather than efficiency.
 
Joined
Feb 11, 2009
Messages
5,545 (0.96/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Auros Elite DDR4
Cooling Tuniq Tower 120 -> Custom Watercoolingloop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb MVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTCP case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
You would think people would welcome some competition of any kind... but no, instant negativity.
 