Thursday, December 12th 2019
AMD Radeon RX 5600 Series SKUs Feature 6GB and 8GB Variants
AMD's Radeon RX 5600-series could see the company take on the top-end of NVIDIA's GeForce 16-series, such as the GTX 1660 Super and the GTX 1660 Ti. A report from earlier this month pegged a December 2019 product announcement for the RX 5600-series and subsequent availability in the weeks following. Regulatory filings by AMD AIB (add-in board) partners with the Eurasian Economic Commission (EEC) shed more light on the product differentiation within the RX 5600 series. The filings reveal that the RX 5600 and RX 5600 XT feature 6 GB and 8 GB sub-variants.
The regulatory filing by ASUS references products across its ROG Strix, TUF Gaming, and Dual lines of graphics cards. As mentioned in the older report, we expect AMD to carve the RX 5600 series out of the larger "Navi 10" silicon by disabling more RDNA compute units than it does for the RX 5700, and by narrowing the GDDR6 memory bus to 192-bit for the 6 GB variants. AMD has an opportunity to harvest "Navi 10" chips down to stream processor counts such as 1,792 (28 CUs) or 2,048 (32 CUs). It also has the opportunity to use cost-effective 12 Gbps GDDR6 memory chips.
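For a sense of scale, here is a back-of-the-envelope sketch of what those speculated configurations would work out to. The 28/32 CU counts and the 192-bit, 12 Gbps memory setup are the report's speculation, not confirmed specifications:

def stream_processors(compute_units, sps_per_cu=64):
    # RDNA packs 64 stream processors into each compute unit
    return compute_units * sps_per_cu

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # memory bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8
    return bus_width_bits * data_rate_gbps / 8

for cus in (28, 32):
    print(cus, "CUs ->", stream_processors(cus), "stream processors")   # 1792, 2048
print(bandwidth_gbs(192, 12))   # 288.0 GB/s for the speculated 192-bit, 12 Gbps setup
print(bandwidth_gbs(256, 14))   # 448.0 GB/s on the RX 5700's full 256-bit, 14 Gbps bus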
Source:
WCCFTech
27 Comments on AMD Radeon RX 5600 Series SKUs Feature 6GB and 8GB Variants
At least later today we can get a glimpse of 16 Gb GDDR6 chips.
;)
If so, a 6 GB variant will also need to go down to 48 ROPs, which is probably only possible when one whole shader array gets shut down, including the prim-unit etc.
If that chip is a cut-down Navi 10, you get three arrays of 5 WGPs (dual compute units) each, leading to 1,920 cores maxed out.
We will see if that's true next year.
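A quick sketch of that arithmetic, assuming the poster's premise that full Navi 10 has four shader arrays of 5 WGPs each and one whole array gets fused off:

WGPS_PER_ARRAY = 5
CUS_PER_WGP = 2           # each WGP is a dual compute unit
SPS_PER_CU = 64

active_arrays = 3         # one of the four arrays disabled to get down to 48 ROPs
wgps = active_arrays * WGPS_PER_ARRAY    # 15 WGPs
cus = wgps * CUS_PER_WGP                 # 30 CUs
print(cus * SPS_PER_CU)                  # 1920 stream processors, the "maxed out" figure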
Let's wait for reviews & an official launch though :p
e: And since it's AMD, it wouldn't be a surprise if the first 6 GB models are physically 8 GB cards, just artificially slowed down via BIOS.
They were braindead addicted to the mining hype and did nearly nothing to make any big efficiency gains.
So in 2019/2020 you are fighting against the efficiency of NVIDIA's chips from 2016, which is a pity if you ask me.
You gotta realize at some point that there is a difference between an architecture which is inefficient at its core and a GPU which is inefficient. Fermi was a great example of an architecture that was very inefficient in all its forms. Vega (the architecture), for instance, wasn't inefficient, as shown by the fact that you can have it in APUs that sip power, while Vega 56 and 64 (the GPUs) were indeed very inefficient GPUs.
I'm looking for an upgrade from my RX 570 in that price range, but so far there's no alternative, and I would rather not buy used this time since I want to keep the new card for years.
The Fury X, which is the basis of Polaris, was aimed at games and not maths. Navi currently follows this approach, which makes it meh at mining. It's also the NVIDIA approach for basically everything but Volta, really. NV gaming cards get strapped with brutal FP32-to-FP64 dividers.
The R9 290X, which gave rise to Vega, was more meant for professional stuff and had great maths. Great for professional work, meh for games currently, unless you actually use the maths for RTRT or something. It's great for mining and maths.
The R9 290X smacks a Fury X at FP64 and mining. The Fury X desperately needed 8 GB of VRAM though.
7 nm EUV brings more, at least according to declared specs.
I would love to get my hands on a dual 64-CU V20 card that uses the fancy interconnect to make the two GPUs act as one. Sadly that's huge money.
I mean even the Fury X2 or Radeon Pro Duo still commands a hefty price tag used, and it's CrossFire.
Interestingly, I think it's a bit telling that the Xbox One/S and PS4 variants use basically a Polaris-based GPU and either can't do 4K or seriously struggle with it and need to use 'optimizations', while the Xbox One X uses a GPU based on the R9 290/Vega line and, as Sony whined, 'brute forces 4K'.
I think having piles of compute power may actually be more future-proof. Look at Crytek and their RTRT software demo: they used a Vega 56 and got decent results. The V20 core has significantly more computational power even in consumer dress.
FP64 performance (config listed as Shaders/TMUs/ROPs/CUs):
V10 - Vega 64: 0.786 TFLOPS = 4096/256/64/64
Fastest FP64: 0.854 TFLOPS - water-cooled V64
V20 - V2/VII: 3.360 TFLOPS = 3840/240/64/60
Fastest FP64: 7.373 TFLOPS - Instinct MI60
The V2 gets stuck with less compute hardware and a doubled divider at 1:4, versus the pro cards getting 1:2 for FP64. Seems V10's FP64 divider couldn't go past 1:16.
For comparison's sake, the fastest Navi GPU and the fastest 2080 Ti...
5700 XT PowerColor Liquid Devil: 0.662 TFLOPS
2080 Ti Zotac AMP Extreme: 0.494 TFLOPS
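As a rough check, all of those figures fall out of the usual throughput formula: FP64 rate = shaders x 2 FLOPs per clock (FMA) x clock / FP64 divider. A small sketch follows; the clocks are each card's rated boost clock assumed for illustration, not quoted in the thread:

def fp64_tflops(shaders, clock_ghz, divider):
    # shaders * 2 FLOPs per clock (fused multiply-add), cut down by the FP64 rate divider
    return shaders * 2 * clock_ghz / divider / 1000

print(round(fp64_tflops(4096, 1.536, 16), 3))   # 0.786 - Vega 64 at 1:16
print(round(fp64_tflops(3840, 1.750, 4), 3))    # 3.36  - V2 / Radeon VII at 1:4
print(round(fp64_tflops(4096, 1.800, 2), 3))    # 7.373 - Instinct MI60, full chip at 1:2
print(round(fp64_tflops(2560, 2.070, 16), 3))   # 0.662 - 5700 XT Liquid Devil at 1:16
print(round(fp64_tflops(4352, 1.815, 32), 3))   # 0.494 - 2080 Ti AMP Extreme at 1:32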
So in summary, the V20 stuff is meant to crunch numbers very quickly. Too bad AMD locked the BIOS, so you can't try to flash-unlock the V2 cores into fully functional ones like in the past.
I will say though, the consumer air cooler for the V2 is probably the best stock BBA Radeon air cooler ever, and the 50th AE one just looks sexy to me.