Thursday, December 12th 2019

AMD Radeon RX 5600 Series SKUs Feature 6GB and 8GB Variants

AMD's Radeon RX 5600-series could see the company take on the top-end of NVIDIA's GeForce 16-series, such as the GTX 1660 Super and the GTX 1660 Ti. A report from earlier this month pegged a December 2019 product announcement for the RX 5600-series and subsequent availability in the weeks following. Regulatory filings by AMD AIB (add-in board) partners with the Eurasian Economic Commission (EEC) shed more light on the product differentiation within the RX 5600 series. The filings reveal that the RX 5600 and RX 5600 XT feature 6 GB and 8 GB sub-variants.

The regulatory filing by ASUS references products across its ROG Strix, TUF Gaming, and Dual lines of graphics cards. As mentioned in the older report, we expect AMD to carve the RX 5600 series out of the larger "Navi 10" silicon by disabling more RDNA compute units than on the RX 5700 and narrowing the GDDR6 memory bus to 192-bit for the 6 GB variants. AMD has the opportunity to harvest "Navi 10" chips down to stream processor counts such as 1,792 (28 CUs) or 2,048 (32 CUs), and to use cost-effective 12 Gbps GDDR6 memory chips.
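The arithmetic behind those configurations can be sketched quickly; this is a back-of-the-envelope check using the standard RDNA figure of 64 stream processors per compute unit and the usual GDDR6 bandwidth formula, not anything from the filings themselves:

```python
# Back-of-the-envelope math for the cut-down "Navi 10" configurations above.
# RDNA packs 64 stream processors per compute unit; GDDR6 peak bandwidth is
# (bus width in bytes) x (per-pin data rate).

def stream_processors(compute_units: int) -> int:
    """Stream processor count for a given RDNA CU count (64 SPs per CU)."""
    return compute_units * 64

def gddr6_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(stream_processors(28))          # 1792
print(stream_processors(32))          # 2048
print(gddr6_bandwidth_gb_s(192, 12))  # 288.0 GB/s with 12 Gbps chips
```

A 192-bit bus with 12 Gbps chips thus lands at 288 GB/s, versus the RX 5700's 448 GB/s on its full 256-bit bus with 14 Gbps chips.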
Source: WCCFTech

27 Comments on AMD Radeon RX 5600 Series SKUs Feature 6GB and 8GB Variants

#26
EarthDog
gamefoo21: Yeah, AMD shouldn't have marketed the V2 so heavily on gaming. Its saving grace, in my opinion, is that the minimum frame rates are noticeably better, but most people only care about what it hits at max, and compared to a 5700 XT it's not great.

I would love to get my hands on a dual 64CU V20 card, that uses the fancy interconnect to make the two GPUs act as one. Sadly that's huge money.

I mean, even the Fury X2 or Radeon Pro Duo still commands a hefty price tag used, and it's CrossFire.

Interestingly, I think it's a bit telling that the Xbox One/S and PS4 variants use what is basically a Polaris-based GPU and either can't do 4K or seriously struggle with it and need 'optimizations', while the Xbox One X uses a GPU based on the R9 290/Vega line and, as Sony whined, 'brute forces 4K'.

I think having piles of compute power may actually be more future-proof. Look at Crytek and their RTRT software demo: they used a Vega 56 and got decent results. The V20 core has significantly more computational power, even in consumer dress.

FP64 perf = Shaders/TMUs/ROPs/CUs
V10 - Vega 64: 0.786 TFLOPS = 4096/256/64/64
Fastest V10 FP64: 0.854 TFLOPS - water-cooled Vega 64
V20 - Radeon VII: 3.360 TFLOPS = 3840/240/64/60
Fastest V20 FP64: 7.373 TFLOPS - Instinct MI60

The Radeon VII gets stuck with less compute hardware and a doubled FP64 divisor, at 1:4 versus the pro cards' 1:2. It seems V10's FP64 divisor couldn't go past 1:16.

For comparison's sake, the fastest Navi GPU and the fastest 2080 Ti...

5700 XT PowerColor Liquid Devil: 0.662 TFLOPS
2080 Ti Zotac AMP Extreme: 0.494 TFLOPS
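Those peak figures fall out of shader count, boost clock, and the FP64 rate divisor. A minimal sketch reproducing the Vega-family numbers above (the boost clocks are approximate reference values I'm assuming, so small deviations from the quoted figures are expected):

```python
# Reproducing the quoted peak-FP64 numbers.
# FP32 peak = shaders x 2 ops/clock (FMA) x boost clock in GHz (gives GFLOPS);
# FP64 peak = FP32 peak / rate divisor. Boost clocks are assumptions.

def fp64_tflops(shaders: int, boost_ghz: float, fp64_divisor: int) -> float:
    """Peak FP64 throughput in TFLOPS."""
    return shaders * 2 * boost_ghz / fp64_divisor / 1000

cards = [
    ("Vega 64 (V10, 1:16)",      4096, 1.536, 16),
    ("Radeon VII (V20, 1:4)",    3840, 1.750, 4),
    ("Instinct MI60 (V20, 1:2)", 4096, 1.800, 2),
]
for name, sp, clk, div in cards:
    print(f"{name}: {fp64_tflops(sp, clk, div):.3f} TFLOPS FP64")
```

Navi and Turing use far larger divisors (1:16 and 1:32 respectively), which is why even heavily overclocked gaming cards land well under 1 TFLOPS of FP64.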

So, in summary, the V20 parts are meant to crunch numbers very quickly. Too bad AMD locked the BIOS, so you can't try to flash-unlock the Radeon VII cores into fully functional ones like in the past.

I will say, though, that the consumer air cooler for the Radeon VII is probably the best stock BBA Radeon air cooler ever, and the 50th Anniversary Edition just looks sexy to me.
I game... the details don't really matter to me. Good minimums and a good average (which doesn't take a compute-heavy card) FTW.
#27
gamefoo21
Yeah, AMD screwed up pushing a render beast as a gaming card.