Tuesday, September 22nd 2020

AMD Radeon "Navy Flounder" Features 40CU, 192-bit GDDR6 Memory

AMD uses offbeat codenames such as the "Great Horned Owl," "Sienna Cichlid" and "Navy Flounder" to identify sources of leaks internally. One such upcoming product, codenamed "Navy Flounder," is shaping up to be a possible successor to the RX 5500 XT, the company's segment-leading 1080p product. According to ROCm compute code fished out by stblr on Reddit, this GPU is configured with 40 compute units, a big step up from the 22 of the RX 5500 XT, and features a 192-bit wide GDDR6 memory interface, wider than that card's 128-bit bus.

Assuming the RDNA2 compute unit on next-gen Radeon RX graphics processors retains the same 64 stream processors per CU as RDNA, we're looking at 2,560 stream processors for "Navy Flounder," compared to 5,120 for the 80-CU "Sienna Cichlid." The 192-bit wide memory interface gives AMD's product managers a high degree of segmentation flexibility for graphics cards under the $250 mark.
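
As a quick sanity check on those figures, here is the arithmetic behind them, assuming RDNA2 keeps RDNA's 64 stream processors per compute unit (AMD has not confirmed this for the new parts):

# Back-of-the-envelope stream-processor math, assuming 64 SPs per CU as on RDNA 1.
SP_PER_CU = 64

def stream_processors(compute_units: int) -> int:
    return compute_units * SP_PER_CU

print(stream_processors(40))  # "Navy Flounder" rumor -> 2560
print(stream_processors(80))  # "Sienna Cichlid" rumor -> 5120
print(stream_processors(22))  # Radeon RX 5500 XT -> 1408
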
Sources: VideoCardz, stblr (Reddit)

135 Comments on AMD Radeon "Navy Flounder" Features 40CU, 192-bit GDDR6 Memory

#126
InVasMani
Personally I'm not pinning my hopes on anything, nor am I expecting anything. We won't know anything definitively until the dust settles. The node shrink isn't the only metric to consider when looking at AMD and RDNA2; the improvements that can be made or realized are what I was alluding to. With that in mind, 30% is more achievable than 50%, without a doubt. We don't know anything about the clock speeds or how they might be handled and achieved; perhaps it's a short burst clock speed, only sustained briefly, similar to Intel's turbo, and perhaps not across all stream cores. We literally know just about nothing about it officially; AMD is being tight-lipped and playing its cards close.
Posted on Reply
#127
Valantar
InVasManiPersonally I'm not pinning my hopes on anything, nor am I expecting anything. We won't know anything definitively until the dust settles. The node shrink isn't the only metric to consider when looking at AMD and RDNA2; the improvements that can be made or realized are what I was alluding to. With that in mind, 30% is more achievable than 50%, without a doubt. We don't know anything about the clock speeds or how they might be handled and achieved; perhaps it's a short burst clock speed, only sustained briefly, similar to Intel's turbo, and perhaps not across all stream cores. We literally know just about nothing about it officially; AMD is being tight-lipped and playing its cards close.
It's true that we don't know anything about how these future products will work, but we do have some basic guidelines from the history of silicon manufacturing. For example, your comparison to Intel's boost strategy is misleading - in Intel's case, boost is a short-term clock speed increase that bypasses baseline power draw limits but must operate within the thermal and voltage stability limits of the silicon (otherwise it would crash, obviously). Thus, the only thing stopping the chip from operating at that clock speed all the time is power and cooling limitations, which is why desktop chips on certain motherboards and with good coolers can often run at these speeds 24/7. GPUs already do this - that's why they have base and boost speed specs - but no GPU has ever come close to 2.5 GHz with conventional cooling. RDNA 1 is barely able to exceed 2 GHz when overclocked on air cooling. It wouldn't matter whether a boost spec higher than this lasted for a short or a long period; it would crash. It would not be stable, no matter what. You can't bypass stability limits by shortening the time spent past those limits, as you can't predict when the crash will happen. So, reaching 2.5 GHz, no matter the duration, would mean exceeding the maximum stable clock of RDNA 1 by nearly 25%. Without a node change, just a tweaked node. Would that alone be possible? Sure. Not likely, but possible. But it would cost a lot of power, as we have seen from the changes Intel has made to their 14 nm node to reach their high clocks - higher clocks require higher voltages, which increase power draw.

The issue comes with the leaks also saying that this will happen at 150 W (170 W in other leaks), down from 225 W for a stock 5700 XT and more like 280 W for one operating at ~2 GHz. Given that power draw on the same node increases more than linearly as clock speeds increase, that would mean a massive architectural and node efficiency improvement on top of significant tweaks to the node to reach those clock speeds at all. This is where the "this isn't going to happen" perspective comes in, as the likelihood of both of these things coming true at the same time is so small as to render it impossible.
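
As a rough, purely illustrative sketch of that non-linear scaling, using the classic dynamic-power approximation P ≈ C·V²·f with invented voltage figures (real silicon adds static leakage on top):

# Dynamic power scales roughly with C * V^2 * f; the voltages below are made up for illustration.
def relative_power(f_base, v_base, f_new, v_new):
    return (f_new / f_base) * (v_new / v_base) ** 2

# Example: pushing from ~1.9 GHz at a hypothetical 1.05 V to ~2.5 GHz at a hypothetical 1.20 V
print(relative_power(1.9, 1.05, 2.5, 1.20))  # ~1.72x the power for ~1.32x the clock
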

And remember, these things stack, so we're not talking about the 30-50% numbers you're mentioning here (that's clock speed alone), we're talking an outright >100% increase in perf/W if the rumored numbers are all true. That, as I have said repeatedly, is completely unprecedented in modern silicon manufacturing. I have no problem thinking that AMD's promised "up to 50%" perf/W increase might be true (especially given that they didn't specify the comparison, so it might be between the least efficient RDNA 1 GPU, the 5700 XT, and an ultra-efficient RDNA 2 SKU similar to the 5600 XT). But even a sustained 50% improvement would be extremely impressive and would far surpass what can typically be expected without a node improvement. Remember, even Maxwell only beat Kepler by ~50% perf/W, so if AMD is able to match that it would be one hell of an achievement. Doubling that is out of the question. I would be very, very happy if AMD managed a 50% overall improvement, but even 30-40% would be very, very good.
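
To put rough numbers on how these factors stack, here is the same back-of-the-envelope arithmetic using the rumored figures from this thread (none of them confirmed; performance is crudely treated as proportional to clock at a fixed CU count):

# Implied perf/W jump if the rumored numbers were all true (hypothetical inputs).
perf_ratio  = 2.5 / 2.0    # rumored ~2.5 GHz vs ~2.0 GHz air-cooled RDNA 1 limit -> 1.25x
power_ratio = 150 / 280    # rumored 150 W vs ~280 W for a ~2 GHz RX 5700 XT -> ~0.54x
print(perf_ratio / power_ratio)  # ~2.3x perf/W, i.e. the ">100%" figure above
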
Posted on Reply
#129
BoboOOZ
Just give it one week, guys, we'll know more. I'm pretty sure some of the latest leaks are fakes.
Posted on Reply
#130
dragontamer5788
BoboOOZJust give it one week, guys, we'll know more. I'm pretty sure some of the latest leaks are fakes.
Next week is the Zen 3 announcement. End of October for the Navi 2x thing.
Posted on Reply
#131
BoboOOZ
dragontamer5788Next week is the Zen 3 announcement. End of October for the Navi 2x thing.
Indeed, but I'm hoping for some non-fake leaks to come out in the following days ;), as the official launch is still a bit far off.

Anyways, seeing how this launch feels so rushed by Nvidia, I don't think their marketing just got dumb all of a sudden; I think they have better info than us and they feel some pressure. I do not think that pressure comes from consoles, because 400 USD consoles do not compete with 800 USD graphics cards, so I think the pressure must come from RDNA2. But the numbers we've seen in the past 5 days really look too good to be true; it's not even a hype train anymore, it's a hype jet.
Posted on Reply
#132
dragontamer5788
BoboOOZIndeed, but I'm hoping for some non-fake leaks to come out in the following days ;), as the official launch is still a bit far off.

Anyways, seeing how this launch feels so rushed by Nvidia, I don't think their marketing just got dumb all of a sudden; I think they have better info than us and they feel some pressure. I do not think that pressure comes from consoles, because 400 USD consoles do not compete with 800 USD graphics cards, so I think the pressure must come from RDNA2. But the numbers we've seen in the past 5 days really look too good to be true; it's not even a hype train anymore, it's a hype jet.
The NVidia thing is just "anti-leak stupidity" IMO. They didn't give working drivers to any board partner pre-launch, because they were too worried about leaks. I mean, I understand the anti-leak mindset. But NVidia went too far, and it affected their launch partners and diminished the quality of their drivers (temporarily, so far, but... it's not a good look regardless).
Posted on Reply
#133
BoboOOZ
dragontamer5788The NVidia thing is just "anti-leak stupidity" IMO. They didn't give working drivers to any board partner pre-launch, because they were too worried about leaks. I mean, I understand the anti-leak mindset. But NVidia went too far, and it affected their launch partners and diminished the quality of their drivers (temporarily, so far, but... it's not a good look regardless).
Well, I agree that the driver/crash/POSCAP issue is mostly that, plus the fact that the Samsung node is not so awesome and they have pushed it very close to its maximum capabilities, unlike in past generations.

But there's very little availability of the cards anywhere, and for that I can think of only two reasons: either they launched in a big hurry without building up any stock (there must've been something like 20k cards total worldwide), or their yields are much lower than expected, but I feel they should've known what the yields were like since August, at least.
Posted on Reply
#134
dragontamer5788
BoboOOZBut there's very little availability of the cards anywhere, and for that I can think of only two reasons: either they launched in a big hurry without building up any stock (there must've been something like 20k cards total worldwide), or their yields are much lower than expected, but I feel they should've known what the yields were like since August, at least.
Think economics: Luxury good vs normal good vs inferior good.

A big part of the draw of these things is having an item no one else has. This plays into NVidia's marketing strategy, and is overall beneficial to NVidia IMO. It's how you market luxury goods. If anything, AMD should learn from NVidia and work towards that kind of marketing. If everyone has a thing, it isn't a luxury anymore. It's just normal.

AMD, for better or worse, seems to be using an inferior-good strategy. IMO, that diminishes the brand a bit, but it does make AMD's stuff a bit more personable. I don't believe that the average Joe buys a $799 GPU, and seeing AMD consistently release stuff in the $150 to $400 market is a laudable goal (especially because NVidia seems to ignore that market). The argument AMD makes is almost always price/performance, but that just solidifies the idea of "inferior goods" in people's minds. It's subconscious, but that's the effect.
Posted on Reply
#135
BoboOOZ
dragontamer5788Think economics: Luxury good vs normal good vs inferior good.

A big part of the draw of these things is having an item no one else has. This plays into NVidia's marketing strategy, and is overall beneficial to NVidia IMO. It's how you market luxury goods. If anything, AMD should learn from NVidia and work towards that kind of marketing. If everyone has a thing, it isn't a luxury anymore. It's just normal.

AMD, for better or worse, seems to be using an inferior-good strategy. IMO, that diminishes the brand a bit, but it does make AMD's stuff a bit more personable. I don't believe that the average Joe buys a $799 GPU, and seeing AMD consistently release stuff in the $150 to $400 market is a laudable goal (especially because NVidia seems to ignore that market). The argument AMD makes is almost always price/performance, but that just solidifies the idea of "inferior goods" in people's minds. It's subconscious, but that's the effect.
Meh, Nvidia sells millions of these cards each generation, and I'm pretty sure that if there were 100k 3080s they would sell fast at prices significantly above MSRP, but I think nobody has them, not even Nvidia. But I guess we'll know more soon.
Posted on Reply