Tuesday, September 22nd 2020
AMD Radeon "Navy Flounder" Features 40CU, 192-bit GDDR6 Memory
AMD uses offbeat codenames such as "Great Horned Owl," "Sienna Cichlid," and "Navy Flounder" to identify sources of leaks internally. One such upcoming product, codenamed "Navy Flounder," is shaping up to be a possible successor to the RX 5500 XT, the company's segment-leading 1080p product. According to ROCm compute code fished out by stblr on Reddit, this GPU is configured with 40 compute units, a step up from 22 on the RX 5500 XT, and is paired with a 192-bit wide GDDR6 memory interface.
Assuming the RDNA2 compute unit on next-gen Radeon RX graphics processors has the same 64 stream processors per CU as RDNA, we're looking at 2,560 stream processors for "Navy Flounder," compared to 5,120 (80 CUs) on "Sienna Cichlid." The 192-bit wide memory interface gives AMD's product managers a high degree of segmentation flexibility for graphics cards under the $250 mark.
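For reference, those stream-processor figures follow directly from the assumed shader count per CU; a minimal sketch of the arithmetic, assuming RDNA2 keeps RDNA's 64 stream processors per compute unit (not yet confirmed):

```python
# Back-of-the-envelope stream processor counts, assuming RDNA2 keeps
# RDNA's 64 stream processors per compute unit (an assumption, not confirmed).
SP_PER_CU = 64

configs = {
    "Navy Flounder (rumored)": 40,
    "Sienna Cichlid (rumored)": 80,
    "RX 5500 XT (Navi 14)": 22,
}

for name, cus in configs.items():
    print(f"{name}: {cus} CUs -> {cus * SP_PER_CU} stream processors")
# Navy Flounder: 2,560 | Sienna Cichlid: 5,120 | RX 5500 XT: 1,408
```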
Sources: VideoCardz, stblr (Reddit)
135 Comments on AMD Radeon "Navy Flounder" Features 40CU, 192-bit GDDR6 Memory
That being said, AMD could launch something really expensive for prosumers with HBM2; on the plus side, they could make fun of Micron's false claims about GDDR6X.
I think Camm is talking about the rumored "cache chip": an L4 128 MB or 64 MB SRAM cache that's said to be on Navi. I don't know where that rumor came from... but the general gist is that everyone's looking at 192-bit bus or 128-bit bus designs thinking AMD has an "ace in the hole" or something to compete against 3080 or 3070 chips.
With regards to the L4 cache rumors: a fast SRAM may be good for framebuffer operations, but most operations would be going to VRAM anyway (because 4k or 8k textures are so damn big... as is the vertex data). I can't imagine a 64MB or 128MB cache really helping these intense games. But I'm willing to be proven wrong on the issue. I'll think about the issue more if the "super-large cache" rumors are confirmed.
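For a rough sense of scale on the cache question, here is a back-of-the-envelope sketch comparing typical 4K render-target sizes against a hypothetical 128 MB on-die cache; the buffer formats and byte-per-pixel figures are illustrative assumptions, not anything confirmed for RDNA2:

```python
# Rough sizing: could a 64-128 MB cache hold common render targets?
# Formats and bytes-per-pixel below are illustrative assumptions.
def buffer_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 ** 2)

w, h = 3840, 2160                       # 4K render target
color = buffer_mb(w, h, 4)              # RGBA8 color buffer
depth = buffer_mb(w, h, 4)              # 32-bit depth/stencil
hdr   = buffer_mb(w, h, 8)              # FP16 HDR color buffer

print(f"4K RGBA8 color buffer : {color:5.1f} MB")   # ~31.6 MB
print(f"4K 32-bit depth buffer: {depth:5.1f} MB")   # ~31.6 MB
print(f"4K FP16 HDR buffer    : {hdr:5.1f} MB")     # ~63.3 MB
# A 128 MB cache could plausibly hold a few such render targets at once,
# but a full set of 4K textures runs into hundreds of MB, so texture
# traffic would still largely go to VRAM.
```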
It might be misinformation from AMD, but it's not completely crazy, especially if you listen to Mark Cerny's diatribe in the second video.
And, as I said on the previous page, we don't know anything about the actual configurations of retail SKUs here. While I also expect a top-end 40CU card with no power or clock limits to exceed the 5700 XT - it's the same CU count on an updated arch and node, after all - that doesn't tell us anything at all about cut-down versions, power targets, clock speeds, etc. And given the massive perf/$ jump we've (finally!) seen with Ampere, pricing for a 5700 XT equivalent necessarily moves downwards by quite a bit. Lower midrange - below $300, hopefully closer to $200 - sounds about right. That would certainly represent a return to the previous norm of performance/$ actually increasing each generation, rather than the stagnation we've seen in recent years. So it's entirely possible that AMD chooses to present that 40CU GPU as a successor to the 5600 XT, as their "high-fps 1080p gaming" card (despite it being perfectly capable of mid-to-high settings 1440p), and thus make 6GB of VRAM a completely reasonable amount (barring any cache shenanigans, of course).
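As a side note on the 6 GB point above, the capacity options fall straight out of the bus width and standard GDDR6 chip densities; a quick sketch, assuming conventional 32-bit-wide chips at 8 Gb or 16 Gb density:

```python
# Why a 192-bit GDDR6 bus naturally lands on 6 GB or 12 GB of VRAM:
# each GDDR6 chip has a 32-bit interface, and common densities are
# 8 Gb (1 GB) or 16 Gb (2 GB) per chip.
BUS_WIDTH_BITS = 192
CHIP_WIDTH_BITS = 32

chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS          # 6 chips on a 192-bit bus
for gb_per_chip in (1, 2):
    print(f"{chips} x {gb_per_chip} GB chips = {chips * gb_per_chip} GB VRAM")
```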
Remember, power supplies don't run at 100% efficiency.
Also, here's a link from TechPowerUp:
www.techpowerup.com/gpu-specs/playstation-5-gpu.c3480
And in case you believe that one isn't confirmed, then take a look at this link for the PS4 Pro:
www.techpowerup.com/gpu-specs/playstation-4-pro-gpu.c2876
The PS4 Pro has a 310 W power supply, and its SoC is rated at 150 W TDP.
As far as rumors go, I've been hearing that Sony is keeping power usage around the same as last gen, so if anything it could be as low as 150 W, though I think it's more likely to be around the 175 W mark.
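To put the PSU-rating versus SoC-TDP argument into numbers, here is a rough sketch using the PS4 Pro figures cited above; the efficiency value is purely an assumption for illustration, not a measured spec:

```python
# Quick sketch of the ratio being debated: SoC TDP vs. PSU rating,
# plus what wall draw would look like at an assumed PSU efficiency.
psu_rating_w = 310            # PS4 Pro power supply rating (per the TPU page above)
soc_tdp_w = 150               # PS4 Pro SoC TDP (per the TPU page above)
assumed_efficiency = 0.85     # illustrative assumption only

ratio = soc_tdp_w / psu_rating_w
wall_draw_w = soc_tdp_w / assumed_efficiency

print(f"SoC TDP as a share of PSU rating: {ratio:.0%}")               # ~48%
print(f"150 W of SoC draw at 85% efficiency ~ {wall_draw_w:.0f} W at the wall")
```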
nVidia managed to fool people by comparing prices to Turing, and let's not forget their lies about MSRP.
Now, a lot of leaks come from random people with dumpster accounts rather than messaging one of the 'leaker' channels, but this one has fuck all provenance.
Buuuuuut. There could be some truth to the rumour. Namely, by adjusting your stack size you could theoretically cut down on memory price whilst still achieving better thermals, latency, and even bandwidth. The better-thermals argument also might make sense considering how costly those Ampere coolers likely are (making a switch to HBM2e possibly BOM-neutral or advantageous).
HBM2e is rated for up to twelve-high stacks, but I do believe the highest product is still 8-high at the moment.
192 Channel Bit = 6-high HBM2e stack; this would give up to 12 GB of memory and 345 GB/s per stack.
256 Channel Bit = 8-high HBM2e stack, giving up to 16 GB and 460 GB/s per stack.
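For context on those bandwidth figures, here is a back-of-the-envelope sketch of peak bandwidth for an HBM2E stack versus the rumored 192-bit GDDR6 interface; the data rates used are assumptions within commonly quoted ranges, not confirmed specs for any product:

```python
# Peak-bandwidth arithmetic behind the per-stack figures above.
# Data rates are assumptions within commonly quoted HBM2E/GDDR6 ranges.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8      # bits -> bytes

# Each HBM2E stack exposes a 1024-bit interface.
print(f"HBM2E stack @ 2.7 Gbps : {peak_bandwidth_gbs(1024, 2.7):.0f} GB/s")  # ~346 GB/s
print(f"HBM2E stack @ 3.6 Gbps : {peak_bandwidth_gbs(1024, 3.6):.0f} GB/s")  # ~461 GB/s

# For comparison, the rumored 192-bit GDDR6 interface:
print(f"192-bit GDDR6 @ 14 Gbps: {peak_bandwidth_gbs(192, 14):.0f} GB/s")    # 336 GB/s
print(f"192-bit GDDR6 @ 16 Gbps: {peak_bandwidth_gbs(192, 16):.0f} GB/s")    # 384 GB/s
```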
Now, do you believe this, or that AMD has magical cache technology that lets it compete at 3080 levels with a significantly narrower bus?
Seriously, I'm pretty sure AMD is playing us big time, and all the leaks are either directly from them or from fooled AIB partners.
In any case, I'm pretty sure they are much smarter than the average forumite, and they know they cannot disrupt the market with a 256-bit GDDR6 bus.
These days asking a difficult question is enough for attention it seems. In the land of the blind and those with no life... And in the meantime, everyone can just flee into whatever they like to believe best. Wishful thinking...
Results are similar at other resolutions, though more expensive GPUs "improve" at 4k. As for value increasing as you drop in price: just look at the 5600 XT compared to the 5700 XT in those charts. Same architecture, same generation, same die, yet the cheaper card delivers significantly higher perf/W than the more expensive one.
As for your comparison: you're comparing across generations, so any value comparison is inherently skewed to the point of being invalid. If it weren't the case that you got more bang for your buck in a new generation, something would be very wrong. As it admittedly has been for a couple of generations now, and the 3080 hasn't fixed it either, just brought us back closer to where things should be. It is absolutely to be expected that all future GPUs from both manufacturers at lower points in their product stacks will deliver significantly better perf/$ than the 3080. That's the nature of premium products: you pay more for the privilege of having the best. $700 GPUs have never been a good value proposition.

We're literally a week post launch. Demand has been crazy, scalpers with bots have run rampant, and everything is sold out. Give it a while to normalize before you comment on "real-world" prices. And while Nvidia previously pulled the "here's the MSRP, and here's the somehow premium, but also baseline, FE card at a premium price" shenanigans, with cards at MSRP being nearly nonexistent, to be fair to them they seem to have stopped doing that this time around.

Right. As if I ever said that. Maybe actually read my post? Nice straw man you've got there.

Again: accounted for, if you had bothered to actually read my post.
Let me refresh your memory: What I'm saying here is that the SoC TDP accounting for only 50% of the PSU's rating - a rating which might include PSU losses due to efficiency - sounds a bit low. I'm asking you to source a number that you're stating as fact. Remember, you said: No source, no mention that this is speculation or even anything beyond established fact. Which is what I was asking you to provide.

Unsourced, based on rumors and speculation. The page says as much.

That is at least a correlation, but correlation does not mean that the rumor you are quoting as fact is actually true.

This is what you call speculation. And here is the core of the matter: you are repeating rumors and "what you think is likely" as if they were indisputable fact. It is of course entirely possible that the PS5 SoC has a TDP somewhere around 175 W - but you don't have any actual proof of this. So please stop repeating rumors as if they are facts. That is a really bad habit.
A Twitter user is on par with, or lower than, a Reddit user.
I like my facts to be real.
Besides, as I said, moving down from the 5700 XT already got you significant gains in perf/$, meaning every performance segment needs to shift further downwards. All of these products are competing in the same market, after all.

Did my post indicate I was only talking about official prices? As I said: everything is sold out except for scalpers, demand is crazy high, and scalpers are using bots to buy everything they can. Of course prices are artificially inflated. But if we wait a bit for supply to improve, those prices will inevitably come down. You're arguing on the basis of a short-term fluke and making broad, sweeping points based on it. The average price of a 3080 in the future obviously won't be $699 - that's the baseline price, after all - but there will be cards available at and/or close to that price point once supply overtakes demand.