Tuesday, October 6th 2020

AMD Big Navi GPU Features Infinity Cache?

As we near the launch of AMD's highly hyped, next-generation RDNA 2 GPU codenamed "Big Navi", more details are emerging. Rumors already suggest that the card will be called the AMD Radeon RX 6900 and that it will be AMD's top offering. Using a 256-bit bus with 16 GB of GDDR6 memory, the GPU will not use any type of HBM memory, which has historically been rather pricey. Instead, it looks like AMD will compensate for the narrower bus with a new technology it has developed. Thanks to new findings on the Justia Trademarks website by @momomo_us, we have information about the alleged "Infinity Cache" technology the new GPU uses.

VideoCardz reports that the internal name for this technology is not Infinity Cache; however, it seems AMD could have changed it recently. What exactly does it do, you might wonder? Well, that is a bit of a mystery for now. It could be a new cache technology that allows L1 cache sharing across the GPU's cores, or some interconnect between the caches found across the whole GPU. This information should be taken with a grain of salt, as we have yet to see what this technology does and how it works when AMD announces its new GPU on October 28th.
Source: VideoCardz
Add your own comment

141 Comments on AMD Big Navi GPU Features Infinity Cache?

#51
BoboOOZ
efikkanGuys, please, if you mean performance per clock then say performance per clock. Don't use big words like "IPC" if you don't know what the technical term actually means. IPC is only relevant when comparing CPUs running the same ISA and workload for a single thread, while GPUs issue varying instructions across varying numbers of threads based on the GPU configuration, even within the same architecture.
I think it's safe to say that most people here talk about perf per clock, and this goes for the popular YouTubers, too.
Posted on Reply
#52
Fluffmeister
mechtech"Highly Hyped" ?? I must be living under a rock, I haven't seen much news on it. I recall seeing more stuff on Ampere over the past several months compared to RDNA 2.
The hype for RDNA 2 is through the roof, and the lack of news only helps. Nvidia are finally doomed and must play second fiddle for years to come.
Posted on Reply
#53
TheoneandonlyMrK
This is either the rumour that grew the biggest legs, or close to the truth; I'm not sure. I'm not sure if I like it either. That Big Navi GPU had better not be hamstrung on memory, but at this point who knows.
Posted on Reply
#54
Nkd
FrickIt's less about being stupid and more about managing expectations. High tier AMD cards have burned people in the past because they expected too much. The only sensible thing to do is to wait for reviews.
The math doesn't add up. The only way it adds up is if the 6900 XT is 60 CUs instead of 80 CUs at 2 GHz. If it's 80 CUs, it's going to be competing with the 3080 at minimum. There are no ifs and buts about it. It's just simple math.
Posted on Reply
#55
M2B
BoboOOZI think it's safe to say that most people here talk about perf per clock, and this goes for the popular YouTubers, too.

Even AMD uses the term IPC for their GPUs, though everybody here probably knows that IPC is mostly CPU terminology and we just use it for the sake of simplicity.
Posted on Reply
#56
ShurikN
Either Big Navi is not high end (hence 256-bit bus), and was never meant to compete with GA102,
OR
it is high end and has some sort of hidden mumbo-jumbo, in this case Infinity Cache (aka very large cache) to offset the bandwidth.

Do you ppl really think AMD (its engineers) went and made a 3080 competitor and then one day sat at a table and went "You know what this bad boy needs: a crippled memory bus. Let us go fuck this chip up so much that no one will ever buy it"? And then everyone clapped and popped champagne bottles and ate caviar, confetti was flying, strippers came and everything.
Posted on Reply
#57
Frick
Fishfaced Nincompoop
gruffiWhy do people like to poke around in the past? That should never ever be a valid argument. Things can always change for the good or the bad. Or did you expect the Ampere launch to be such a mess? Just concentrate on the facts and do the math. Big Navi will have twice the CUs of Navi 10 (80 vs 40), higher IPC per CU (10-15% ?) and higher gaming clock speeds (1.75 vs >2 GHz). Even without perfect scaling it shouldn't be hard to see that Big Navi could be 80-100% faster than Navi 10. What about power consumption? Navi 10 has a TDP of 225W, Big Navi is rumored to have up to 300W TDP. That's 33.33% more. With AMD's claimed 50% power efficiency improvement of RDNA 2 that means it can be twice as fast per watt. To sum it up, Big Navi has everything to be twice as fast as Navi 10. Or at least to be close to that, 1.8-1.9x. And some people still think it will be only 2080 Ti level. Which is ~40-50% faster than Navi 10.
It's just how people work. And if they expect it to be on 2080 Ti levels and it exceeds that, they'll be pleasantly surprised, as opposed to disappointed.
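The back-of-the-envelope scaling in the quoted post can be written out explicitly. Every input below is a rumored figure or AMD marketing claim taken from the quote (CU counts, clocks, TDPs, the 50% perf/W claim), not a confirmed spec:

```python
# Rough scaling estimate using only the rumored figures quoted above.
# All inputs are rumors or marketing claims, not confirmed specifications.
cu_scale = 80 / 40            # Big Navi vs. Navi 10 compute units
ipc_uplift = 1.10             # low end of the speculated 10-15% per-CU uplift
clock_scale = 2.0 / 1.75      # rumored >2 GHz vs. Navi 10's ~1.75 GHz game clock

ideal = cu_scale * ipc_uplift * clock_scale
print(f"Ideal scaling vs. Navi 10: {ideal:.2f}x")      # ~2.51x, assuming perfect scaling

# Cross-check against the power budget: 33% more TDP at 50% better perf/W
power_limited = (300 / 225) * 1.5
print(f"Power-limited scaling: {power_limited:.2f}x")  # 2.00x
```

The gap between the ideal ~2.5x and the power-limited 2.0x is exactly why, "even without perfect scaling", the quote lands on 1.8-2x estimates.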
Posted on Reply
#58
kingDR
I personally don't think that the new Big Navi will be only a 256-bit card, it's just impossible.
Posted on Reply
#59
Jism
It's possible. Perhaps AMD has developed a new memory compression technique to push more data through a smaller bus. Nvidia does it too. You can tell by the color difference (= image quality) the two cards and drivers have to offer. Nvidia is basically a bit more blurry compared to ATI/AMD, and that might explain why Nvidia has the upper hand in some scenarios. The GPU has fewer pixels and particles to draw.
Posted on Reply
#60
Valantar
JismIt's possible. Perhaps AMD has developed a new memory compression technique to push more data through a smaller bus. Nvidia does it too. You can tell by the color difference (= image quality) the two cards and drivers have to offer. Nvidia is basically a bit more blurry compared to ATI/AMD, and that might explain why Nvidia has the upper hand in some scenarios. The GPU has fewer pixels and particles to draw.
AMD has had memory compression for years, just like Nvidia. Nvidia's has historically been better, but the difference isn't major. Either way, it won't compensate for a memory bus this narrow - it's for compressing color data, after all, not texture assets and the like.
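For what it's worth, the delta color compression both vendors use is lossless by design: a tile of similar pixel values is stored as one base value plus small per-pixel deltas, and decompression reproduces the input exactly. A toy sketch of the idea (purely illustrative; not either vendor's actual format):

```python
def compress_tile(pixels):
    """Toy delta compression: store the first pixel plus per-pixel deltas.
    Small deltas need fewer bits, so tiles of similar colors compress well."""
    base = pixels[0]
    return base, [p - base for p in pixels[1:]]

def decompress_tile(base, deltas):
    """Exact inverse of compress_tile: a lossless round-trip."""
    return [base] + [base + d for d in deltas]

tile = [200, 201, 199, 200, 202, 200]          # similar values -> tiny deltas
base, deltas = compress_tile(tile)
assert decompress_tile(base, deltas) == tile   # no image-quality loss
```

Because the round-trip is exact, any visible blurriness would have to come from something other than this kind of compression.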
Posted on Reply
#61
Bansaku
FrickIt's less about being stupid and more about managing expectations. High tier AMD cards have burned people in the past because they expected too much. The only sensible thing to do is to wait for reviews.
As an RX Vega 64 owner, I endorse this comment! :peace:
Posted on Reply
#62
DeathtoGnomes
So it seems AMD is playing the second-place-but-affordable card once again. Why GDDR6 and not GDDR6X? What else could AMD have done to be a better match for a 3080?
Posted on Reply
#63
Bansaku
DeathtoGnomesSo it seems AMD is playing the second-place-but-affordable card once again. Why GDDR6 and not GDDR6X? What else could AMD have done to be a better match for a 3080?
Because GDDR6X is an NVIDIA exclusive. Google is your friend. :p
Posted on Reply
#64
DeathtoGnomes
BansakuBecause GDDR6X is an NVIDIA exclusive. Google is your friend. :p
you know where you can put your Google, right up your DuckDuckgo.
Posted on Reply
#65
bug
BansakuBecause GDDR6X is an NVIDIA exclusive. Google is your friend. :p
I don't think it's exclusive, as much as Nvidia offered to be the guinea pigs for GDDR6X and gobbled up all the available supply.
I'm also quite reluctant to put "affordable" next to a $500+ GPU.
Posted on Reply
#66
Minus Infinity
FrickIt's less about being stupid and more about managing expectations. High tier AMD cards have burned people in the past because they expected too much. The only sensible thing to do is to wait for reviews.
No, it's about simple math. Top-tier Navi 21 will be 72-80 CUs at 2.1-2.2 GHz with a 20% IPC uplift, so it can easily double the 5700 XT; that would smash the 2080 Ti and be on par with the 3080 while using 50 W+ less.
Posted on Reply
#67
SIGSEGV
FluffmeisterThe hype for RDNA 2 is through the roof, and the lack of news only helps. Nvidia are finally doomed and must play second fiddle for years to come.
Meh, AMD didn't do anything to create the hype. It's you. Yes, all of you, with your beyond-overpowered analysis.
Nvidia are finally doomed and must play second fiddle for years to come
I really hope that your statement comes true.
Posted on Reply
#68
Caring1
M2B
Even AMD uses the term IPC for their GPUs, though everybody here probably knows that IPC is mostly CPU terminology and we just use it for the sake of simplicity.
Well that only proves AMD is confusing the issue by making up a similar phrase with the same abbreviation as IPC.
Which by the way is Instructions Per Clock, NOT Improved Performance per Clock. :shadedshu:
Posted on Reply
#69
InVasMani
M2B
Even AMD uses the term IPC for their GPUs, though everybody here probably knows that IPC is mostly CPU terminology and we just use it for the sake of simplicity.

A lot of attention will be paid to IPC and clock speed, but it seems like the logic enhancements sit right between the two and could hopefully pay nice dividends as a result.
kingDRI personally don't think that the new Big Navi will be only a 256-bit card, it's just impossible.
I was thinking 320-bit or 384-bit might make sense for the high-end card, but if it has 16 GB of VRAM it certainly stands to reason that it's 256-bit; it would be more surprising if it weren't. That said, maybe that isn't Big Navi but rather the mid-range card that's 256-bit, unless AMD has confirmed otherwise.
DeathtoGnomesSo it seems AMD is playing the 2nd place but affordable card once again. Why GDDR6 and not GDDR6X? What else could AMD have done to be a better match to a 3080?
Cost could be a contributing factor, but it also stands to reason that Nvidia might've gotten the bulk of the supply, which was another factor. Either way, if AMD has come up with a cost-effective workaround in the form of an Infinity Cache, both AMD and consumers should in essence win. I see that being beneficial to both. I think AMD wanted to avoid another HBM-style price premium scenario. I still wouldn't be shocked if the 256-bit part isn't the premium Big Navi card but rather the mid-range one; or maybe it is, and they use GDDR6 on one model and HBM2 on another with the same memory bus. HBM2 has heaps of bandwidth in the first place, so they could get away with a smaller bus. I'm not sure that adds up entirely, but if you factor in the Infinity Cache I think it certainly could. Perhaps they use GDDR6 without the cache on the lowest model, and the two tiers above it both use the cache, one with GDDR6 and the other swapping in HBM2. I'm not sure what differences they'd make to the cores between them, but that's neither here nor there.
Posted on Reply
#71
nguyen
So the initial leaks suggesting the Navi 21 XT will sell for $550 make real sense now. The XTX flavor is probably reserved for Pro series cards.

Well, at least AMD can earn a higher margin on these Navi 21 chips than on the ones for the Xbox/PS5, so it's not a total loss. AMD has already contracted to purchase 30,000 wafers of TSMC 7nm; they have to make use of them.
Posted on Reply
#72
Vayra86
gruffiWhy do people like to poke around in the past? That should never ever be a valid argument. Things can always change for the good or the bad. Or did you expect the Ampere launch to be such a mess? Just concentrate on the facts and do the math. Big Navi will have twice the CUs of Navi 10 (80 vs 40), higher IPC per CU (10-15% ?) and higher gaming clock speeds (1.75 vs >2 GHz). Even without perfect scaling it shouldn't be hard to see that Big Navi could be 80-100% faster than Navi 10. What about power consumption? Navi 10 has a TDP of 225W, Big Navi is rumored to have up to 300W TDP. That's 33.33% more. With AMD's claimed 50% power efficiency improvement of RDNA 2 that means it can be twice as fast per watt. To sum it up, Big Navi has everything to be twice as fast as Navi 10. Or at least to be close to that, 1.8-1.9x. And some people still think it will be only 2080 Ti level. Which is ~40-50% faster than Navi 10.
225 > 300W

That is not 80-100% in any sliver of reality I know of. Not even with a minor shrink and an IPC/efficiency bump. Because 50%... yeah. That is what they call en.wikipedia.org/wiki/Magical_thinking

The stars do align with what we know of Navi so far, and that is: +75-100 W puts them at their peak TDP budget. A 256-bit GDDR6 bus caps their bandwidth at around 500 GB/s, and even a 2080 Ti already has a good 20% more on tap. So EVEN if they have some magical cache design that provides breathing room... let's say they gain 20% and get an effective 600 GB/s of throughput, or whatever performance equivalent that is for games. That'd be magical already.

So if they really did get 50% efficiency and really do get 300W TDP they have a grossly unbalanced GPU that will be memory starved half the time.

You have to be a really selective believer in rumors to get to your conclusion.
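The bandwidth figures in the post above are straightforward to reproduce: peak bandwidth is the bus width in bytes times the per-pin data rate. The 16 Gbps GDDR6 speed for Big Navi is an assumption (AMD hasn't confirmed the memory speed); the 2080 Ti numbers are its actual spec:

```python
def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width / 8 bytes) * per-pin Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

big_navi = peak_bandwidth_gbps(256, 16)     # rumored 256-bit GDDR6 @ 16 Gbps
rtx_2080_ti = peak_bandwidth_gbps(352, 14)  # 352-bit GDDR6 @ 14 Gbps
print(big_navi, rtx_2080_ti)                # 512.0 616.0
print(f"2080 Ti advantage: {rtx_2080_ti / big_navi - 1:.0%}")  # 20%
```

That ~512 GB/s vs. ~616 GB/s gap is the "good 20% more on tap" mentioned above, and it's the deficit any large on-die cache would need to hide.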
Posted on Reply
#73
bug
SIGSEGVMeh, AMD didn't do anything to hype. It's you. Yes, all of you with your beyond overpowered analysis.
Oh but they did. By throwing us bits and pieces, they practically created the hype.

I mean look at Ampere, for comparison: we had good hints about a new power connector, some insane amount of VRAM, doubled RTRT performance. Of course, no product leaked entirely, but we had enough to set most expectations.
Posted on Reply
#74
InVasMani
I think a lot of what Nvidia did with Ampere was expected; you knew that with the die shrink they'd have a bit of a home-run opportunity on hand. I really am a bit doubtful AMD beats out Ampere at the high end, but I'd certainly welcome the competition if they did. I do think they can potentially win in the mid-range and lower end and make some progress at being more competitive relative to Nvidia, but I'd say that still largely depends on AMD's ambition to do so at this stage. It does seem like their follow-up to RDNA 2 could really strike hard, especially with how well AMD has done financially lately as a company.

I think it's expected that we'll see some really good increases in competition from AMD the longer Intel struggles to regain its foothold in a convincing way, as opposed to "hey, I'm great at low-resolution, high-refresh-rate gaming and bad at security". On the plus side for Intel, at least they've become more convincing at multi-core performance. It sure is nice not to have quad cores representing the high-end CPU market these days; in fact, they're pretty much the low end now outside of laptops, and that won't last either. I think 8-core CPUs will be lower mid-range at minimum sooner rather than later the way things have been going, but we might see compromises in that segment like big.LITTLE, which isn't too terrible; it's tolerable for that end of the market, but less ideal at the other end.
Posted on Reply
#75
SIGSEGV
bugOh but they did. By throwing us bits and pieces, they practically created the hype.
Show me! Give me something to support your beyond-creative analysis about the upcoming Radeon lineup and its performance. The source must be officially from AMD.

:clap:
I mean look at Ampere, for comparison: we had good hints about a new power connector, some insane amount of VRAM, doubled RTRT performance. Of course, no product leaked entirely, but we had enough to set most expectations.
Aw, come on dude, don't change the topic. I know NVIDIA is impeccable to you.
Posted on Reply