Tuesday, June 11th 2024
Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings
A curiously described graphics card was spotted by Olrak29 as it made its way through international shipping. The shipment description for the card reads "GRAPHIC CARD NAVI48 G28201 DT XTX REVB-PRE-CORRELATION AO PLATSI TT(SAMSUNG)-Q2 2024-3A-102-G28201." This decodes as a graphics card with the board number "G28201," built for the desktop platform, featuring a maxed-out version of the "Navi 48" silicon on the B revision of the PCB. It carries Samsung-made memory chips and is dated Q2 2024.
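Purely as an illustration, here is a minimal Python sketch of how the fields in that description string map to the decoded attributes; the field meanings are inferences from the reading above, not an official format:

```python
# Illustrative only: a rough mapping of the shipment description to the
# fields decoded above. Field meanings are inferred, not official.
desc = ("GRAPHIC CARD NAVI48 G28201 DT XTX REVB-PRE-CORRELATION "
        "AO PLATSI TT(SAMSUNG)-Q2 2024-3A-102-G28201")

decoded = {
    "silicon":  "Navi 48"          if "NAVI48"  in desc else "unknown",
    "board":    "G28201"           if "G28201"  in desc else "unknown",
    "platform": "desktop"          if " DT "    in desc else "unknown",
    "variant":  "XTX (maxed out)"  if "XTX"     in desc else "unknown",
    "pcb_rev":  "B"                if "REVB"    in desc else "unknown",
    "memory":   "Samsung"          if "SAMSUNG" in desc else "unknown",
    "dated":    "Q2 2024"          if "Q2 2024" in desc else "unknown",
}

for field, value in decoded.items():
    print(f"{field:>9}: {value}")
```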
AMD is planning to retreat from the enthusiast segment of gaming graphics cards with the RDNA 4 generation. The company originally entered this segment with the RX 6800 series and RX 6900 series of the RDNA 2 generation, where it saw unexpected success from the crypto-mining market boom, besides being competitive with the RTX 3080 and RTX 3090. That market went bust by the time RDNA 3 and the RX 7900 series arrived, and the chip wasn't competitive with NVIDIA's top-end. Around this time, the AI acceleration boom squeezed the foundry allocation of all major chipmakers, including AMD, making large chips based on the latest process nodes even less viable for a market such as enthusiast graphics—the company would rather make CDNA AI accelerators with its allocation. Given all this, the company's fastest GPUs from the RDNA 4 generation could be the ones that succeed the current RX 7800 XT and RX 7700 XT, so AMD could capture a slice of the performance segment.
Sources:
Olrak29 (Twitter), HXL (Twitter), VideoCardz
73 Comments on Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings
And NO, it's NOT 40-50% faster. The top RDNA3 cards are just BIGGER dies compared to RDNA2 GPUs, and that's why they seem to improve in RT performance. Because they are BIGGER chips. The improvements there are a result of bigger GPUs, not faster RT. Are you intentionally pretending not to understand what I am posting? Am I wasting my time with you? (obviously)
The 5800X in 2020; Milan-X and the 5800X3D in 2022. Excuses are rarely arguments. In this reply of yours, they are not.
And it is a screw-up on AMD's part, because the reception of AM5 was lukewarm for a number of reasons. AM5 became exciting again when the X3D models came out. If AMD had come out with X3D models quickly, the reception of the AM5 platform would have been much better. You not being able to understand it is also NOT an argument. And using an IOD makes it easier. It's not a big deal that the Intel platform was seen as cheaper at a time when Intel was also advertising a higher number of cores? Nice one.
The rest of your post is speculation that doesn't make sense. Something that could be done in 2008 can be done more easily today. And it had nothing to do with power. The mem controller not in use just shuts down. You totally lost it here. When I talk about Intel and mention hybrid CPUs, I obviously don't mean the Phenom era. Try to READ what others reply to you. So far you are the only one showing indications of trolling.
The practical realities of AMD ended the moment it could offer $30-40 billion in shares and cash for Xilinx. We are not in 2015. Dual memory support had nothing to do with it. As I said, when a mem controller is not used, it just shuts down. We are not in 1999, where the whole chip is constantly on. Modern CPUs shut down the parts not in use. The problem with Intel and power consumption is simply the process. Take a Ryzen 7950X and build it on Intel's 7nm process. See where power consumption will go. My God. Are you 12? Don't show me how much DDR5 dropped; compare the price of (for example) 32 GB of DDR4-3200 with 32 GB of DDR5 and tell me there is no significant difference in price. You think DDR4 remained the same?
You are trying too hard to ignore logic and fabricate non-existent arguments. You still seem to have problems understanding what the other person says or what the subject is. Or maybe it's my English.
AMD didn't really have many problems in the past (10+ years ago) with software. The only problem they had was that developers were building games on Intel+Nvidia hardware, so when the games came out, Radeon cards either weren't performing optimally and/or had to deal with bugs. That led to AMD's bad reputation in drivers. Other than that, AMD was fine, and Nvidia was trying hard to differentiate its hardware with PhysX and GameWorks libraries.
The rest of the history lesson presented as excuses is (again) NOT an argument. You keep reading and misinterpreting my posts. AMD builds the CPUs, you know. Not MS. Win 10 has been out there for 10 years now. AMD can build hardware to be compatible with both Win10 and Win11. Not being able to use the NPU under Win10 could even be seen as an advantage from the point of view of people who hate the idea of AI in their OS. But they DID find wafers available from TSMC. It's not that TSMC told them, "Sorry, no capacity left for you." Intel DID find some. AMD DIDN'T try to allocate more when there was capacity available.
I already posted about how they could use Samsung. And yes, it needs time and money, but they could free up capacity for more high-end products while having much higher supply of mid-range products. While we are all bitching at Dell for being glued to Intel, Intel can guarantee Dell huge supply. AMD seems unable to do so. So they need to take steps in the direction where they will start growing as a company. If that means double sourcing, they should do so. Nvidia took its risks and is now a $3 trillion company. AMD plays it safe and remains vulnerable. Its valuation is a result of investors hoping AMD will start moving its legs. When they start losing faith, and they have over the last few months, AMD's valuation will start going down and AMD will become even more vulnerable. If Intel fixes its manufacturing and Windows on ARM starts making some progress, probably slow but progress nonetheless, things will start getting ugly.
Certain models became faster than their predecessors because they have more execution units and VRAM, but the architecture itself didn't get faster. CU for CU, RDNA 3 performs about the same as RDNA 2 in most cases.
Edit: The 7600 is a slightly more advanced 6650 XT, and the 7800 XT is a slightly more advanced 6800 XT. Same CU count, mostly the same speed.
All performance in GPUs is relative performance. Why? Because as a buyer, you're comparing products. You compare your current GPU to the one you're gonna buy. You compare Nvidia's offerings against AMD's.
With each product in the tiers of each company comes an associated price, but also an associated product with a die size, which defines its 'value'. Big dies are more expensive. So if AMD needs a 50% bigger die (hypothetically) to get 50% more RT performance, have they really increased RT performance by 50%? Or have they just thrown 50% more hardware at it, at a 50% higher cost, often even more than 50% more, because bigger dies have worse yields?
So it really matters whether the 50% is gained relative to the last generation or whether it comes from absolutes like square mm of die space. If it's the former, you've just made the same product (or a new version of that product) 50% more valuable in that aspect. If it's the latter, you've just wasted resources and made zero progress. Now again, this is a crucial concept to understand. It defines the market, and it defines what you purchase and why. If you don't understand this, you don't understand hardware at all, no offense intended! Just to emphasize how much it matters and how important the context is around saying 'X gained 50% in Y'.
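To put rough numbers on this, here's a minimal sketch; every figure in it (die areas, performance numbers, the Poisson defect density) is hypothetical and purely illustrative, not real RDNA 2/3 data:

```python
from math import exp

# Hypothetical figures, purely illustrative -- not real RDNA 2/3 specs.
old_die_mm2, old_rt_perf = 300.0, 100.0  # last gen: die area, RT perf (arb. units)
new_die_mm2, new_rt_perf = 450.0, 150.0  # new gen: 50% bigger die, 50% faster RT

# Card vs card: looks like a 50% uplift.
card_gain = new_rt_perf / old_rt_perf - 1.0  # 0.50

# Architecture vs architecture: RT performance per mm^2 of silicon.
arch_gain = (new_rt_perf / new_die_mm2) / (old_rt_perf / old_die_mm2) - 1.0  # 0.00

# Bigger dies also yield worse (simple Poisson defect model with an
# assumed defect density of 0.001 defects per mm^2 -- a made-up number).
d0 = 0.001
old_cost = old_die_mm2 / exp(-d0 * old_die_mm2)  # relative cost per good die
new_cost = new_die_mm2 / exp(-d0 * new_die_mm2)
cost_gain = new_cost / old_cost - 1.0            # ~0.74

print(f"card-level gain:   {card_gain:+.0%}")  # +50%
print(f"per-area gain:     {arch_gain:+.0%}")  # +0%  -> no architectural progress
print(f"cost per good die: {cost_gain:+.0%}")  # ~+74% -> worse than linear
```

Under these made-up numbers, the "50% faster" card delivers zero gain per mm² while costing about 74% more per good die, which is the difference between the two readings of 'X gained 50% in Y'.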
It will behave in a different manner, affecting efficiency, but overall power tends to remain the same.
Anyway, that is off-topic for this thread.
Remember, my original comment was about performance only, not perf/ALU or perf/transistor, etc. Milan-X came out months before the 5800X3D, March or so vs June 2022. So it wasn't that the tech didn't exist, as you claimed earlier. It was that AMD decided to launch the extra-cache dies on server parts first. It's not excuses. I'm giving you the facts.
They just don't have the resources or money to do everything they want. That is a big part of the reason why they're generally seen as the underdog in the x86/GPU biz!
If you want to prove me wrong here, you've got to show they had billions more to spend, year after year, on software development and other software-related stuff like compilers during the Bulldozer years at a minimum. Or even now (hint: you can't, their debt load is pretty high, they pushed themselves to the limit buying Xilinx). Just saying "nuh uh" isn't going to convince anyone of anything here. It wasn't a 'number of reasons'. It was that the platform was expensive. People started buying it more when the price dropped a bit on the mobos and CPUs. Having X3D at launch would've been nice, but it wasn't a game changer and wouldn't have addressed the cost issue. That doesn't change that you can't compare apples to oranges here. They're going to be very different by default.
Having an IOD makes changes easier, true, but it doesn't solve the fundamental issues with the bigger die needed for another memory controller or the heat/power used.
Gating the transistors off when not in use is ideal but apparently not always possible. The IOD's power use is already a big part of why AMD systems use more power at idle than they should. If they could power gate all of it off as needed, they would've already done so. But they can't. You made the claim that Intel's DDR4/5 support was the reason that platform did well, and now you're shifting goalposts to talk about core counts? Especially when E-cores are involved?
Look, pick one argument and stick to it. You brought up Phenom as an example, though. If it's not a valid comparison, then why even bring it up? AMD bought Xilinx in 2022. The debt load from that deal is something they'll be dealing with for a long time. Why are you even bringing up 2015? Zen 1 wasn't even out until 2017. This is goalpost shifting.
I showed how DDR5 prices dropped, which was all that was required to show that AMD had a reasonable approach in delaying DDR5 support. You claimed this was a mistake. So in order for you to be correct, you have to show that DDR5 prices stayed the same or rose instead. Good luck with that. So they have compilers as good as NV's for their GPUs? OpenCL apps have as much market share as CUDA apps? No. OpenCL, AMD's compilers, and software support in general for GPU compute are pretty lousy.
AMD's game drivers are generally pretty good these days, but that is a smaller part of the puzzle when talking about software in general. Facts support arguments, and what I've been saying all along is that AMD has been financially and resource constrained for most of its existence, which is hardly news.
You can't ignore their financial and resource issues and be reasonable at the same time. So if AMD throws more ALUs, cache, or whatever transistors on die, then the libraries and OS software support magically spring from nowhere?
No.
MS controls their OS, and that means MS has to develop the support in their OS for new tech such as NPUs. Neither Intel, AMD, nor QC can force that issue by adding more hardware. You can already use Win10 without NPU support! With either AMD or Intel! They don't have to do anything! Win10 will simply ignore the AI hardware since it doesn't know how to use it. It'd be like running Win9x on a multicore system: Win9x will still run fine on a Q6600 or a Phenom II X4, for instance. It will just use only one core.
That Intel was able to get wafers doesn't matter. They can do stuff AMD can't all the time since they have more money and resources! Money and resources matter massively here!
"Double sourcing" is FAR easier said then done. And it is still very resource and money intensive! If they don't have enough of either then they're stuck.
AMD took a huge risk paying as much as they did for Xilinx, so I think you don't know what's really at stake here. Again, their debt load is rather high. How exactly can they afford to take on billions more a year at this point? Didn't I specify exactly how I'm looking at it earlier in the thread, by talking only about the ray tracing performance of RDNA3 vs RDNA2, and not performance per ALU or performance per die size across all features or GPUs?
If you keep insisting on talking about some other metric, isn't that apples to oranges? I don't care about hypotheticals. I care about the real-world performance that I can actually buy and the price I'm paying at an actual store, not what it should theoretically be according to whatever hypothetical someone cooks up.
You can say that the 7900 XTX has more RT performance than the 6900 XT, but that's a card vs card comparison, not an architecture vs architecture one. To that, my answer is the 7600 vs 6650 XT or the 7800 XT vs 6800 XT. Perhaps. What I meant is that adding more RT units to one card doesn't equate to having more RT performance across the whole architecture/generation.
And I don't need to remember anything. Your original comment was wrong, and now you try to present it as something different, because you are obviously not in denial, right?
You said that "RDNA3 is typically 40-50% faster at ray tracing than RDNA2." You didn't say that the RX 7900 XTX is 40-50% faster than the RX 6950 XT, for example. You clearly and wrongly insisted that RDNA3 was faster IN GENERAL, meaning as an architecture, by 40-50% compared to RDNA2.
I guess you wouldn't want to admit anything, and this is clear also from your next reply, where I point out to you that the 5800X came out 2 years before the X3D chips and you intentionally ignore it so you can continue insisting on your false narrative.
So, I stop here. Not reading the rest of your post. In fact, there's no reason to read any of your future posts. I am not going to lose my weekend when you obviously won't admit even the most obvious of your wrongs.
Congratulations. You win.
:respect:
PS OK, I am reading the rest of your post out of curiosity and facepalming at every sentence. For example, that comment about AMD's debt, and you keep going back to the Bulldozer era just to dig out some irrelevant arguments. lol. I am going to miss some priceless comments from you. :p
PS2 Ahahahahahahahaha!!!!!!!............................................. oh my god ............(I keep reading)
You care about the price and no hypotheticals, but nobody is talking about hypotheticals here; we are all talking about real GPUs with real performance deficits, measured and tested, and they don't show the gain you speak of. You took a piece of marketing bullshit and ran with it, so we are here to place that in the right perspective.
You're just wrong, but if this is the hill you want to die on... okay. Believe what you want to believe while everyone else knows better ;)
Hence, the RX 7800 XT is overall slower than the RX 6800 XT, which is the better buy.
www.techpowerup.com/review/asus-radeon-rx-7900-gre-tuf/12.html
The topic of the conversation, before you butted in, was the similar RT performance due to RDNA 2 and 3 having the same RT cores. And you take Counter-Strike 2 as your cherry-picked example? Seriously now? :kookoo:
Edit: I don't know why you always have to say something that entirely misses the point of the conversation, but it's super annoying.
There are many dozens of games not included in the above review, so it's not worth posting the average results from it; it is in fact cherry-picking only the recent games, which is wrong.
What about people like me who don't play CS2, so couldn't care less about how it is optimised to utilise shaders? Hm? :rolleyes:
I know you have an answer that trumps all others, because you always do. You're the wisest person in existence, obviously, therefore all further conversation with you is pointless.
Why did you say that the RX 6800 XT and RX 7800 XT have the same compute unit count?
It can't be a case of "CS2's developers are geniuses and every other game developer in the entire world is damn stupid", surely?
Also, by admitting you made a mistake, that means that everything you posted until now (and everything you will post in the future) is wrong, which makes the other interlocutor always correct.
PS * In the '80s, the prime minister of Greece was Andreas Papandreou, a great manipulator of the masses. Once, when he had to admit in public that he had made a mistake, he didn't say "I made a mistake." Instead, he preferred the Latin phrase "mea culpa," knowing that most people in Greece didn't know Latin, and also knowing that many would just be amazed by the use of that Latin phrase. So he turned a negative situation to his advantage.
Either way I don't think that is legit and I have to say it seems rather rude at best to me!
I'd prefer, if you can't actually address what I'm saying in an honest manner, that you two not reply to me at all. Add me to your ignore lists if it helps, thanks! If you want to ignore parts of the product line that don't suit your metric, sure. But that isn't reasonable.
More performance is more performance even if they get it in a way that you don't find pleasing for one reason or another.