Tuesday, June 11th 2024
Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings
A curiously described graphics card was spotted by Olrak29 as it made its way through international shipping. The shipment description for the card reads "GRAPHIC CARD NAVI48 G28201 DT XTX REVB-PRE-CORRELATION AO PLATSI TT(SAMSUNG)-Q2 2024-3A-102-G28201." This decodes as a graphics card with the board number "G28201," for the desktop platform. It features a maxed-out version of the "Navi 48" silicon, is based on the B revision of the PCB, uses Samsung-made memory chips, and is dated Q2 2024.
AMD is planning to retreat from the enthusiast segment of gaming graphics cards with the RDNA 4 generation. The company originally entered this segment with the RX 6800 series and RX 6900 series of the RDNA 2 generation, where it saw unexpected success thanks to the crypto-mining boom, besides being competitive with the RTX 3080 and RTX 3090. That boom had gone bust by the time RDNA 3 and the RX 7900 series arrived, and the chip wasn't competitive with NVIDIA's top end. Around this time, the AI acceleration boom squeezed the foundry allocation of all major chipmakers, including AMD, making large chips based on the latest process nodes even less viable for a market such as enthusiast graphics; the company would rather spend its allocation on CDNA AI accelerators. Given all this, the company's fastest GPUs of the RDNA 4 generation could be the ones that succeed the current RX 7800 XT and RX 7700 XT, so AMD can capture a slice of the performance segment.
Sources:
Olrak29 (Twitter), HXL (Twitter), VideoCardz
73 Comments on Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings
Saw it with Zen 3 and RDNA 3 as well. The tech rumor mill is always exciting, but people base too much on it and then get disappointed when it turns out to be hyperbole. That said, I was referring to the troll arf, but if you want to start making stuff up, you are welcome to it.
I was avoiding his channel because of what was being posted almost everywhere about his predictions, but lately I do watch those 10-20 minute videos with "information". They are not that bad. He talks about sources that might or might not be real, he does have some arguments that seem to make sense, and he is not as bad as he's been described left and right. I mean his short videos, not the 2-hour ones (you have to be on drugs to watch those), but the short videos are nice to watch, with a grain of salt. Yeah, he's been on my block list for the last couple of months. I rarely quote him. In my years online I have seen many people jump into a thread, post one line of the "you are all hallucinating" type, and then disappear without explanation.
You are not one of them, are you?
PS And here we have his usual kind of reply
But what is concerning for me in this "Navi 48 XTX" is the "48," which suggests a lower-end SKU, and the "XTX," which usually denotes the most overclocked and most power-hungry GPU available. But this is just my take. Yes, NVIDIA jumped to hardware RT first. However, AMD's software RT was available for ages, yet barely anyone was interested in RT. And once JHH called it developers' "holy grail," everyone immediately became an apostle of GeForce RTX. Exactly. All they want is money, to feed their greedy investors. The consumer graphics cards are just a placeholder to maintain the pretense of "being interested" in what is really an abandoned sector. This is depressing. And they all want people to jump onto their streaming subscription platforms, and want that to become the prevailing way of gaming ASAP. A failure... at some point.
1. Chiplets are going strong in their Instinct MI accelerators. Look at their success. I don't see anyone complaining there.
2. Performance is not enough, obviously. But what the actual targets were remains to be seen. The 4090 was the crypto/AI bait, and designed for them in the first place.
3. Sales are bad due to underwhelming reviews and a bad price. So is NVIDIA's pricing and value proposition, to say the least. The reason NVIDIA's Ada cards outsold it is purely public mindshare, and sometimes task-specific requirements. I hardly think everybody who bought NVIDIA cards needs them for huge compute loads or is going to be a streamer. Not to mention the pricing required to step up within the stack, for a barely visible performance difference outside the 4090.
However, yes, NVIDIA ticks more boxes. Most of them are uncompelling, though, to be honest.
4. The price is indeed atrocious. This is probably the most crucial factor that has led to the poor sales. Overall, these two factors are extremely interrelated, and a flop in either leads to bad results. NVIDIA will still outsell AMD 9:1 until AMD has enough chip allocation. Still, the grip on prices is too tight, and they are not going to lower them anytime soon. Sadly.
AMD makes a ton of profit on the enterprise side. They could easily make a positive impression by shifting the entire stack down $50, making it more favorable in the eyes of consumers, who are being gouged left and right.
5. Poor ray tracing performance? LOL. Have you seen the green team's results anytime recently? The entire point of DLSS and fake frames is to substitute for the dull real-time RT performance. Turn them off, and you barely see any difference to Ampere in raw RT performance.
It has been garbage since the very first day NVIDIA decided to roll out GPUs with "limited" RT capabilities. But the truth is, it will take several more generations until RT capabilities actually reach the minimum acceptable level for real-time ray tracing. Until then, any current RT solution is a complete joke, regardless of brand name. There's a reason why cinema CGI uses entire render farms to render a limited number of scripted scenes.
6. FSR is an upscaler. All upscalers stretch a low resolution onto a big screen. It doesn't take a science degree to understand that this is a fundamentally flawed technology. No matter what seasoning you add, the taste still remains garbage. I don't see what miracle people expect from a technology that is inherently inferior to running the big resolution natively. It's even possible to run a low resolution on a big screen natively; upscalers are only supposed to improve the already inferior image.
The only reason NVIDIA's solution is much better is the huge AI training and the huge amount of compute resources put into it. There's no magic. Without that, it would be as ugly as FSR. But at least FSR and XeSS are open, and do not require someone's permission to use them.
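To put numbers on the "stretching" I mean, here is a rough sketch using the per-axis scale factors commonly quoted for the FSR 2 quality presets; treat the exact figures as approximate, since actual behavior varies by game:

```python
# Rough sketch: the internal render resolution implied by commonly quoted
# FSR 2 preset scale factors (per-axis ratios; approximate, game-dependent).
PRESETS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w: int, out_h: int, scale: float):
    # The GPU renders at output / scale per axis; the upscaler stretches it back up.
    return round(out_w / scale), round(out_h / scale)

for name, scale in PRESETS.items():
    w, h = render_resolution(3840, 2160, scale)  # assuming a 4K output target
    print(f"{name:17s}: renders at {w}x{h}, upscaled to 3840x2160")
```

So at 4K "Quality," the card is really drawing roughly a 1440p image and stretching it; that's the starting handicap every upscaler, FSR or DLSS, has to paper over.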
7. Most reviews seem to show that AMD's software is much less of an issue lately. And considering that Reddit is mostly "calm" about it, that can be an "indication" of this. The drivers seem to be much better than before, though perhaps not flawless yet. But neither is NVIDIA, which has its own issues. The only remaining issue with AMD's software and drivers is the still-high media playback power consumption.
8. This is actually completely true. All three of NVIDIA, AMD, and Intel are not interested in consumer "gamer" entertainment multipurpose video cards. What they are interested in are insane, never-before-possible profit margins. And right now the source of those is AI. Greed has consumed them all.
The sheer silence from AMD about the status of their consumer GPUs is just another indication that they treat Radeon as the least of their priorities. But again, this applies to all three of them.
But at least they aren't raising false hopes. They'd better do their job silently and make RDNA 5 a real milestone, rather than run Vega-like advertising only to "sit in a puddle" with an underperforming product later. The hype train can go off the rails and end in a crash.
Discuss with civility.
Too much overhead for my liking. Also recently AMD bit into the AI craze just like the rest of the Magnificent 7 Stocks.
They need to perform well with their CPUs and not cost an arm and a leg to upgrade in the process.
Maybe they jumped the gun on chiplets with RDNA 3, but even NVIDIA will most likely switch with Blackwell's successor. High-NA EUV cannot produce chips bigger than ~429 mm², much smaller than low-NA (~858 mm²), and NVIDIA is already up against that limit.
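For anyone wondering where those numbers come from, here is the back-of-the-envelope math, assuming the commonly cited field sizes (26 × 33 mm for today's low-NA EUV scanners, with high-NA halving the field in one direction):

```python
# Back-of-the-envelope reticle-limit math (assumed field sizes, not official specs).
LOW_NA_FIELD_MM = (26.0, 33.0)    # full exposure field of current low-NA EUV scanners
HIGH_NA_FIELD_MM = (26.0, 16.5)   # high-NA halves the field in one direction

def area_mm2(field):
    width, height = field
    return width * height

print(f"Low-NA max die:  ~{area_mm2(LOW_NA_FIELD_MM):.0f} mm^2")   # ~858 mm^2
print(f"High-NA max die: ~{area_mm2(HIGH_NA_FIELD_MM):.0f} mm^2")  # ~429 mm^2
# Anything larger than the high-NA field needs stitching or chiplets,
# which is why big monolithic dies run out of road on high-NA nodes.
```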
Now, if the RDNA 4 8800 XT can match or exceed the 7900 XT in raster as many leaks claim, has a nice bump in RT, and comes with the vastly better FSR 3.1 and possibly a new AI-based FSR 4.0, it should do very well at $550-600 or so (16 GB, 256-bit). If true, expect fire sales on the 7900 XT, or restricted 8800 XT supply until 7900 XT stock dries up. AMD really has to deliver a big bump in raster and an even bigger bump in RT to survive with RDNA 4, though. The 7600, 7700, and 7800 are bitterly disappointing updates IMO and do not deserve their names. Not that NVIDIA is any different with the 4060 crap.
Personally, what I want from RDNA 4 is a bit better RT - not much, just enough to run light RT games like Avatar: FoP decently without FSR - and improved idle / video playback power consumption, all paired with the promised 7900 XT-like performance. I'll be looking for a new GPU when it comes out anyway, and if it delivers on these fronts, I'll be happy to get an 8800 XT or whatever name AMD comes up with for it.
My issue is that 8 cores isn't enough. I'd like 16 X3D cores, but that's very unlikely, so Intel it probably is if Arrow Lake is good. Graphics-wise, it's got to be NVIDIA this time: owning three 7900 XTX cards, not one of which was decent, left a very sour taste in my mouth. So a 5090, or possibly a 5080, and if that's not good enough, then a 4090 will be my upgrade path, because I'm not waiting goodness knows how long for AMD to get MCDs working. I'm guessing the lack of investment in the GPU division is really biting hard, especially now with NVIDIA slingshotting to second-richest company on earth on the back of AI, yet another thing AMD is missing out on. AMD needs to learn that you have to be right most of the time and be nimble, or NVIDIA will keep eating their lunch.
As for AMD missing out on AI, have you heard of XDNA?
Edit: You could also get a non-X3D CPU and call it a day.
RDNA 3 is typically 40-50% faster at ray tracing than RDNA 2. Still behind NVIDIA's 4000 series, but how is that not significantly better? X3D chips always come later. They came MUCH later for Zen 3! Zen 4 X3D chips actually launched much faster than Zen 3's, and Zen 5 X3D chips are coming even sooner and might be out before the end of the year. ???? What?!? Are you trolling or something? AM5 only supports DDR5 because AMD would have had to build an I/O die that supported both memory standards, and they considered that too burdensome since the IOD was already getting too big and hot with DDR5 alone. That, and they were launching their DDR5 platform well after Intel's, when DDR5 was supposed to drop in price, which is normal for them. They usually lag Intel in adopting new memory standards, and it's generally not considered a big issue since the new standard is usually expensive and not much faster than the previous one at introduction. Yeah, their software development efforts are a joke, but they have been for decades, unfortunately.
That isn't a new problem.
The issue here is that they'd have to spend giant sums of money they probably don't have, for years, to see any real improvement, and it's all they can do to fund R&D of new CPUs and GPUs along with getting chipsets done with ASMedia. They just don't have the resources. That isn't an AMD issue. The libraries and OS support just flat-out aren't there for Win 10, which is nearing EoL status.
Blame MS.
It's the same for Intel and QC as well. NPU support for Win 10 is dead on arrival. They have to compete with Apple, NVIDIA, and all the other companies for wafers. There isn't much they can do here. They certainly can't get a fab going again themselves; they don't have the resources for it. Neither does their old partner/spin-off GF, which threw in the towel at 16/12 nm. They do seem to be making overtures to Samsung for wafers, but I don't know if that is going to go anywhere. The point is they are trying, but everyone is fab-limited right now if you want something as good as or competitive with TSMC's best 5, 4, and 3 nm processes.
AMD did that trick with AM3 CPUs, which could be used with DDR2 and DDR3. Guess what: that was an advantage for those chips. You had an old DDR2 AM2+ motherboard and 8 GB of fast DDR2? Just throw a new six-core Thuban on it. You could change the platform later.
Intel did this with its hybrid CPUs. It won market share.
You think winning market share and giving consumers options is trolling? AMD did it 15 years ago. Intel did it recently. It's probably not that hard to integrate two memory controllers on a CPU. The rest of your post is again a history lesson, which is not really an argument. In fact, the "supposed to drop" that you wrote agrees with what I said, and you thought that was trolling. They didn't have such a problem in the past because things were simpler then. Today they have to throw money at software, whether they have that money or not. And they do have the money; they just don't know how to use it when it comes to software. They lack the experience. You don't need to support all the CPU features on Win 10. Just make it work in Win 10. If someone hates the idea of AI and NPUs and whatever Win 11 represents, give them the option to run Win 10 with the NPU disabled. Intel, with all its problems lately, did find wafers for Arrow Lake. I am pretty sure AMD could also find a few extra wafers. But AMD prefers to play it safe, so as not to pay penalties for unallocated wafers like it did with GF, or end up with huge inventory. But playing it safe 24/7 limits what you can achieve, and you end up stagnant with no real growth. I mean, look at AMD's financials today. If you remove Xilinx's contribution, AMD hasn't gotten any bigger over the last couple of years.
Spend some time reading that. They go over all the details for you. Long story short, they didn't get anywhere near the 80% improvement AMD was claiming, but 40-50% faster in micro-benchmarks does show it has real improvements over RDNA 2. You know better than to judge a video card based on one game's performance, especially one that is known for running poorly on almost all hardware.
Top-end RDNA 3 can come close to a 3090 Ti in ray tracing, which is a big step up over RDNA 2 even if it's still less than NV's 4000 series. I guess those Milan-X 7003 Epycs that launched months before the X3D chips just never existed then, huh?
And how does what you're saying bolster your comments about this somehow being a screw-up on AMD's part? Bear in mind that it was still very new packaging tech, and they were dependent on TSMC to really make it all work too. Further bear in mind that AMD makes more money on server CPUs than desktop parts, so it makes lots of sense to target them first and foremost as a business decision.
Whether you or I like that doesn't really matter here, since AMD can't ignore their bottom line. Did AM3 CPUs have a separate IOD like AM4/AM5 CPUs do?
If not then the comparison doesn't make sense.
To me, supporting DDR4/5 on the same socket would have been nice, but it's not a big deal, and there were real technical issues for AMD in pulling it off with the split-die approach they've gone for. Remember the IOD is on a larger process, which helps with cost and platform flexibility but uses quite a bit of power. Ignoring or downplaying that isn't reasonable. Intel won market share because all AMD had was Phenom II Thubans or Propus versus Intel's i7-2600K Sandy Bridge or i5-661 Clarkdale, which, let's face it, were much better CPUs that also tended to get good to great overclocks.
It had nothing to do with memory standard support. If you're ignoring the practical realities and business limitations that AMD is facing to reach your conclusions, then yes.
At best, if you're not trolling, then you're just making stuff up out of nowhere to form your opinions, which isn't reasonable either. Sure, they did it on the 12xxx-series CPUs, which weren't at all known for good power consumption either at load or at idle.
The power issues with those CPUs can't all be blamed on the dual memory support, but it sure didn't help! So DDR5 didn't drop in price between when Intel started supporting it in late 2021 with Alder Lake and when AMD launched AM5 in late 2022? Or continue falling in 2023?
cdn.wccftech.com/wp-content/uploads/2022/06/1-1080.03056867.png
You're trying way too hard to read whatever you want into my previous comment. The "supposed to drop" was what AMD was saying at the time as part of the reason for delaying DDR5 support, and hedging your bets with a forward-looking statement is common in marketing. I recall many were fine with this, since DDR5 was terribly expensive at first and didn't offer much of a performance benefit; early DDR5 didn't reach the higher clocks or lower timings it can now versus the top-tier DDR4 3600/3200 many were still using back then. Sure they did. That was why they hardly developed their GPGPU software or compilers for any of the GCN variants or TeraScale either. And why 3DNow! went virtually nowhere outside of 2-3 games.
Remember, AMD was hurting financially for a long, long time with Bulldozer, and almost went under before Zen 1 got out; really since at least Core/Conroe/Penryn came out, when they had to drop their prices on Phenom to get sales.
Remember too that even before that, AMD overpaid massively for ATI and more or less used up all the financial gains they had made against NetBurst on that buy plus getting the Dresden fab up and running. And if you go back before K7, AMD wasn't doing so well with the later K6-III or K6-II versus the P!!! or PII, since its FPU wasn't so good even if integer performance was solid against those CPUs. And if you go back before THAT, you have to remember the financial troubles they had with K5, which also, BTW, nearly sank the company. Sanders had to go around begging investors for hundreds of millions (in early-to-mid '90s dollars, mind you, so I think it'd be almost a billion today) to keep the company afloat!
They've been drastically financially constrained nearly all the time they've been around!
Certainly up until around Zen 2 or so, if you only want to focus on recent times, when they finally finished paying off a big chunk of the debt load they racked up during Bulldozer. Of course, then they bought the FPGA company Xilinx as soon as they had some cash, which was a $30 billion+ deal, and they're back to being heavily in debt again.
They'd need to spend billions more, which they don't have, to hire the hundreds or thousands of programmers, like NV does, to develop their software to the same degree. It's also why they tend to be more prone to open-sourcing stuff than NV. It's not out of goodwill; they know they can't get it done alone and are hoping things take off through open-source development. Again, they can't do that without MS doing the basic OS-level work for them.
No one can. That is why I mentioned that neither Intel nor QC will support NPUs on Win 10 either.
And you can already run Win 10 without NPU support on an SoC that has one, so your needs are already met, I guess? The bigger deal is chipset support, which is going to go away fast over time, but for now Win 10 is still supported by AMD and others for their chipsets. Not a valid comparison. Intel has their own fabs that produce the lion's share of their products, plus they were willing to outsource some of their products to TSMC.
AMD has no fabs at all. Based on what?
TSMC has been very public about being wafer-constrained on their higher-performing process tech, for several years now at least, and is constantly trying to build more fabs, but THAT takes years to get done too. I think all their 3 and 4 nm production capacity has already been bought out for the entire year, for instance.
So how can AMD buy what TSMC has already sold to Apple and others? Bear in mind too that Apple always gets first pick at TSMC's wafers due to the financial deal those 2 companies have.
And who else can they go to that has a competitive 3, 4, or 5 (or heck, even 7) nanometer process and also has spare capacity to sell? The closest anyone comes is Samsung, and I've already said they're in talks with them to buy wafers. That is, however, a whole different process, and they'd essentially have to redesign their product to support it, which can take 6-12 months and cost hundreds of millions of dollars for a modern high-end process before a single wafer is made!
Samsung's process tech also usually lags behind TSMC's, so it would probably only be lower-to-mid-end products getting produced there, APUs and such, not higher-end desktop or server chips. So even then there would still be supply issues for certain product lines.
The only performance gain you have over last gen is due to having more of said units (and everything else), same as on Ada compared to Ampere. Brute forcing, if you will.
Edit: This is what happens with RDNA 2 vs 3 with an identical number of execution units:
I don't see "40-50% RT performance gain" anywhere.
The whole point is to be faster overall. How you get there exactly (i.e., more cache, clocks, ALUs, whatever) doesn't actually matter, so long as things like cost or heat don't get out of hand, either from a practical product POV or from the POV of the discussion.
But these GPUs are all essentially piles of relatively simple (versus CPUs) ALUs with a heap of cache and lots of bandwidth these days, since they're all brute-forcing parallelism in similar ways. The closest thing I've seen to a way to address that issue is DLSS, FSR, or XeSS. Those are a big deal, certainly much bigger than ray tracing.
If you add more cylinders to a car's engine, the car will be faster overall. You're not getting more "cylinder performance" (that is, performance per cylinder). You'll just have more of them.
Better RT performance means experiencing a smaller performance drop when you enable it compared to when you don't. That is clearly not the case with RDNA 3 vs 2, just like it isn't with Ada vs Ampere.
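A quick illustration of what I mean by "smaller drop"; the FPS figures below are made up purely for the example:

```python
# Hypothetical numbers just to show the metric: the relative cost of enabling RT.
def rt_drop(fps_off: float, fps_on: float) -> float:
    """Return the percentage of performance lost when RT is switched on."""
    return (1 - fps_on / fps_off) * 100

# Example: both generations got faster, but the relative RT hit stayed the same,
# so by this measure neither generation actually "got better at RT".
print(f"Old gen: {rt_drop(100, 60):.0f}% drop with RT on")  # 40% drop
print(f"New gen: {rt_drop(150, 90):.0f}% drop with RT on")  # still a 40% drop
```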
If you're agreeing that performance went up, then there isn't anything else to discuss with regard to my comments to the other guy or the thread.