Tuesday, June 11th 2024

Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings

A curiously described graphics card was spotted by Olrak29 as it made its way through international shipping. The shipment description for the card reads "GRAPHIC CARD NAVI48 G28201 DT XTX REVB-PRE-CORRELATION AO PLATSI TT(SAMSUNG)-Q2 2024-3A-102-G28201." This decodes as a desktop graphics card with the board number "G28201," featuring a maxed-out version of the "Navi 48" silicon, based on the B revision of the PCB, with Samsung-made memory chips, and dated Q2-2024.
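For illustration, here is a minimal Python sketch of that decoding. The field names and regex patterns are inferences from this one string, not AMD's nomenclature:

import re

DESC = ("GRAPHIC CARD NAVI48 G28201 DT XTX REVB-PRE-CORRELATION "
        "AO PLATSI TT(SAMSUNG)-Q2 2024-3A-102-G28201")

patterns = {
    "silicon":  r"NAVI\d+",       # GPU die ("Navi 48")
    "board":    r"G\d{5}",        # board number ("G28201")
    "platform": r"\bDT\b",        # DT = desktop
    "variant":  r"\bXTX\b",       # maxed-out configuration of the die
    "revision": r"REV[A-Z]",      # PCB revision ("REVB")
    "memory":   r"\(([A-Z]+)\)",  # memory vendor in parentheses
    "date":     r"Q\d 20\d\d",    # production quarter
}
for field, pat in patterns.items():
    m = re.search(pat, DESC)
    value = (m.group(1) if m.lastindex else m.group(0)) if m else "?"
    print(f"{field:8s}: {value}")  # e.g. "memory  : SAMSUNG"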

AMD is planning to retreat from the enthusiast segment of gaming graphics cards with the RDNA 4 generation. The company originally entered this segment with the RX 6800 series and RX 6900 series of the RDNA 2 generation, where it saw unexpected success from the crypto-mining market boom, besides being competitive with the RTX 3080 and RTX 3090. That boom had gone bust by the time RDNA 3 and the RX 7900 series arrived, and the chip wasn't competitive with NVIDIA's top end. Around this time, the AI acceleration boom squeezed foundry allocation for all major chipmakers, including AMD, making large chips based on the latest process nodes even less viable for a market such as enthusiast graphics; the company would rather make CDNA AI accelerators with its allocation. Given all this, the company's fastest GPUs from the RDNA 4 generation could be the ones that succeed the current RX 7800 XT and RX 7700 XT, so AMD could capture a slice of the performance segment.
Sources: Olrak29 (Twitter), HXL (Twitter), VideoCardz

73 Comments on Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings

#26
Patriot
Got a few people in here that are qualified to compete against MLID, got that sub 50% accuracy going.
#27
john_
Patriot: Got a few people in here that are qualified to compete against MLID, got that sub 50% accuracy going.
Enlighten us with the knowledge you have that throws those MLID-type predictions at least 50% off. I mean, if you can talk with numbers, I bet you also have concrete arguments and knowledge of what will happen. Right?
#28
Patriot
john_: Enlighten us with the knowledge you have that throws those MLID-type predictions at least 50% off. I mean, if you can talk with numbers, I bet you also have concrete arguments and knowledge of what will happen. Right?
It's pretty straightforward: he was claiming a 40% gain for Zen 5 just a quick second ago. He is part of the perpetual hype train that causes people to be disappointed with the consistent 15% IPC gains we keep on seeing.
Saw it with Zen 3 and RDNA3 as well. The tech rumor mill is always exciting, but people base too much on it and then get disappointed when it turns out to be hyperbolic. That said, I was referring to the troll arf, but if you want to start making stuff up you are welcome to it.
#29
john_
Patriot: It's pretty straightforward: he was claiming a 40% gain for Zen 5 just a quick second ago. He is part of the perpetual hype train that causes people to be disappointed with the consistent 15% IPC gains we keep on seeing.
Saw it with Zen 3 and RDNA3 as well. The tech rumor mill is always exciting, but people base too much on it and then get disappointed when it turns out to be hyperbolic.
He does predictions. Predictions can be spot on or totally wrong. Also, that 40% could be on AI apps. Someone tells him he's seen a 40% uplift, doesn't know at the time that the 40% is in one app, not an average, and here we are with the "40% uplift" rumor. And it's a nice clickbait rumor too, obviously.
I was avoiding his channel because of what was posted almost everywhere about his predictions, but lately I do watch those 10-20 minute videos with "information". They are not that bad. He talks about his sources, which might be real or might not, he does have some arguments that seem to make sense, and he is not as bad as he's been described left and right. I mean, his short videos (not those 2-hour ones you have to be on drugs to watch) are nice to watch, with a grain of salt.
Patriot: That said, I was referring to the troll arf
Yeah, he's been in my block list for the last couple of months. I rarely quote him.
Patriot: but if you want to start making stuff up you are welcome to it.
In my years online I have seen many people jumping into a thread, posting one line of the type "you are all hallucinating", and then disappearing without explanation.
You are not one of them, are you?



PS: And here we have his usual kind of reply.
#30
Random_User
At this point, any affordable, mass-produced video card with the performance of an RX7800XT/7900GRE and superior efficiency is tremendously appreciated. I dare say it is way, way more appreciated, by a much wider target audience (maybe hundreds of thousands, if not millions, of consumers), than e.g. something like a 4080/7900XTX for one and a half grand (real price). And RT performance for this GPU class is just pointless anyway. So, all it takes is to deliver the cards to the end user. It is just sad that, instead of real products, people are being fed endless rumors.

But what is concerning for me in this "Navi 48 XTX" is the "48", which suggests some lower-end SKU, and the "XTX", which usually means the most overclocked and most power-hungry GPU available. But this is just my take.
AusWolf: Misjudgement, or lack of better RT hardware available? Nvidia jumped into the game a lot sooner than AMD, so I'm not blaming AMD for not having much to compete with for two generations. RDNA 2's RT unit is basically an "oh shit, we better do something fast" solution, and the one in RDNA 3 is basically a copy-paste. I don't think AMD would have done the same thing twice if they had anything else at the time. It's probably not because they didn't think better RT was necessary. They just didn't have anything else.
Yes, nVidia jumped to HW RT first. However, AMD's software RT was available for ages. Yet barely anyone was interested in RT. And once JHH said it's the developers' "holy grail", everyone immediately became an apostle of GeForce RTX.
Daven: None of the three companies said anything about next gen GPUs. No Battlemage, no RDNA4 and no Blackwell information was shared.
Exactly. All they want is money to feed their greedy investors. Consumer graphics cards are just a placeholder to maintain the pretense of "being interested" in what is really an abandoned sector. This is depressing. And they all want people to jump onto their streaming subscription platforms, and want that to become the prevailing way of gaming ASAP.
ARF: RX 7000 is a failure no matter how you look at it.
1. Chiplet design was abandoned.
2. Performance targets were not met.
3. Sales are extremely low.
4. Prices are very high.
5. Poor ray-tracing performance.
6. FSR is a joke, with extremely low image quality.
7. Drivers are not released regularly; instead, bugs stay around for quarters without anyone paying attention to fixing them.
8. Maybe they lost interest in the GPU department, and are potentially leaving the market segment?
A failure... at some point.

1. Chiplets are going strong in their MI Instinct line. Look at their success. I don't see anyone complaining there.

2. Performance is not enough, obviously. But what the targets were is yet to be known. The 4090 was the crypto/AI bait, designed for those markets in the first place.

3. Sales are bad due to underwhelming reviews and a bad price. So is nVidia's price formation and proposition, to say the least. The reason nVidia's Ada cards outsold AMD's is purely public mindshare, and sometimes task-specific requirements. I hardly believe everybody who bought an nVidia card needs it for huge compute loads or is going to be a streamer. Not to mention that the pricing required to step up is uncanny-valley territory, for a barely visible performance difference outside the 4090.
However, yes, nVidia checks more tick boxes. Most of them are not compelling though, to be honest.

4. The price is indeed atrocious. This is probably the most crucial factor that has led to the poor sales. Overall, these two factors are extremely interrelated and crucial, and a flop in either will lead to bad results. nVidia will still outsell AMD 9:1 until AMD has enough chip allocation. Still, their grasp on prices is too tight, and they are not going to lower them any time soon. Sadly.

AMD makes a ton of profit on enterprise. They could easily make a positive impression by shifting the entire stack $50 down, making it more favorable in the eyes of consumers, who are being gouged left and right.

5. Poor ray-tracing performance. LOL. Have you seen the green team's results anytime recently? The entire point of DLSS and fake frames is to substitute for the dull RTRT performance. Turn it off, and you barely see any difference to Ampere in raw RT performance.

It has been garbage since the very first day nVidia decided to roll out a GPU with "limited" RT capabilities. The truth is, it will take several more generations until RT capabilities actually reach a minimally acceptable level of real-time ray tracing. Until then, any current RT solution is a complete joke, regardless of brand name. There's a reason why cinema and CGI use entire render farms to render a particular amount of scripted scenes.

6. FSR is an upscaler. All upscalers stretch a low resolution onto a big screen. It doesn't take a science degree to understand that this is a fundamentally flawed technology. No matter what seasoning one adds, the taste still remains garbage. I don't see what miracle people expect from a technology that is inherently inferior to native high resolution. It's even possible to run a low resolution on a big screen natively; the upscalers are supposed to just improve the already inferior image.

The only reason why nVidia's solution is much better is the result of huge AI training and the huge amount of compute resources put into it. There's no magic. Without it, it would be as ugly as FSR. But at least FSR and XeSS are open, and do not require someone's permission to use.

7. Most of the reviews seem to show that AMD software is much less of an issue lately. And reddit being mostly "calm" about this can be an "indication" of that. The drivers seem to be much better than before, though they might not be flawless yet. But so is nVidia, which has some issues. The main remaining issue with AMD software and drivers is the high media-playback power consumption.

8. This is actually completely true. All three of nVidia, AMD and Intel are not interested in consumer "gamer" entertainment multipurpose video cards. What they are interested in are insane, never-before-possible profit margins. And right now the source of those is AI. The greed has consumed them all.
The sheer silence from AMD about their consumer GPU status is just another indication that they treat Radeon as the least of their priorities. But again, this applies to all three of them.

But at least they don't build up much false hope. They'd better do their job silently and make RDNA5 a real milestone, rather than run Vega-like advertising campaigns only to "sit in a puddle" with an underperforming product later. The hype train can go off the rails and end in a wreck.
#31
95Viper
Stick to discussing the topic... not each other.
Discuss with civility.
#32
Icon Charlie
Vayra86: What are you smoking?

Lisa Su made AMD soar to unforeseen heights.

If you look at the net profits from the recent financials, they made 100 million; however, 50+ million of that was tax incentives.
Too much overhead for my liking. Also, AMD recently bit into the AI craze just like the rest of the Magnificent 7 stocks.

They need to perform well with their CPUs and not cost an arm and a leg in the process to upgrade.
#33
Minus Infinity
AVATARAT: Are you sure?
Only for RDNA4, definitely not for RDNA5. It wasn't even fully abandoned, as the lower-tier cards don't use chiplets anyway. RDNA4 had 20 chiplets for the flagship vs 7 for RDNA3 and was proving difficult to get working. They decided to abandon the high end in RDNA4 and give more resources to the RDNA5 team, which was working in parallel and making good progress. If they had persisted with flagship RDNA4 it would have meant RDNA5 being delayed considerably.

Maybe they jumped the gun on chiplets with RDNA3, but even Nvidia will most likely switch with Blackwell's successor. High-NA EUV cannot produce chips bigger than ~429 mm², much smaller than low-NA (~858 mm²), and Nvidia is up against that limit already.
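For reference, those two figures follow directly from the standard EUV reticle field, assuming the usual 26 mm × 33 mm full field, which high-NA anamorphic optics halve in one direction:

26 mm × 33 mm = 858 mm² (low-NA full reticle field)
26 mm × 16.5 mm = 429 mm² (high-NA half field)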
#34
AusWolf
Minus Infinity: Only for RDNA4, definitely not for RDNA5. It wasn't even fully abandoned, as the lower-tier cards don't use chiplets anyway. RDNA4 had 20 chiplets for the flagship vs 7 for RDNA3 and was proving difficult to get working. They decided to abandon the high end in RDNA4 and give more resources to the RDNA5 team, which was working in parallel and making good progress. If they had persisted with flagship RDNA4 it would have meant RDNA5 being delayed considerably.

Maybe they jumped the gun on chiplets with RDNA3, but even Nvidia will most likely switch with Blackwell's successor. High-NA EUV cannot produce chips bigger than ~429 mm², much smaller than low-NA (~858 mm²), and Nvidia is up against that limit already.
I knew I'd read about this somewhere! Now there's two of us saying the same. Although, I haven't been able to find the source since. Do you have it, by any chance?
#35
Minus Infinity
AusWolf: I knew I'd read about this somewhere! Now there's two of us saying the same. Although, I haven't been able to find the source since. Do you have it, by any chance?
Which bit, the die size or about RDNA4? Die size info is widely available for current low NA and new high NA EUV. The stuff about RDNA4 troubles is from what I've heard from a couple of sites claiming it was from insiders who were refuting claims AMD just abandoned the high end for more pricey AI hardware like MI300. It sounds plausible to me. I highly doubt AMD is just giving up only to launch high-end RDNA5 late next year. The 5090 will get fanboy hype, but in the real world it's mid-tier or lower that most people buy.

Now, if the RDNA4 8800XT can match or exceed the 7900XT in raster as many leaks claim, has a nice bump in RTing, and comes with the vastly better FSR 3.1 and possibly the new AI-based FSR 4.0, it should do very well at $550-600 or so (16GB and 256-bit). If true, expect fire sales on the 7900XT or a restriction in 8800XT supply until 7900XT stock dries up. AMD really has to deliver a big bump in raster and an even bigger bump in RTing though to survive with RDNA4. The 7600, 7700 and 7800 are bitterly disappointing updates IMO and do not deserve their names. Not that Nvidia is any different with the 4060 crap.
#36
AusWolf
Minus Infinity: Which bit, the die size or about RDNA4? Die size info is widely available for current low NA and new high NA EUV. The stuff about RDNA4 troubles is from what I've heard from a couple of sites claiming it was from insiders who were refuting claims AMD just abandoned the high end for more pricey AI hardware like MI300. It sounds plausible to me. I highly doubt AMD is just giving up only to launch high-end RDNA5 late next year. The 5090 will get fanboy hype, but in the real world it's mid-tier or lower that most people buy.

Now, if the RDNA4 8800XT can match or exceed the 7900XT in raster as many leaks claim, has a nice bump in RTing, and comes with the vastly better FSR 3.1 and possibly the new AI-based FSR 4.0, it should do very well at $550-600 or so (16GB and 256-bit). If true, expect fire sales on the 7900XT or a restriction in 8800XT supply until 7900XT stock dries up. AMD really has to deliver a big bump in raster and an even bigger bump in RTing though to survive with RDNA4. The 7600, 7700 and 7800 are bitterly disappointing updates IMO and do not deserve their names. Not that Nvidia is any different with the 4060 crap.
Sorry about the confusion. I meant the bit about RDNA 4 being monolithic due to AMD working on it in parallel with RDNA 5 - which makes RDNA 4 kind of a stop-gap solution, which I think is fine, as long as it delivers on the price-to-performance metric. The 7900 XT still being sold shouldn't be an issue, either. AMD can just lower its price a bit to make it a more enticing buy. Or leave it as it is and let stores deal with the inventory collecting dust, kind of like Nvidia does.

Personally, what I want from RDNA 4 is a bit better RT - not much, just enough to run light RT games like Avatar: FoP decently without FSR - and improved idle / video playback power consumption, all paired with the promised 7900 XT-like performance. I'll be looking for a new GPU when it comes out anyway, and if it delivers on these fronts, I'll be happy to get an 8800 XT or whatever name AMD comes up with for it.
#37
Broken Processor
I'm feeling really uncomfortable with hardware releases currently. I was hoping to upgrade my 5800X3D and 6800 XT this year, maybe first half of next.

My issue is that 8 cores isn't enough. I'd like 16 X3D cores, but that's very unlikely, so Intel it probably is if Arrow Lake is good. Graphics-wise it's got to be Nvidia this time: owning 3 7900XTX cards of which not one was decent left a very sour taste in my mouth, so a 5090, or possibly a 5080, and if that's not good enough then a 4090 will be my upgrade path, because I'm not waiting goodness knows how long it will take AMD to get MCD working. I'm guessing that lack of investment in the GPU division is really biting hard, especially now with Nvidia slingshotting to 2nd richest company on earth on the back of AI, yet another thing AMD is missing out on. AMD needs to learn you've got to be right most of the time and to be nimble, or Nvidia will keep eating their lunch.
#38
AusWolf
Broken Processor: I'm feeling really uncomfortable with hardware releases currently. I was hoping to upgrade my 5800X3D and 6800 XT this year, maybe first half of next.

My issue is that 8 cores isn't enough. I'd like 16 X3D cores, but that's very unlikely, so Intel it probably is if Arrow Lake is good. Graphics-wise it's got to be Nvidia this time: owning 3 7900XTX cards of which not one was decent left a very sour taste in my mouth, so a 5090, or possibly a 5080, and if that's not good enough then a 4090 will be my upgrade path, because I'm not waiting goodness knows how long it will take AMD to get MCD working. I'm guessing that lack of investment in the GPU division is really biting hard, especially now with Nvidia slingshotting to 2nd richest company on earth on the back of AI, yet another thing AMD is missing out on. AMD needs to learn you've got to be right most of the time and to be nimble, or Nvidia will keep eating their lunch.
I'm not saying that I completely disagree with you, but I'm curious: what do you need 16 X3D cores for? There isn't a single game to my knowledge that needs more than 8, and productivity is hurt by the reduced clock speeds of X3D.

As for AMD missing out on AI, have you heard of XDNA?
#39
Random_User
Broken Processor: I'm feeling really uncomfortable with hardware releases currently. I was hoping to upgrade my 5800X3D and 6800 XT this year, maybe first half of next.

My issue is that 8 cores isn't enough. I'd like 16 X3D cores, but that's very unlikely, so Intel it probably is if Arrow Lake is good. Graphics-wise it's got to be Nvidia this time: owning 3 7900XTX cards of which not one was decent left a very sour taste in my mouth, so a 5090, or possibly a 5080, and if that's not good enough then a 4090 will be my upgrade path, because I'm not waiting goodness knows how long it will take AMD to get MCD working. I'm guessing that lack of investment in the GPU division is really biting hard, especially now with Nvidia slingshotting to 2nd richest company on earth on the back of AI, yet another thing AMD is missing out on. AMD needs to learn you've got to be right most of the time and to be nimble, or Nvidia will keep eating their lunch.
I understand that you would like to get some performance uplift. But I hardly see any point in upgrading right now. Even if the 7800X3D presumably remains the top chip until the end of the year, the initial pricing of a new part would be atrocious anyway. Also, it's probably a better idea to wait a bit more for the X800 series motherboards. Although the chipset is mostly the same, the inclusion of USB4 and the overall better layout and feature set are worth waiting for, IMHO.
As for having the extra cores: this is mostly a moot point, since, as mentioned before, games, even now in 2024, with some exceptions, do not use more than eight cores anyway.
What actually matters for gaming performance is the IPC uplift per core, and the throne so far belongs to the octa-core CPU. So there's no point in getting an extra CCD, since AMD isn't going to make 16-core X3D CPUs with both CCDs having 3D V-Cache. Your 5800X3D and the 7800/9800X3D still have that uniformity, and work much better with the Windows scheduler anyway.
As for upgrading to a 4090, I don't think it's viable, as, depending on location, prices will remain inflated to "extortion level" up until the 5090 release, or even until the EOL phase. Maybe second hand, but again, people do not sell those willingly. This is the sad state of the HW market.
AusWolf: I'm not saying that I completely disagree with you, but I'm curious: what do you need 16 X3D cores for? There isn't a single game to my knowledge that needs more than 8, and productivity is hurt by the reduced clock speeds of X3D.

As for AMD missing out on AI, have you heard of XDNA?
Yes. But the advantage of the X3D cores seems to be reduced by the "ordinary" cores' heat due to the common thermal envelope. You thus get a CPU which is unable to show its benefit in either kind of task. Jack of all trades, master of none. Not to mention that the Windows scheduler doesn't like such a hybrid approach. Intel even had to make special SW profiles in order to make certain games use the different cores correctly. And I doubt many game developers are eager to invest more time into scheduling for exotic CPUs, while their target audience is going to be using eight cores for at least a couple of years into the future.
#40
Broken Processor
AusWolf: I'm not saying that I completely disagree with you, but I'm curious: what do you need 16 X3D cores for? There isn't a single game to my knowledge that needs more than 8, and productivity is hurt by the reduced clock speeds of X3D.

As for AMD missing out on AI, have you heard of XDNA?
The problem is I don't want to deal with AMD scheduling, and I mainly game but do other workloads where 8 cores is no longer enough.
#41
AusWolf
Broken Processor: The problem is I don't want to deal with AMD scheduling, and I mainly game but do other workloads where 8 cores is no longer enough.
Then I guess you'll have to decide which one is more important for you: not having to deal with the scheduler, or multi-core performance. I'm not saying it's an ideal situation, but it is what it is unfortunately.

Edit: You could also get a non-X3D CPU and call it a day.
#42
phubar
john_: RX 7000 not having significantly better RT performance than RX 6000 led to lost market share.
This isn't true.

RDNA3 is typically 40-50% faster at ray tracing than RDNA2. Still behind Nvidia's 4000 series but how is that not significantly better?
john_: Zen 5 X3D chips coming to AM5 much later than the platform introduction, which resulted in a lukewarm reception of the platform by press and public.
X3D chips always come later. They came MUCH later for Zen3! Zen4 X3D chips were actually launched much faster than for Zen3. Zen5 X3D chips are coming even sooner and might be out before the end of the year.
john_: AM5 not supporting DDR4 because someone predicted that DDR5 will be ultra cheap soon.
???? What?!? Are you trolling or something? AM5 only supports DDR5 because AMD would've had to do an IO die that supported both memory standards, and they considered that too burdensome since the IOD was already getting too big and hot for DDR5 alone. That, and they were launching their DDR5 platform well after Intel's, when DDR5 was supposed to drop in price, which is normal for them. They usually lag Intel on adopting new memory standards, and it's generally not considered a big issue since the new memory standard is usually expensive and not much faster than the previous one at introduction.
john_: Still not much from their software department.
Yeah their software development efforts are a joke but have been for decades unfortunately.

That isn't a new problem.

The issue here is they'd have to spend giant sums of money they probably don't have, for years, to see any real improvement, and it's all they can do to fund R&D of new CPUs and GPUs along with working to get chipsets done with ASMedia. They just don't have the resources.
john_: Just yesterday I was reading that the new Ryzen AI will only support Windows 11 and not Windows 10; that's also stupid.
That isn't an AMD issue. The libraries and OS support just flat out aren't there for Win10, which is nearing EoL status.

Blame MS.

It's the same for Intel and QC as well. NPU support for Win10 is dead on arrival.
john_: Not securing enough capacity at TSMC to take advantage of all the opportunities they had.
They have to compete with Apple, Nvidia, and all the other companies for wafers. There isn't much they can do here. They certainly can't get a fab going again themselves; they don't have the resources for it. Neither does their old partner/spin-off GF, which threw in the towel at 16/12nm. They do seem to be making overtures for wafers from Samsung, but I don't know if that is going to go anywhere. The point is they are trying, but everyone is fab-limited right now if you want something as good as or competitive with TSMC's best 5, 4, and 3nm processes.
#43
john_
phubar: This isn't true.

RDNA3 is typically 40-50% faster at ray tracing than RDNA2. Still behind Nvidia's 4000 series but how is that not significantly better?
It is true and your numbers are not correct.
phubar: X3D chips always come later. They came MUCH later for Zen3! Zen4 X3D chips were actually launched much faster than for Zen3. Zen5 X3D chips are coming even sooner and might be out before the end of the year.
They came much later for Zen 3 because the tech didn't exist. They got delayed just too much with the AM5 platform. And yes, I know the rumors about September for Zen 5 X3D. I guess AMD agrees with me on this one.
phubar: ???? What?!? Are you trolling or something? AM5 only supports DDR5 because AMD would've had to do an IO die that supported both memory standards, and they considered that too burdensome since the IOD was already getting too big and hot for DDR5 alone. That, and they were launching their DDR5 platform well after Intel's, when DDR5 was supposed to drop in price, which is normal for them. They usually lag Intel on adopting new memory standards, and it's generally not considered a big issue since the new memory standard is usually expensive and not much faster than the previous one at introduction.
I am thinking the same thing reading your post. Coincidence?
AMD did that trick with AM3 CPUs that could be used with DDR2 and DDR3. Guess what: that was an advantage for those chips. You had an old DDR2 AM2+ motherboard and 8GB of fast DDR2? Just throw a new 6-core Thuban on it. You change the platform later.
Intel did this with its Hybrid CPUs. It won market share.
You think winning market share and giving consumers options is trolling? AMD did it 15 years ago. Intel did it recently. Probably it's not that hard integrating two memory controllers on a CPU. The rest of your post is again some history lesson, which is not really an argument. In fact, that "supposed to drop" you wrote agrees with what I said, and you thought it was trolling.
phubar: Yeah their software development efforts are a joke but have been for decades unfortunately.

That isn't a new problem.

The issue here is they'd have to spend giant sums of money they probably don't have, for years, to see any real improvement, and it's all they can do to fund R&D of new CPUs and GPUs along with working to get chipsets done with ASMedia. They just don't have the resources.
They didn't have such a problem in the past because things were simpler then. Today they have to throw money at software, whether they have that money or not. And they do have the money. They just don't know how to use it when it comes to software. They lack the experience.
phubar: That isn't an AMD issue. The libraries and OS support just flat out aren't there for Win10, which is nearing EoL status.

Blame MS.

It's the same for Intel and QC as well. NPU support for Win10 is dead on arrival.
You don't need to support all the CPU features on Win 10. Just make it work in Win 10. If someone hates the idea of AI and NPU and whatever Win 11 represents, give them the option to run Win 10 with the NPU disabled.
phubar: They have to compete with Apple, Nvidia, and all the other companies for wafers. There isn't much they can do here. They certainly can't get a fab going again themselves; they don't have the resources for it. Neither does their old partner/spin-off GF, which threw in the towel at 16/12nm. They do seem to be making overtures for wafers from Samsung, but I don't know if that is going to go anywhere. The point is they are trying, but everyone is fab-limited right now if you want something as good as or competitive with TSMC's best 5, 4, and 3nm processes.
Intel, with all those problems lately, did find wafers for Arrow Lake. I am pretty sure AMD could also find a few extra wafers. But AMD prefers to play it safe, to not have to pay penalties for unallocated wafers like it did with GF, or end up with huge inventory. But playing it safe 24/7 limits what someone can achieve, ending up stagnant with no real growth. I mean, look at AMD's financials today. If you remove Xilinx's part, AMD hasn't gotten any bigger the last couple of years.
#44
AusWolf
phubar: RDNA3 is typically 40-50% faster at ray tracing than RDNA2. Still behind Nvidia's 4000 series but how is that not significantly better?
Huh?
#45
phubar
john_: It is true and your numbers are not correct.
chipsandcheese.com/2023/03/22/raytracing-on-amds-rdna-2-3-and-nvidias-turing-and-pascal/

Spend some time reading that. They go over all the details for you. Long story short, they didn't get anywhere near the 80% improvement AMD was claiming, but 40-50% faster on micro-benches does show it's got real improvements vs RDNA2.
AusWolf: Huh?
You know better than to judge a video card based on 1 game's performance. Especially one that is known for running poorly on almost all hardware.

Top-end RDNA3 can come close to a 3090Ti on ray tracing, which is a big step up over RDNA2 even if it's still less than NV's 4000 series.
john_: They came much later for Zen 3 because the tech didn't exist.
I guess those Milan-X 7003 Epycs that got launched months before the X3D chips just never existed then, huh?

And how does what you're saying bolster your comments about this somehow being a screw-up on AMD's part? Bear in mind that it was still very new packaging tech and they were dependent on TSMC to really make it all work too. Further bear in mind that AMD makes more money on server CPUs than desktop parts, so it makes lots of sense to target them first and foremost as a business decision.

Whether I or you like that doesn't really matter here since AMD can't ignore their bottom line.
john_: AMD did that trick with AM3 CPUs that could be used with DDR2 and DDR3.
Did AM3 CPUs have a separate IOD like AM4/5 CPUs do?

If not then the comparison doesn't make sense.

To me, supporting DDR4/5 on the same socket would've been nice, but it's not a big deal, and there were real technical issues for AMD pulling it off with the split-die approach they've gone for. Remember the IOD is on a larger process, which helps with cost and platform flexibility, but uses quite a bit of power. Ignoring or downplaying that isn't reasonable.
john_: 6-core Thuban on it. You change the platform later. Intel did this with its Hybrid CPUs. It won market share.
Intel won market share because all AMD had was PhenomII Thubans or Propus vs Intel's Core i7 2600K SandyBridge or i5 661 Clarkdales, which, let's face it, were much better CPUs that also tended to get good to great overclocks.

It had nothing to do with memory standard support.
john_: You think winning market share and giving consumers options is trolling?
If you're ignoring the practical realities and business limitations that AMD is facing to reach your conclusions then yes.

At best if you're not trolling then you're just making stuff up out of nowhere to form your opinions. Which isn't reasonable either.
john_: Intel did it recently. Probably it's not that hard integrating two memory controllers on a CPU.
Sure, they did it on the 12xxx series CPUs, which weren't at all known for good power consumption either at load or idle.

The power issues with that CPU can't all be blamed on the dual memory support but it sure didn't help!
john_: In fact, that "supposed to drop" you wrote agrees with what I said, and you thought it was trolling.
So DDR5 didn't drop in price by the time AMD launched AM5 in late 2022 vs when Intel started supporting it in late 2021 with AlderLake? Or continue falling in 2023?

cdn.wccftech.com/wp-content/uploads/2022/06/1-1080.03056867.png

You're trying way too hard to read whatever you want into my previous comment. The "supposed to drop" was what AMD was saying at the time as part of the reason for delaying DDR5 support, and hedging your bets with a forward-looking statement is common when doing marketing. I recall many were fine with this since DDR5 was terribly expensive at first and didn't offer much in the way of performance benefit, since early DDR5 didn't reach the higher clocks or lower timings it can now vs the top-tier DDR4 3600/3200 many were still using back then.
john_: They didn't have such a problem in the past because things were simpler then.
Sure they did. That is why they hardly developed their GPGPU software or compilers for any of the GCN variants or Terascale either. And why 3DNow! went virtually nowhere outside of 2-3 games.

Remember AMD was hurting financially for a long, long time with Bulldozer, really since at least Core/Conroe/Penryn came out, and almost went under before Zen1 got out; they had to drop their prices on Phenom to get sales.

Remember too that even before that, AMD overpaid massively for ATi and more or less used up all the financial gains they made vs Netburst on that buy plus getting the Dresden fab up and running. And if you go back before K7, AMD was doing not so well with the later K6-III or K6-2s vs the P!!! or PII, since their FPU wasn't so good even if integer performance was solid vs those CPUs. And if you go back before THAT, you've got to remember the financial troubles they had with K5, which also, BTW, nearly sank the company. Sanders had to go around begging investors for hundreds of millions (in early-to-mid '90s dollars, mind you, so I think it'd be almost a billion today) to keep the company afloat!

They've been drastically financially constrained nearly all the time they've been around!

Certainly up until around Zen2 or so, if you only want to focus on recent times, when they finally finished paying off a big chunk of the debt load they racked up during Bulldozer. Of course, then they bought the FPGA company Xilinx as soon as they had some cash, which was a $30 billion+ deal, and they're back to being heavily in debt again.

They'd need to spend billions more they don't have to hire the hundreds or thousands of programmers, like NV does, to develop their software up to the same degree, and they just don't have the money. It's also why they tend to be more prone to open-sourcing stuff vs NV. It's not out of goodwill. They know they can't get it done alone and are hoping things take off with open-sourced development.
john_: You don't need to support all the CPU features on Win 10. Just make it work in Win 10. If someone hates the idea of AI and NPU and whatever Win 11 represents, give them the option to run Win 10 with the NPU disabled.
Again they can't do that without MS doing the basic OS level work for them.

No one can. That is why I mentioned that neither Intel or QC will support NPU's on win10 either.

And you can already run Win10 without NPU support on a SoC that has one, so your needs are already met, I guess? The bigger deal is chipset support, which is going to go away fast over time, but for now Win10 is still supported by AMD and others for their chipsets.
john_: Intel with all those problems lately did find wafers for Arrow Lake.
Not a valid comparison. Intel has their own fabs that produce the lion's share of their products, plus they were willing to outsource some of their product to TSMC.

AMD has no fabs at all.
john_: I am pretty sure AMD could also find a few extra wafers.
Based on what?

TSMC is very public about being typically wafer constrained for their higher performing process tech, for several years now at least, and is constantly trying to build more fabs but THAT takes years to get one done too. I think all their 3 and 4nm production capacity has already been bought out for the entire year for instance.

So how can AMD buy what TSMC has already sold to Apple and others? Bear in mind too that Apple always gets first pick at TSMC's wafers due to the financial deal those 2 companies have.

And who else can they go to that has a competitive 3 or 4 or 5, or heck even 7, nanometer process that also has spare capacity they can buy? The closest anyone comes is Samsung and I've already said they're in talks with them to buy wafers. That is however a whole different process and they'd have to essentially redesign their product to support it. Which can take 6-12 months and cost hundreds of millions of dollars for a modern high end process before a single wafer is made!

Samsung's process tech also usually lags behind TSMC's. So it'd probably only be lower- to mid-end products that get produced there, APUs and such, not higher-end desktop or server chips. So even then there would still be supply issues for certain product lines.
#46
AusWolf
phubar: You know better than to judge a video card based on 1 game's performance. Especially one that is known for running poorly on almost all hardware.

Top-end RDNA3 can come close to a 3090Ti on ray tracing, which is a big step up over RDNA2 even if it's still less than NV's 4000 series.
It's not just one game, I only picked an example. RDNA 3 loses the same amount of performance when you enable RT as RDNA 2, which is no surprise considering that their RT units are almost identical.

The only performance gain you have over last gen is due to having more of said units (and everything else), same as on Ada compared to Ampere. Brute forcing, if you will.

Edit: This is what happens with RDNA 2 vs 3 with an identical number of execution units:

I don't see "40-50% RT performance gain" anywhere.
#47
Vayra86
phubar: This isn't true.

RDNA3 is typically 40-50% faster at ray tracing than RDNA2. Still behind Nvidia's 4000 series but how is that not significantly better?
No, it's really not. Check your facts before you post them as such. Especially if you double down on them...
phubar: Top-end RDNA3 can come close to a 3090Ti on ray tracing, which is a big step up over RDNA2 even if it's still less than NV's 4000 series.
Correct, because the cards also became faster overall.
#48
phubar
AusWolf: It's not just one game, I only picked an example. RDNA 3 loses the same amount of performance when you enable RT as RDNA 2, which is no surprise considering that their RT units are almost identical.

The only performance gain you have over last gen is due to having more of said units (and everything else), same as on Ada compared to Ampere. Brute forcing, if you will.
If you want to keep execution units the same, then yes, what you're saying is true. BUT since they did add more, ignoring that is kind of weird.

The whole point is to be faster overall. How you get there exactly (i.e. more cache, clocks, ALUs, whatever) doesn't actually matter, so long as things like cost or heat don't get out of hand, from either a practical product POV or from the POV of the discussion.

But these GPUs are all essentially piles of relatively simple (vs CPUs) ALUs with a heap of cache and lots of high bandwidth these days, since they're all brute-forcing for parallelism in similar manners. The closest thing I've seen to a way to address that issue is DLSS, FSR, or XeSS. Those are a big deal. Certainly much bigger than ray tracing.
#49
AusWolf
phubar: If you want to keep execution units the same, then yes, what you're saying is true. BUT since they did add more, ignoring that is kind of weird.

The whole point is to be faster overall. How you get there exactly doesn't actually matter from either a practical product POV or from the POV of the discussion.
All I'm saying is, being faster overall is not the same as having better RT performance. Yes, RDNA 3 cards are faster overall because they have more execution units, but they don't have better RT performance.

If you add more cylinders to a car's engine, the car will be faster overall. You're not getting more "cylinder performance" (that is, performance per cylinder). You'll just have more of them.

Better RT performance means experiencing a smaller performance drop when you enable it compared to when you don't. That is clearly not the case with RDNA 3 vs 2, just like it isn't with Ada vs Ampere.
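To put that metric in concrete terms, here is a minimal Python sketch with made-up FPS numbers, purely for illustration:

def rt_penalty(fps_raster: float, fps_rt: float) -> float:
    # Fraction of performance lost when RT is enabled.
    return 1.0 - fps_rt / fps_raster

# Hypothetical cards: the newer one is 30% faster overall, yet loses
# the same fraction with RT on, so its RT hardware hasn't improved.
print(rt_penalty(100.0, 55.0))  # 0.45 -> a 45% drop
print(rt_penalty(130.0, 71.5))  # 0.45 -> the same 45% drop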
#50
phubar
AusWolf: All I'm saying is, being faster overall is not the same as having better RT performance.
Yes, but all I've been saying in this thread is that performance went up, not quibbling about exactly what execution unit numbers, clock speeds, etc. are involved. I don't care if it's brute force, since it's all already brute force.

If you're agreeing that performance went up, then there isn't anything else to discuss with regard to my comments to the other guy or the thread.