
Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings

Joined
Apr 28, 2023
Messages
45 (0.07/day)
All I'm saying is, being faster overall is not the same as having better RT performance.
Yes, but all I've been saying in this thread is that performance went up, not quibbling about exactly what execution unit counts, clock speeds, etc. are involved. I don't care if it's brute force, since it's all already brute force.

If you're agreeing that performance went up, then there isn't anything else to discuss with regard to my comments to the other guy or the thread.
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10

Spend some time reading that. They go over all the details for you. Long story short, they didn't get anywhere near the 80% improvement AMD was claiming, but 40-50% faster on micro benches does show it's got real improvements vs RDNA1.
RDNA1?
And NO it's NOT 40-50% faster.

You know better than to judge a video card based on 1 game's performance. Especially one that is known for running poorly on almost all hardware.

Top end RDNA3 can come close to a 3090 Ti on ray tracing, which is a big step up over RDNA2, even if it's still less than NV's 4000 series.
The top RDNA3 cards are just BIGGER dies compared to RDNA2 GPUs, and that's why they seem to improve in RT performance. Because they are BIGGER chips. The improvements there are a result of bigger GPUs, not faster RT.

I guess those Milan X 7003 Epycs that got launched months before the X3D chips just never existed then, huh?
Are you intentionally pretending not to understand what I am posting? Am I wasting my time with you? (obviously)
5800X 2020, Milan X and 5800X3D 2022.

And how does what you're saying bolster your comments about this somehow being a screw-up on AMD's part? Bear in mind that it was still very new packaging tech, and they were dependent on TSMC to really make it all work too. Further bear in mind that AMD makes more money on server CPUs than desktop parts, so it makes lots of sense to target them first and foremost as a business decision.

Whether I or you like that doesn't really matter here since AMD can't ignore their bottom line.
Excuses are rarely arguments. In this reply of yours, they are not.
And it is a screw up on AMD's part because the reception of AM5 was lukewarm for a number of reasons. AM5 became exciting again when the X3D models came out. If AMD had come out with X3D models quickly, the reception of AM5 platform would have been much better.
Did AM3 CPUs have a separate IOD like AM4/AM5 CPUs do?

If not then the comparison doesn't make sense.
You not being able to understand is also NOT an argument. And using an IOD makes it easier.
I think supporting DDR4/5 on the same socket would've been nice, but it's not a big deal, and there were real technical issues for AMD in pulling it off with the split-die approach they've gone for. Remember the IOD is on a larger process, which helps with cost and platform flexibility, but uses quite a bit of power. Ignoring or downplaying that isn't reasonable.
It's not a big deal that the Intel platform was seen as cheaper, at a time when Intel was also advertising a higher number of cores? Nice one.
The rest of your post is speculation that doesn't make sense. Something that could be done in 2008 can be done more easily today. And it had nothing to do with power. A mem controller not in use just shuts down.
Intel won market share because all AMD had was Phenom II Thubans or Propus vs Intel's Core 2s, Sandy Bridge i7-2600Ks, or Clarkdale i5-661s, which, let's face it, were much better CPUs that also tended to get good to great overclocks.

It had nothing to do with memory standard support.
You totally lost it here. When I talk about Intel and mention Hybrid CPUs, obviously I don't mean the Phenom era. Try to READ what others reply to you.

If you're ignoring the practical realities and business limitations that AMD is facing to reach your conclusions then yes.

At best, if you're not trolling, then you're just making stuff up out of nowhere to form your opinions. Which isn't reasonable either.
So far, you are the only one showing indications of trolling.
AMD's practical realities ended the moment it could offer $30-40 billion in shares and cash for Xilinx. We are not in 2015.
Sure, they did it on the 12xxx series CPUs, which weren't at all known for good power consumption either at load or idle.

The power issues with that CPU can't all be blamed on the dual memory support, but it sure didn't help!
Dual memory support had nothing to do with it. As I said, when a mem controller is not used, it just shuts down. We are not in 1999, when the whole chip was constantly on. Modern CPUs shut down the parts not in use. The problem with Intel and power consumption is simply process. Take a Ryzen 7950X and build it on Intel's 7nm process. See where power consumption goes.
So DDR5 didn't drop in price between when Intel started supporting it in late 2021 with Alder Lake and when AMD launched AM5 in late 2022? Or continue falling in 2023?


You're trying way too hard to read whatever you want into my previous comment. The "supposed to drop" was what AMD was saying at the time as part of the reason for delaying DDR5 support, and hedging your bets with a forward-looking statement is common in marketing. I recall many were fine with this, since DDR5 was terribly expensive at first and didn't offer much performance benefit: early DDR5 didn't reach the higher clocks or lower timings it can now vs the top-tier DDR4 3600/3200 many were still using back then.
My God. Are you 12? Don't show me how much DDR5 dropped; compare the price of (for example) 32GB of DDR4 3200 with 32GB of DDR5 and tell me there is no significant difference in price. You think DDR4 stayed the same?
You are trying too hard to ignore logic and fabricate non-existent arguments.
Sure they did. That was why they hardly developed their GPGPU software or compilers for any of the GCN variants or TeraScale. And why 3DNow! went virtually nowhere outside of 2-3 games.

Remember, AMD was hurting financially for a long, long time with Bulldozer (really since at least Core/Conroe/Penryn came out) and almost went under before Zen1 got out. They had to drop their prices on Phenom to get sales.

Remember too that even before that, AMD overpaid massively for ATi and more or less used up all the financial gains they made vs Netburst on that buy plus getting the Dresden fab up and running. And if you go back before K7, AMD was not doing so well with the later K6-II or III vs the PII or PIII, since its FPU wasn't so good even if its integer performance was solid against those CPUs. And if you go back before THAT, you've got to remember the financial troubles they had with K5, which also BTW nearly sank the company. Sanders had to go around begging investors for hundreds of millions (in early-to-mid-90s dollars, mind you, so I think it'd be almost a billion today) to keep the company afloat!

They've been drastically financially constrained nearly all the time they've been around!

Certainly up until around Zen2 or so, if you only want to focus on recent times, when they finally finished paying off a big chunk of the debt load they racked up during Bulldozer. Of course, then they bought the FPGA company Xilinx as soon as they had some cash, a $30 billion+ deal, and now they're back to being heavily in debt again.

They'd need to spend billions more they don't have to hire the hundreds or thousands of programmers, like NV does, to develop their software to the same degree, and they just don't have the money. It's also why they tend to be more prone to open sourcing stuff vs NV. It's not out of goodwill. They know they can't get it done alone and are hoping things take off with open-source development.
You still seem to have problems understanding what the other person says or what the subject is. Or maybe it's my English.
AMD didn't really have many problems in the past (10+ years ago) with software. The only problem they had was that developers were building games on Intel+Nvidia hardware, so when the games came out, Radeon cards were either not performing optimally and/or had to deal with bugs. That led to AMD's bad reputation in drivers. Other than that, AMD was fine, and Nvidia was trying hard to differentiate its hardware with PhysX and GameWorks libraries.
The rest of the history lesson, presented as excuses, is (again) NOT an argument.
Again, they can't do that without MS doing the basic OS-level work for them.

No one can. That is why I mentioned that neither Intel nor QC will support NPUs on win10 either.

And you can already run win10 without NPU support on a SoC that has one, so your needs are already met, I guess? The bigger deal is chipset support, which is going to go away over time, but for now win10 is still supported by AMD and others for their chipsets.
You keep reading and misinterpreting my posts. AMD builds the CPUs, you know. Not MS. Win10 has been out there for 10 years now. AMD can build hardware that is compatible with both Win10 and Win11. Not being able to use the NPU under Win10 could be seen as an advantage from the point of view of people who hate the idea of AI in their OS.
Not a valid comparison. Intel has their own fabs, which produce the lion's share of their products, plus they were willing to outsource some of their product to TSMC.

AMD has no fabs at all.
But they DID find wafers available from TSMC. It's not that TSMC told them, "Sorry, no capacity left for you."
Based on what?

TSMC has been very public about being typically wafer-constrained on their higher-performing process tech, for several years now at least, and is constantly trying to build more fabs, but THAT takes years too. I think all their 3 and 4nm production capacity has already been bought out for the entire year, for instance.

So how can AMD buy what TSMC has already sold to Apple and others? Bear in mind too that Apple always gets first pick of TSMC's wafers due to the financial deal those two companies have.

And who else can they go to that has a competitive 3 or 4 or 5, or heck even 7, nanometer process that also has spare capacity they can buy? The closest anyone comes is Samsung, and I've already said they're in talks with them to buy wafers. That is however a whole different process, and they'd essentially have to redesign their product to support it. Which can take 6-12 months and cost hundreds of millions of dollars for a modern high-end process before a single wafer is made!

Samsung's process tech also usually lags behind TSMC's. So it'd probably only be lower- to mid-end products getting produced there. APUs and such. Not higher-end desktop or server chips. So even then, there would still be supply issues for certain product lines.
Intel DID find some. AMD DIDN'T try to allocate more when there was capacity available.
I already posted about how they could use Samsung. And yes, it needs time and money, but they could free up capacity for more high-end products while having a much higher supply of mid-range products. While we are all bitching at Dell for being glued to Intel, Intel can guarantee Dell huge supply. AMD seems unable to do so. So they need to take steps in the direction where they will start growing as a company. If that means double sourcing, they should do so. Nvidia took its risks and is now a $3 trillion company. AMD plays it safe and remains vulnerable. Its valuation is a result of investors hoping AMD will start moving its legs. When they start losing faith, and they have in the last few months, AMD's valuation will start going down and AMD will become even more vulnerable. If Intel fixes its manufacturing and Windows on ARM starts making some progress, probably slow but progress nonetheless, things will start getting ugly.
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Yes, but all I've been saying in this thread is that performance went up, not quibbling about exactly what execution unit counts, clock speeds, etc. are involved. I don't care if it's brute force, since it's all already brute force.

If you're agreeing that performance went up, then there isn't anything else to discuss with regard to my comments to the other guy or the thread.
Depends on how you look at it.

Certain models became faster than their predecessors because they have more execution units and VRAM, but the architecture itself didn't. CU vs CU, RDNA 3 has the same speed as RDNA 2 in most cases.

Edit: The 7600 is a slightly more advanced 6650 XT, and the 7800 XT is a slightly more advanced 6800 XT. Same CU count, mostly the same speed.
 
Joined
Sep 17, 2014
Messages
22,666 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Yes, but all I've been saying in this thread is that performance went up, not quibbling about exactly what execution unit counts, clock speeds, etc. are involved. I don't care if it's brute force, since it's all already brute force.

If you're agreeing that performance went up, then there isn't anything else to discuss with regard to my comments to the other guy or the thread.
But that IS crucial to the whole discussion.

All performance in GPUs is relative performance. Why? Because as a buyer, you're comparing products. You compare your current GPU to the one you're going to buy. You compare Nvidia's offerings against AMD's.

With each product tier at each company comes an associated price, but also an associated product with a die size, which defines its 'value'. Big dies are more expensive. So if AMD needs a 50% bigger die (hypothetically) to get 50% more RT performance, have they really increased RT performance by 50%? Or have they just thrown 50% more hardware at it, at a 50% higher cost - often even more than 50%, because bigger dies have worse yields?

So it really matters whether the 50% is gained relative to the last generation or whether it comes from absolutes like square mm of die space. If it's the former, you've just made the same product (or a new version of that product) 50% more valuable in that aspect. If it's the latter, you've just wasted resources and made zero progress. Now again, this is a crucial concept to understand. It defines the market, and it defines what you purchase and why. If you don't understand this, you don't understand hardware at all - no offense intended! Just to emphasize how much it matters and how important the context is around saying 'X gained 50% in Y'.
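To put rough numbers on the yield point, here's a minimal back-of-the-envelope sketch. Everything in it is an assumption for illustration (a 300 mm wafer, a Poisson defect model with 0.1 defects/cm², and made-up die areas), not any real GPU's figures:

```python
import math

def gross_dies_per_wafer(die_mm2, wafer_diameter_mm=300):
    # Standard approximation: wafer area / die area, minus an edge-loss term.
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_mm2))

def poisson_yield(die_mm2, defects_per_cm2=0.1):
    # Poisson defect model: yield falls exponentially with die area.
    return math.exp(-defects_per_cm2 * die_mm2 / 100)  # /100 converts mm^2 to cm^2

for die_mm2 in (300, 450):  # a die, and a hypothetically 50% bigger one
    good = gross_dies_per_wafer(die_mm2) * poisson_yield(die_mm2)
    print(f"{die_mm2} mm^2: ~{good:.0f} good dies per wafer")
```

Under these assumptions, a 50% bigger die gives you roughly 45% fewer good dies per wafer, so the cost per good die rises by over 80%, not 50%. That's the sense in which throwing die area at a problem is not free progress.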
 
Joined
Oct 22, 2014
Messages
14,170 (3.82/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
If you add more cylinders to a car's engine, the car will be faster overall. You're not getting more "cylinder performance" (that is, performance per cylinder). You'll just have more of them.
That is an incorrect analogy, as more cylinders does not equate to more power if the capacity remains the same.
It will behave in a different manner, affecting efficiency, but overall power tends to remain the same.
Anyway, that is off-topic for this thread.
 
Joined
Apr 28, 2023
Messages
45 (0.07/day)
Sorry, meant RDNA2.
And NO it's NOT 40-50% faster.
Sure it is! Hint: denials aren't going to convince me of anything.
The top RDNA3 cards are just BIGGER dies compared to RDNA2 GPUs
How you get the improvement doesn't matter at all. It could come from more clocks, transistors, cache, etc. It doesn't matter. Just that there is one.

Remember my original comment was about performance only, not perf/ALU or perf/transistor, etc.
5800X 2020, Milan X and 5800X3D 2022.
Milan X came out months before the 5800X3D: March or so vs April 2022. So it wasn't that the tech didn't exist, like you earlier claimed. It was that AMD decided to launch the extra-cache dies on server parts first.
Excuses are rarely arguments. In this reply of yours, they are not.
It's not excuses. I'm giving you the facts.

They just don't have the resources or money to do everything they want. That is a big part of the reason why they're generally seen as the underdog in the x86/GPU biz!

If you want to prove me wrong here, you've got to show they had billions more to spend year after year on software development and other software-related stuff like compilers, during the Bulldozer years at a minimum. Or even now (hint: you can't; their debt load is pretty high, and they pushed themselves to the limit buying Xilinx). Just saying "nuh uh" isn't going to convince anyone of anything here.
And it is a screw-up on AMD's part because the reception of AM5 was lukewarm for a number of reasons.
It wasn't a 'number of reasons'. It was that the platform was expensive. People started buying more when prices dropped a bit on the mobos and CPUs. Having X3D at launch would've been nice, but it wasn't a game changer and wouldn't have addressed the cost issue.
And using an IOD makes it easier. As I said, when a mem controller is not used, it just shuts down.
That doesn't change that you can't compare apples v oranges here. They're going to be very different by default.

Having an IOD makes changes easier, true, but it doesn't solve the fundamental issues with the bigger die needed for another memory controller, or the heat/power used.

Gating the transistors off when not in use is ideal, but apparently not always possible. The IOD's power use is already a big part of why AMD systems use more power at idle than they should. If they could power gate all of it off as needed, they would've already done so. But they can't.
It's not a big deal that the Intel platform was seen as cheaper, at a time when Intel was also advertising a higher number of cores? Nice one.
You made the claim that Intel's DDR4/5 support was the reason that platform did well, and now you're moving the goalposts to core counts? Especially when E-cores are involved?

Look, pick one argument and stick to it.
You totally lost it here. When I talk about Intel and mention Hybrid CPUs, obviously I don't mean the Phenom era.
You brought up Phenom as an example though. If it's not a valid comparison, then why even bring it up?
AMD's practical realities ended the moment it could offer $30-40 billion in shares and cash for Xilinx. We are not in 2015.
AMD bought Xilinx in 2022. The debt load from that deal is something they'll be dealing with for a long time. Why are you even bringing up 2015? Zen1 wasn't even out until 2017.
My God. Are you 12? Don't show me how much DDR5 dropped; compare the price of (for example) 32GB of DDR4 3200 with 32GB of DDR5 and tell me there is no significant difference in price. You think DDR4 stayed the same?
This is goalpost shifting.

I showed how DDR5 prices dropped, which was all that was required to show that AMD had a reasonable approach in delaying DDR5 support. You claimed this was a mistake. So in order for you to be correct, you have to show that DDR5 prices stayed the same or rose instead. Good luck with that.
AMD didn't really have many problems in the past (10+ years ago) with software.
So they have compilers as good as NV's for their GPUs? OpenCL apps have as much market share as CUDA apps? No. OpenCL, AMD's compilers, and software support in general for GPU compute are pretty lousy.

AMD game drivers are generally pretty good these days, but that is a smaller part of the puzzle when talking about software in general.
The rest of the history lesson, presented as excuses, is (again) NOT an argument.
Facts support arguments, and what I've been saying all along is that AMD has been financially and resource constrained for most of their existence, which is hardly news.

You can't ignore their financial and resource issues and be reasonable at the same time.

AMD builds the CPUs, you know. Not MS.
So if AMD throws more ALUs, cache, or whatever transistors on die, then the libraries and OS software support magically spring from nowhere?

No.

MS controls their OS, and that means MS has to develop the support in their OS for new tech such as NPUs. Neither Intel, AMD, nor QC can force the issue by adding more hardware.
Not being able to use the NPU under Win10 could be seen as an advantage from the point of view of people who hate the idea of AI in their OS. Intel DID find some. But they DID find wafers available from TSMC. It's not that TSMC told them, "Sorry, no capacity left for you." If that means double sourcing, they should do so. Nvidia took its risks and is now a $3 trillion company. AMD plays it safe and remains vulnerable.
You can already use win10 without NPU support! With either AMD or Intel! They don't have to do anything! Win10 will simply ignore the hardware AI processor, since it doesn't know how to use it. It'd be like using win9x on a multicore system: win9x will still run fine on a Q6600 or a Phenom II X4, for instance. It will just only use one core.

That Intel was able to get wafers doesn't matter. They can do stuff AMD can't all the time, since they have more money and resources! Money and resources matter massively here!

"Double sourcing" is FAR easier said then done. And it is still very resource and money intensive! If they don't have enough of either then they're stuck.

AMD took a huge risk paying as much as they did for Xilinx, so I think you don't know what's really at stake here. Again, their debt load is rather high. How exactly can they afford to take on billions more a year at this point?

Depends on how you look at it.
But that IS crucial to the whole discussion.
Didn't I specify exactly how I'm looking at it earlier in the thread, by talking only about the ray tracing performance of RDNA3 vs RDNA2, and not performance per ALU or performance per die size across all features or GPUs?

If you keep insisting on talking about some other metric, isn't that apples v oranges?
If you don't understand this, you don't understand hardware at all - no offense intended!
I don't care about hypotheticals. I care about real-world performance that I can actually buy, and the price I'm paying at an actual store, not what it should theoretically be according to whatever hypothetical someone cooks up.
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
Didn't I specify exactly how I'm looking at it earlier in the thread, by talking only about the ray tracing performance of RDNA3 vs RDNA2, and not performance per ALU or performance per die size across all features or GPUs?

If you keep insisting on talking about some other metric, isn't that apples v oranges?

I don't care about hypotheticals. I care about real-world performance that I can actually buy, and the price I'm paying at an actual store, not what it should theoretically be according to whatever hypothetical someone cooks up.
You did specify, by saying "RDNA 3 has 40-50% better RT performance", which is factually wrong. It does not. RDNA 3's architectural RT performance is the same as RDNA 2's. Period.

You can say that the 7900 XTX has more RT performance than the 6900 XT, but that's a card vs card comparison, not an architecture vs architecture one. To that, my answer is the 7600 vs 6650 XT or the 7800 XT vs 6800 XT.

That is an incorrect analogy, as more cylinders does not equate to more power if the capacity remains the same.
It will behave in a different manner, affecting efficiency, but overall power tends to remain the same.
Anyway, that is off-topic for this thread.
Perhaps. What I meant is that adding more RT units to one card doesn't equate to having more RT performance across the whole architecture/generation.
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece
Sure it is! Hint: denials aren't going to convince me of anything.
You are in denial. Benchmarks show it, others have tried to explain it to you over the last full page, and you still don't want to understand it. OK.
How you get the improvement doesn't matter at all. It could come from more clocks, transistors, cache, etc. It doesn't matter. Just that there is one.

Remember my original comment was about performance only, not perf/ALU or perf/transistor, etc.
It does. The whole idea is to get performance improvements from the architecture. Making ever-bigger chips is not viable. Increasing the frequency of an old architecture is not viable.

And I don't need to remember anything. Your original comment was wrong, and now you try to present it as something different, because you are obviously not in denial, right?
You said that "RDNA3 is typically 40-50% faster at ray tracing than RDNA2." You didn't say that the RX 7900XTX is 40-50% faster than the RX 6950XT, for example. You clearly and wrongly insisted that RDNA3 was faster IN GENERAL, meaning as an architecture, by 40-50% compared to RDNA2.
I guess you don't want to admit anything, and this is clear from your next reply, where I point out to you that the 5800X came out 2 years before the X3D chips and you intentionally ignore it so you can keep insisting on your false narrative.
So, I stop here. I am not reading the rest of your post. In fact, there is no reason to read any of your future posts. I am not going to waste the weekend when you obviously don't want to admit even the most obvious of your wrongs.
Congratulations. You win.
:respect:

PS: OK, I am reading the rest of your post out of curiosity and facepalming at every sentence. For example, that comment about AMD's debt, where you keep going back to the Bulldozer era just to dig out some irrelevant arguments. lol. I am going to miss some priceless comments from you. :p

PS2: Ahahahahahahahaha!!! ... oh my god ... (I keep reading)
 
Joined
Sep 17, 2014
Messages
22,666 (6.05/day)
Location
The Washing Machine
Didn't I specify exactly how I'm looking at it earlier in the thread, by talking only about the ray tracing performance of RDNA3 vs RDNA2, and not performance per ALU or performance per die size across all features or GPUs?

If you keep insisting on talking about some other metric, isn't that apples v oranges?

I don't care about hypotheticals. I care about real-world performance that I can actually buy, and the price I'm paying at an actual store, not what it should theoretically be according to whatever hypothetical someone cooks up.

You did, but that isn't up to you. "Look, if I compare it like this, it's 50%." Great, but it makes no sense. You do you, but don't think others will too ;)

You care about the price and not hypotheticals, but nobody is talking about hypotheticals here. We are all talking about real GPUs with real performance deficits, measured and tested, and they don't show the gain you speak of. You took a piece of marketing bullshit and ran with it, so we are here to place it in the right perspective.

You're just wrong, but if this is the hill you want to die on... okay. Believe what you want to believe while everyone else knows better ;)
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
RDNA 3's architectural RT performance is the same as RDNA 2's.

Certain models became faster than their predecessors because they have more execution units and VRAM, but the architecture itself didn't. CU vs CU, RDNA 3 has the same speed as RDNA 2 in most cases.

This means that "RDNA 3" doesn't exist (in fact, it is a rebranded RDNA 2, to be named RDNA 2.1 at best). The only new feature in it is the media engine, updated from Video Core Next 3.0 to Video Core Next 4.0, namely active AV1 encoding support.

Edit: The 7600 is a slightly more advanced 6650 XT, and the 7800 XT is a slightly more advanced 6800 XT. Same CU count, mostly the same speed.

Radeon RX 6800 XT is superior to RX 7800 XT - it has 72 compute units, while RX 7800 XT is limited to only 60.
Hence, RX 7800 XT is overall slower than RX 6800 XT, which is a better buy.

[attached: Counter-Strike 2 benchmark chart]

 
Joined
Sep 17, 2014
Messages
22,666 (6.05/day)
Location
The Washing Machine
Oh wow, ARF is here... we have now landed in alternate-reality land entirely. I'm out :)
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
This means that "RDNA 3" doesn't exist (in fact, it is a rebranded RDNA 2, to be named RDNA 2.1 at best). The only new feature in it is the media engine, updated from Video Core Next 3.0 to Video Core Next 4.0, namely active AV1 encoding support.
RDNA 3 is a different architecture. Here's the deep dive, if you're interested. The fact that it offers only slightly higher performance CU vs CU, and similar RT performance to RDNA 2 on average, is a different matter.

The topic of the conversation, before you butted in, was the similar RT performance due to RDNA 2 and 3 having the same RT cores.

Radeon RX 6800 XT is superior to RX 7800 XT - it has 72 compute units, while RX 7800 XT is limited to only 60.
Hence, RX 7800 XT is overall slower than RX 6800 XT, which is a better buy.

[attached: Counter-Strike 2 benchmark chart]
You take Counter-Strike 2 as your cherry-picked example? Seriously now? :kookoo:

[attached: multi-game average performance chart]


Edit: I don't know why you always have to say something that entirely misses the point of the conversation, but it's super annoying.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
You take Counter-Strike 2

Counter-Strike is the only representative result; the game itself is optimised to utilise the shaders properly, and it showcases the performance difference between the 72 compute units of the RX 6800 XT and the 60 compute units of the RX 7800 XT.
There are many dozens of games which are not included in the above review, so it's not worth posting the average results from it; that is in fact cherry-picking only recent games. Which is wrong.
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
Counter-Strike is the only representative result; the game itself is optimised to utilise the shaders properly, and it showcases the performance difference between the 72 compute units of the RX 6800 XT and the 60 compute units of the RX 7800 XT.
There are many dozens of games which are not included in the above review, so it's not worth posting the average results from it; that is in fact cherry-picking only recent games. Which is wrong.
Cherry-picking any game or presenting an average is wrong, but cherry-picking CS2 is absolutely fine, because it is the only game in existence. Are you for real, man? :roll:

What about people like me who don't play CS2, and so couldn't care less about how it is optimised to utilise shaders? Hm? :rolleyes:

I know you have an answer that trumps all others, because you always do. You're the wisest person in existence, obviously, therefore all further conversation with you is pointless.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
You're the wisest person in existence

I have never claimed this. But when you make a mistake, at least admit it.
Why did you say that the RX 6800 XT and RX 7800 XT have the same compute unit count?

Edit: The 7600 is a slightly more advanced 6650 XT, and the 7800 XT is a slightly more advanced 6800 XT. Same CU count, mostly the same speed.
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
I have never claimed this. But when you make a mistake, at least admit it.
Why did you say that the RX 6800 XT and RX 7800 XT have the same compute unit count?
Okay, it's the non-XT that has the same CU count, my mistake. Are you happy now? What is the goal of this conversation? Do you have your own answer or counter-argument to literally everything?
 
Joined
Sep 17, 2014
Messages
22,666 (6.05/day)
Location
The Washing Machine
Okay, it's the non-XT that has the same CU count, my mistake. Are you happy now? What is the goal of this conversation? Do you have your own answer or counter-argument to literally everything?
I think phubar and ARF should go on a honeymoon together; they're equipped with a strikingly similar way of dealing with facts. :)
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
I think phubar and ARF should go on a honeymoon together; they're equipped with a strikingly similar way of dealing with facts. :)
Hm, you might be onto something. :D

Counter-Strike is the only representative result; the game itself is optimised to utilise the shaders properly, and it showcases the performance difference between the 72 compute units of the RX 6800 XT and the 60 compute units of the RX 7800 XT.
There are many dozens of games which are not included in the above review, so it's not worth posting the average results from it; that is in fact cherry-picking only recent games. Which is wrong.
On second thought, isn't it more a case of the Source 2 engine not being able to utilise RDNA 3's dual-issue shader units properly?
It can't be a case of "CS2's developers are geniuses and every other game developer in the entire world is damn stupid", surely?
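For what it's worth, the dual-issue question shows up in the paper specs. Here's a minimal sketch of the usual peak-FLOPS arithmetic (the CU counts and boost clocks are the public spec-sheet numbers; the x2-for-FMA convention is the standard one; treat the result as a theoretical ceiling, not game performance):

```python
def peak_fp32_tflops(cus, fp32_lanes_per_cu, boost_ghz):
    # lanes x 2 ops per cycle (an FMA counts as two FLOPs) x clock, in TFLOPS
    return cus * fp32_lanes_per_cu * 2 * boost_ghz / 1000

print(f"RX 6800 XT: {peak_fp32_tflops(72, 64, 2.25):.1f} TFLOPS")   # RDNA 2: 64 FP32 lanes per CU
print(f"RX 7800 XT: {peak_fp32_tflops(60, 128, 2.43):.1f} TFLOPS")  # RDNA 3: 128 if dual-issue is fed
```

On paper the 7800 XT has nearly double the throughput (about 37 vs 21 TFLOPS), yet the two cards trade blows in most games, which suggests the dual-issue lanes mostly sit idle unless the shader compiler can pair instructions; an engine like Source 2 not feeding them would fit that picture.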
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece
my mistake
Oh, bad move. When you admit that you made a mistake, that post is going to be screenshotted and thrown in your face in every future conversation. You have to be very careful with that.*
Also, by admitting you made a mistake, everything you have posted until now (and everything you will post in the future) becomes wrong, which makes the other interlocutor always correct.



PS: * In the 80s, the prime minister of Greece was Andreas Papandreou, a great manipulator of the masses. When he once had to admit in public that he had made a mistake, he didn't say "I made a mistake". Instead, he preferred the Latin phrase "mea culpa", knowing that most people in Greece didn't know Latin, and also knowing that many would simply be impressed by the use of that Latin phrase. So he turned a negative situation to his advantage.
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
Oh, bad move. When you admit that you made a mistake, that post is going to be screenshotted and thrown in your face in every future conversation. You have to be very careful with that.*
Also, by admitting you made a mistake, everything you have posted until now (and everything you will post in the future) becomes wrong, which makes the other interlocutor always correct.
It doesn't make a difference. ARF is always right, and the rest of us are always wrong. If that's not the case, he'll just twist the topic, or change it, until he's right again.
 
Joined
Apr 28, 2023
Messages
45 (0.07/day)
Looks like I called it right the first time and you were trolling me.

You did, but that isn't up to you.
So you're the final arbiter of what anyone gets to say, or how to say it, on these forums, eh? Is that a pay-for perk or what?

Either way, I don't think that is legit, and I have to say it seems rather rude at best to me!

I'd prefer, if you can't actually address what I'm saying in an honest manner, that you two not reply to me at all. Add me to your ignore lists if it helps, thanks!
You did specify, by saying "RDNA 3 has 40-50% better RT performance", which is factually wrong. It does not. RDNA 3's architectural RT performance is the same as RDNA 2's. Period.
If you want to ignore the parts of the product line that don't suit your metric, sure. But that isn't reasonable.

More performance is more performance, even if they get it in a way you don't find pleasing for one reason or another.
 
Joined
Sep 17, 2014
Messages
22,666 (6.05/day)
Location
The Washing Machine
So you're the final arbiter of what anyone gets to say, or how to say it, on these forums, eh? Is that a pay-for perk or what?
No, I've explained at length, as have others, why you're wrong, supported by facts. Numbers. Facts are the final arbiter here, as they should be. If you want to be right on the basis of likes or popularity contests, go to social media. Unironically, I've had to explain the same thing in a lengthy PM with ARF over the past few days, because he thinks others are trolls when they correct his bonkers nonsense. Are you guys multiboxing?
 
Joined
Jan 14, 2019
Messages
12,570 (5.80/day)
Location
Midlands, UK
If you want to ignore the parts of the product line that don't suit your metric, sure. But that isn't reasonable.

More performance is more performance, even if they get it in a way you don't find pleasing for one reason or another.
I did not look at the product line at all. I looked at the architecture, which RDNA 3 is. If you want to talk about products, then talk about products, not the architecture. But then, let's talk about the 7600 as well, OK? ;)
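To make the products-vs-architecture distinction concrete, here's the kind of normalization I mean. A minimal sketch: the CU counts and boost clocks are the public specs, but the fps values are placeholders for whatever benchmark average you trust, not measurements:

```python
# Divide measured performance by (CUs x clock) to strip out "more hardware"
# and "more MHz", leaving a rough per-CU architectural comparison.
cards = {
    # name:        CUs, boost GHz, avg fps (placeholder, NOT measured)
    "RX 6800 XT": (72, 2.25, 100.0),
    "RX 7800 XT": (60, 2.43, 100.0),
}
for name, (cus, ghz, fps) in cards.items():
    print(f"{name}: {fps / (cus * ghz):.3f} fps per CU-GHz")
```

If two generations land close together once you divide out unit counts and clocks, the per-card gains came from more hardware and higher frequencies, not from a faster CU; that's the 7600 vs 6650 XT case in a nutshell.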
 
Joined
Sep 6, 2013
Messages
3,391 (0.82/day)
Location
Athens, Greece