Sorry, meant RDNA2.
And NO, it's NOT 40-50% faster.
Sure it is! Hint: denials aren't going to convince me of anything.
The top RDNA3 cards are just BIGGER dies compared to RDNA2 GPUs.
How you get the improvement doesn't matter at all. It could come from higher clocks, more transistors, more cache, etc. What matters is that there is one.
Remember, my original comment was about performance only, not perf/ALU or perf/transistor, etc.
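To put hypothetical numbers on it (made up purely for illustration): if an RDNA2 card renders a ray-traced scene at 50 fps and its RDNA3 successor renders the same scene at 70 fps, that's 70 / 50 = 1.4, i.e. 40% faster in absolute performance. That stays true whether the gain came from more CUs, more cache, or higher clocks, and no matter what perf/ALU or perf/transistor did.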
5800X 2020, Milan X and 5800X3D 2022.
Milan-X came out before the 5800X3D, roughly March vs. April 2022. So it wasn't that the tech didn't exist, as you claimed earlier. It was that AMD decided to launch the extra cache dies on server parts first.
Excuses are rarely arguments. In this reply of yours, they are not.
These aren't excuses. I'm giving you the facts.
They just don't have the resources or money to do everything they want. That is a big part of the reason why they're generally seen as the underdog in the x86/GPU biz!
If you want to prove me wrong here, you've got to show they had billions more to spend, year after year, on software development and other software-related work like compilers, during the Bulldozer years at a minimum. Or even now (hint: you can't; their debt load is pretty high, and they pushed themselves to the limit buying Xilinx). Just saying "nuh uh" isn't going to convince anyone of anything here.
And it is a screw up on AMD's part because the reception of AM5 was lukewarm for a number of reasons.
It wasn't a 'number of reasons'. It was that the platform was expensive. People started buying it once motherboard and CPU prices dropped a bit. Having X3D at launch would've been nice, but it wasn't a game changer and wouldn't have addressed the cost issue.
And using an IOD makes it easier. As I said when a mem controller is not used, it just shuts down.
That doesn't change the fact that you're comparing apples to oranges here. They're going to be very different by default.
Having an IOD makes changes easier, true, but it doesn't solve the fundamental issues: another memory controller still needs a bigger die and still costs heat and power.
Power-gating the transistors when they're not in use is ideal but apparently not always possible. IOD power draw is already a big part of why AMD systems use more power at idle than they should. If they could power-gate all of it off as needed, they would've already done so. But they can't.
It's not a big deal that the Intel platform was seen as cheaper at a time when Intel was also advertising a higher number of cores? Nice one.
You made the claim that Intel's DDR4/DDR5 support was the reason that platform did well, and now you're moving the goalposts to core counts? Especially when E-cores are involved?
Look, pick one argument and stick to it.
You totally lost it here. When I talk about Intel and mention hybrid CPUs, obviously I don't mean the Phenom era.
You brought up Phenom as an example, though. If it's not a valid comparison, then why even bring it up?
AMD's practical realities ended the days when it could offer 30-40 billion dollars in shares and cash for Xilinx. We are not in 2015.
AMD bought Xilinx in 2022. The debt load from that deal is something they'll be dealing with for a long time. Why are you even bringing up 2015? Zen 1 wasn't even out until 2017.
My God. Are you 12? Don't show me how much DDR5 dropped; compare the price of (for example) 32GB of DDR4-3200 with 32GB of DDR5 and tell me there is no significant difference in price. You think DDR4 stayed the same?
This is moving the goalposts.
I showed that DDR5 prices dropped, which was all that was required to show AMD took a reasonable approach in delaying DDR5 support. You claimed that was a mistake, so for you to be correct you have to show that DDR5 prices stayed the same or rose instead. Good luck with that.
AMD didn't really have many problems with software in the past (10+ years ago).
So they have compilers as good as NV's for their GPUs? OpenCL apps have as much market share as CUDA apps? No. OpenCL, AMD's compilers, and their GPU compute software support in general are pretty lousy.
AMD's game drivers are generally pretty good these days, but that's a smaller part of the puzzle when talking about software in general.
The rest of the history lesson, presented as excuses, is (again) NOT an argument.
Facts support arguments, and what I've been saying all along is that AMD has been financially and resource-constrained for most of its existence, which is hardly news.
You can't ignore their financial and resource issues and be reasonable at the same time.
AMD builds the CPUs, you know. Not MS.
So if AMD throws more ALUs, more cache, or whatever other transistors on the die, do the libraries and OS support magically spring out of nowhere?
No.
MS controls their OS, and that means MS has to develop the OS-side support for new tech such as NPUs. Neither Intel, AMD, nor QC can force that issue by adding more hardware.
Not being able to use the NPU under Win10 could be seen as an advantage from the point of view of people who hate the idea of AI in their OS. But Intel DID find wafers available from TSMC. It's not that TSMC told them "Sorry, no capacity left for you". If that means double sourcing, they should do so. Nvidia took its risks and is now a 3 trillion dollar company. AMD plays it safe and remains vulnerable.
You can already use Win10 without NPU support, with either AMD or Intel!
They don't have to do anything! Win10 will simply ignore an AI processor it doesn't know how to use. It'd be like running Win9x on a multicore system: Win9x will still run fine on a Q6600 or a Phenom II X4, for instance. It will just only use one core.
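A rough sketch of what I mean, assuming a Windows build environment (purely illustrative, not anyone's actual NPU code): software only sees the processor resources the OS itself knows how to manage. On Win10/11 a query like this reports every logical core the scheduler can use; an OS with no SMP support would report one processor, just as an OS with no NPU stack simply exposes no NPU to applications.
/* Illustrative only: the OS reports what it can drive; unknown hardware
   (an NPU under Win10, extra cores under Win9x) just isn't surfaced. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);  /* standard Win32 call */
    printf("Logical processors the OS exposes: %lu\n",
           (unsigned long)si.dwNumberOfProcessors);
    return 0;
}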
That Intel was able to get wafers doesn't matter. They can do stuff AMD can't all the time since they have more money and resources! Money and resources matter massively here!
"Double sourcing" is FAR easier said then done. And it is still very resource and money intensive! If they don't have enough of either then they're stuck.
AMD took a huge risk paying as much as they did for Xilinx, so I think you don't know what's really at stake here. Again, their debt load is rather high. How exactly can they afford to take on billions more a year at this point?
Depends on how you look at it.
But that IS crucial to the whole discussion.
Didn't I specify exactly how I'm looking at it earlier in the thread, by talking only about the ray-tracing performance of RDNA3 vs RDNA2, not performance per ALU or performance vs die size across all features or GPUs?
If you keep insisting on talking about some other metric, isn't that apples vs oranges?
If you don't understand this, you don't understand hardware at all - no offense intended!
I don't care about hypotheticals. I care about the real-world performance I can actually buy and the price I pay at an actual store, not what it should theoretically be according to whatever hypothetical someone cooks up.