This is good for AMD too. I believe Sony and MS will ask for an SoC that is way superior in RT performance, meaning AMD will have to realize that being the worst in RT isn't a viable option, no matter how much it wants to focus on AI and pretend that raster is still king. Imagine a Switch beating future Sony and MS consoles in RT. AMD will completely lose the console market if it doesn't get serious about RT.
Now, a Switch going Ada is also a necessity to fight the new x86 handheld consoles. I wonder if Nintendo will start investing more in visuals in the future.
I don't think RT is taking off at all.
It's marketing-driven, definitely not user-driven like, say, DLSS, which gets modded in everywhere.
There IS tooling to RT-ify all the things, mind.
But I already knew this the moment it was announced. We're going to brute-force something we used to do much more efficiently, for mediocre to very low IQ gains, at a time when resources are getting scarce and climate is banging on the door? And when Moore's Law is becoming ever harder to keep up with, with the end of silicon on the horizon? Good luck with that. Even if it were going to be a success, reality would kill it sooner or later.

But frankly, it's a completely pointless exercise, as there are still games coming out that combine baked and dynamic lighting just fine to get near-equal results. They have the economic advantage, because they can make better-looking games run on a far broader range of hardware. The supposed cost or time-to-market advantage for developers is thus far an unproven marketing blurb as well, and even so, dev cost is really never a showstopper; it's all about sales. To me this is common sense, honestly. Economic laws never lie.
I honestly hope AMD stays the course wrt RT. Integration at a low cost/die space, sure. Integration at the cost of raw perf? Pretty shitty.
Samsung 8nm is just a tweaked extension of the 10nm node, and there is nothing unseen in those TDPs, especially considering the die size and the node's size and age. Despite its age, Ampere on it was more power-efficient than RDNA 2 on a more advanced node; it just scaled poorly with clocks. Ada is using a 4nm node, and yes, its efficiency doesn't just look good, it is excellent compared to the laughable efficiency of AMD on 5nm. AMD also ran into the same problem Nvidia had with Ampere: past a certain point, power consumption increases greatly with higher clocks, and as they said, that was the main reason they didn't deliver faster products this gen, because those would have melted without liquid cooling and consumed 500W+.
Ah right, they're not just clocking GPUs as high as possible because that's the most profitable bottom line anymore? Interesting, the world changed overnight!
Both Ada and RDNA3 clock about equally, and neither is efficient at the top of the V/F curve; this has been common for virtually every GPU gen past 32nm. It doesn't say a thing about how good the node is, it mostly tells us how greedy chip makers are.
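Since the "top of the V/F curve" point keeps coming up, here's a minimal back-of-the-envelope sketch of why it's so punishing, assuming the classic dynamic-power relation P ≈ C·V²·f and entirely made-up V/F points; the function and numbers below are illustrative, not taken from any real GPU.

```python
# Illustrative sketch (not measured data): dynamic power scales roughly as C * V^2 * f,
# and the voltage needed for the last few hundred MHz rises steeply, so a small clock
# gain at the top of the V/F curve costs a disproportionate amount of power.

def dynamic_power(freq_ghz: float, volts: float, c_eff: float = 1.0) -> float:
    """Toy dynamic-power model, P ~ C * V^2 * f, in arbitrary units."""
    return c_eff * volts ** 2 * freq_ghz

# Hypothetical V/F points for a modern GPU (made up for illustration).
vf_curve = [
    (2.0, 0.80),
    (2.3, 0.90),
    (2.6, 1.00),
    (2.8, 1.10),  # top of the curve: small clock gain, big voltage bump
]

base_f, base_v = vf_curve[0]
base_p = dynamic_power(base_f, base_v)

for f, v in vf_curve:
    p = dynamic_power(f, v)
    print(f"{f:.1f} GHz @ {v:.2f} V -> "
          f"+{(f / base_f - 1) * 100:3.0f}% clock, "
          f"+{(p / base_p - 1) * 100:3.0f}% power")
```

With these made-up points, the last +40% of clock costs roughly +165% power, which is the general shape of why pushing the top of the curve gets ugly fast.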
I'm not sure how you devised this story. Samsung's 8nm is plagued by issues, it's not efficient, and Ampere suffers from transient spiking. 320W for an x80 is not unseen? Okay. Also, I'm not entirely sure why AMD is in that comparison; RDNA2 isn't even remotely the subject here.
Sure, they can clock lower for Nintendo, but that still doesn't make it a good node, especially not for a low-power device. It's yesteryear's tech, so it's mostly just cheap.