As I've said many times before, I would prefer to remove any and all AI and other "non-rendering" parts from the GPU, just in order to fill the freed-up space (or rather the space not seized by AI) with more Compute Units, ROPs and TMUs, along with better memory for that price difference. Or at least cut out the AI part, making the die footprint much smaller and cheaper, with better thermal performance. That would also mean the GPU would be more capable of running higher resolutions with better settings, with much less reason to use any upscaler.
This is a big hell no. Make AI ASICs add-in cards.
I would even expand the subject and say that RTX, or RTRT, cards should have been add-on cards as well. It would be perfect from a business standpoint, and would have benefited consumers too. They could sell their GeForce GTX line and pair it with RTX add-on cards (doubling their revenue across both product lines) to promote pure RTRT compute power. Everyone interested in it would jump straight in. There would be tons more sales. Just like people with older GeForce cards and Radeons were buying GTX ones for the sake of PhysX and encoding.
Also, a separate RTRT card would have been a better solution, as it wouldn't be limited by sharing a power envelope with the raster GPU parts. There would be much more room to grow and make the RTRT card as powerful as possible. Such a separate solution would benefit the "AI" crowd even more, as they don't need raster hardware and video output on every single card in their AI farms; they just need the compute power itself.
Had RTRT add-on cards been separate, it would have hit AMD to a much lesser degree, and they could have dedicated more raster performance to their Radeons, which wouldn't have lowered their value at all.