Okay, but anybody doing AI is screaming for VRAM, and the production people have been screaming for even longer. If you don't need VRAM, you're already well served by the market and have been for a long time; and if you don't use a GPU beyond "drawing UIs" and "viewport rendering," then what are we even talking about? You don't need a GPU at that point. The point is that VRAM hasn't kept up with every other component in a PC.
What I was talking about is that demand for better processors with higher core counts far outweighed the need for more VRAM (and still does, albeit not by as much), with the subtext that the limitations themselves aren't comparable.
Some, maybe many, AI models fall under the memory-intensive umbrella, but we're talking about a niche that is highly cloud-centric. People "doing AI" as in who? The rando trying to run some plagiarism-engine genAI or SAM locally? I'm with you. The LLM end user? Those run on dedicated accelerators (or connect to the cloud), afaik. The researcher/developer? They're mostly on Colab. The people actually doing training? They're looking at things at a different scale entirely...
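For a sense of why local model users hit the VRAM wall so fast, here's a rough back-of-envelope sketch of the memory needed just to hold a model's weights. The function name and the 7B example are mine, and this deliberately ignores activations, KV cache, and framework overhead, so real usage is higher:

```python
# Back-of-envelope VRAM estimate for model weights only.
# Ignores activations, KV cache, and framework overhead (illustrative).
def weights_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 2**30

# A 7B-parameter model at common precisions:
for label, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{weights_vram_gb(7, nbytes):.1f} GB")
```

Even at fp16, a 7B model wants ~13 GB for weights alone, which already rules out most consumer cards; that's the mismatch between demand and what shipped.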
Meanwhile, look at any random, locally running, number-crunching production/engineering/scientific/whatever application, and you'll very likely find it recommending as many cores as you can throw at it, as it has for decades.