Nvidia doesn't sell VRAM. How do they save $30? I don't think you understand how a video card is manufactured. Nvidia has TSMC fabricate the GPUs and sells them to card manufacturers, who buy the components and VRAM, assemble the card, and ship it through distributors and then retailers to the end customer. Nvidia doesn't make anything from the VRAM.
No duh, AIB partners are the ones that add the physical memory to the cards, but it's Nvidia that decides how much memory can be paired with the GPU in the first place. What I said is 100% correct: Nvidia is the one deciding how much VRAM you get, not the board partners.
Nvidia is the one deciding the MSRP of these products, and they do so knowing full well the BOM costs that AIB partners will have. How does more VRAM factor into Nvidia's bottom line? Releasing an SKU with more VRAM could very well cut into how much Nvidia makes from the sale of the GPU core itself if they decide that a product must hit a certain MSRP. In the case of a $700 GPU, Nvidia might lower the price of the GPU core to the AIB a bit to fit the cost of the additional VRAM under that set price; the ~$30 mentioned above is roughly what an extra 8 GB of GDDR6 runs at spot prices, and that money has to come out of someone's margin.
You can't call someone else ignorant while resting your own argument on ignorance of the fact that Nvidia ultimately has control over the end product, save for maybe the cooler (and even then Nvidia likely exercises control, given all the 4000 series coolers are pretty beefy where most AIBs would cheap out). You're missing the forest for the trees.
When you have that kind of market share, you are the industry and you bend everyone to your will, until the government gets involved and the party is over.
Yep, the Bell System had an 85% market share when it was broken up. Nvidia has an 88% market share.
Stop it. I said in my very first post that VRAM per dollar must increase. Still, this problem is seriously overblown: only a handful of games are really affected.
You aren't wrong, but your argument is analogous to the claim that 4 cores were enough for games, which people were still making right up until Zen 1 launched.
It's true, but it ignores the fact that devs have no choice but to optimize for what's available. Of course only a few games are hindered; the hardware first has to be available and affordable before devs will push to use it.
Maybe, just MAYBE, it's because AMD never supported AI (natively, at least) until the latest gen, and things didn't go well for RDNA3 in this department on the hardware side either?
The 6000 and 7000 series support AI. So that's one less gen than Nvidia.
Well yeah, CUDA is Nvidia-only. It's impossible for AMD cards to support it directly; the closest they get is porting CUDA code through HIP in AMD's ROCm stack, or running it through an actual translation layer like ZLUDA.
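To make that concrete, here's a minimal sketch (a stock SAXPY kernel, not anything specific to this thread) of why the porting is mostly mechanical: HIP mirrors the CUDA runtime API almost name-for-name, so AMD's hipify tools largely just rewrite the cuda* prefixes.

```cpp
// Minimal HIP sketch of a SAXPY kernel. The CUDA original differs only
// in the header and the runtime-call prefixes (cudaMalloc -> hipMalloc, etc.).
#include <hip/hip_runtime.h>  // CUDA version: <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same on both platforms
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    hipMalloc((void**)&x, n * sizeof(float));  // CUDA: cudaMalloc
    hipMalloc((void**)&y, n * sizeof(float));
    hipMemset(x, 0, n * sizeof(float));        // CUDA: cudaMemset
    hipMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // same launch syntax
    hipDeviceSynchronize();                    // CUDA: cudaDeviceSynchronize
    hipFree(x);                                // CUDA: cudaFree
    hipFree(y);
    return 0;
}
```

Build it with hipcc on an AMD card, or swap the hip* calls back to cuda* and build with nvcc; the kernel body never changes. Which is exactly the problem: the API surface itself is Nvidia's, so everyone else is stuck chasing compatibility.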
no Windows drivers that just work for 99+ percent of end users
AMD drivers are pretty good as of late. Steve on HWUB even says he prefers AMD drivers.
That said, support in AI applications could be better, but that's more due to the aforementioned lack of CUDA support and Nvidia's one-generation head start.
, no nothing. Do I need to remind you that FSR is a joke, RT performance on AMD GPUs barely exists, and their professional workload capability is also completely in the dark for a good half of the algorithms out there (at least as of '23)? AMD has the money to solve the problem but they definitely lack the cojones. Or something else.
The rest of this is irrelevant to AI. Calling FSR a joke is definitely overblown as well. Some of this rehashes a prior point; we already know AMD lacks CUDA. CUDA, IMO, is anti-competitive given it completely removes competition from purchasing decisions. Most of the professional market has never had a choice of vendor; they have to buy Nvidia.