Monday, November 7th 2022
NVIDIA GeForce RTX 4080 Founders Edition PCB Pictured, Revealing AD103 Silicon
Here's the first picture of the PCB of the NVIDIA GeForce RTX 4080 Founders Edition. With NVIDIA cancelling the AD104-based GeForce RTX 4080 12 GB, the significantly buffed, AD103-based RTX 4080 16 GB is now referred to as simply the RTX 4080. The picture reveals an asymmetric PCB shape designed to fit the Founders Edition dual-axial flow-through cooler. The card pulls power from a 16-pin ATX 12VHPWR connector, and appears to use roughly a 16-phase VRM. The PCB has several unpopulated VRM phase pads, but just eight memory-chip pads, matching the 256-bit wide GDDR6X memory interface of the AD103 silicon.
The AD103 silicon features a rectangular die, and a fiberglass substrate that looks about the same size as those of past-generation NVIDIA GPUs with 256-bit wide memory interfaces, such as the GA104. The AD103 GPU is probably pin-compatible with the smaller AD104, at least as far as substrate size is concerned, so minimal PCB R&D effort was needed to design the 12 GB and 16 GB variants of the RTX 4080. The RTX 4080 12 GB is now gone, and the AD104 will power xx70-class SKUs with fewer shaders than what would've been the RTX 4080 12 GB. The display output configuration remains the same as the RTX 4090, with three DisplayPort 1.4a and one HDMI 2.1a connector. NVIDIA is expected to launch the GeForce RTX 4080 on November 16, priced at USD $1,199 (MSRP).
Sources:
KittyYYuko (Twitter), VideoCardz
40 Comments on NVIDIA GeForce RTX 4080 Founders Edition PCB Pictured, Revealing AD103 Silicon
I'm pretty sure we'll see games where the RTX 4080 16 GB actually loses to the RTX 3090 Ti due to its much lower memory bandwidth!
Some day in the future, at least 5 if not 10 years from now, a time will come when RT on will be faster than RT off. Then I will use it. Until then, leave it off. It isn't worth the perf hit.
Anyway, to the point: the 4080 is a nice but pointless GPU outside of professional CUDA usage, like the 4090, at its current price.
For gaming, wait for AMD's offering.
But today? The whole thing is already escalating just to meet gen-to-gen perf increases... 5-10 years had better bring a game changer in that sense, or RT is dead or of too little relevance. Besides, it's not 'raster OR RT'. It's 'and'. Another thing the 16>32-bit transition was not. So devices will still need raster perf...
Oh, and BTW, the ones marked "Melhor Preço" (Portuguese for "best price") mean it's the lowest price recorded (for a specific SKU) on that price-comparison website, which I can tell you I've been using to compare this kind of stuff for well over 3 years...
If AD103 is 372 mm^2 like the TPU database says, there are ~140 dies per 300 mm wafer (using an online wafer layout calculator). At $18,000 per 5 nm/4 nm wafer, that's about $130 just for the silicon (even assuming no defective dies). Then you have to include the cost of assembling the chip package, the cost of the memory chips and all the other stuff on the PCB, and the cost of mounting everything to the PCB.
In the end, there's no way a card based on an AD103 chip costs less than $250 just to manufacture, let alone pay for R&D and marketing. $1,200 may be a bit much to ask for a 4080, but unless TSMC drops its wafer prices, these simple calculations make me conclude that AD103-based cards can't be priced below $650 while still selling at a profit. AMD pricing their new GPU at $1,000 is about right to keep the same profit margins as in the past.
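The back-of-envelope math above can be sketched with a standard die-per-wafer approximation. Note the formula and the $18,000 wafer price are assumptions for illustration; it ignores scribe lines, defect yield and the rectangular aspect ratio, so it lands somewhat above the ~140 figure from the online calculator.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """First-order approximation: wafer area divided by die area,
    minus an edge-loss term for partial dies along the circumference."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

dies = dies_per_wafer(372)        # AD103 die area per the TPU database
cost_per_die = 18000 / dies       # assumed 5nm-class wafer price
print(dies, round(cost_per_die))
```

With zero-defect yield this gives a slightly lower per-die cost than the ~$130 estimate; real yields and scribe-line overhead would push it back up toward that figure.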
Perhaps this demonstrates that there needs to be a new paradigm of rebranding the previous generations of high-end GPUs to sell to the mid range and low end. Making mid-range and low-end GPUs on the latest process node doesn't seem to make financial sense anymore.
I think we will see a change in the near future: the same architecture on different process nodes for different performance tiers.
It would be good if only the top tier used the latest process to achieve maximum absolute perf. The people who buy those will 'gladly' pay the extra to be on the bleeding edge of tech, and will also pay for the extra work of designing the same architecture for two process nodes.
Mid and low tiers would use an older, more mature, higher-yielding process.
No need for xx30/xx50/xx60 to use 4 nm if cost-to-perf is what you're after and new wafer costs are skyrocketing.
7/6 nm is very much fine with me right now for any mid-level GPU, as long as it comes with enough memory.
Architecture improvements, process refinement and new software/tech (DLSS/DLAA, FSR, XeSS, etc.) will take care of the perf improvement.
Basically a one-process lag for the mid and low tiers, so when the NV 5xxx series comes out on a better 2/3 nm node for the 5080/5090, we'll have the 5030/5050/5060 on a 4/5 nm node.
And with that segmentation, we'll be one step closer to a 'GAMERS master race' who pay big, and everyone else who makes the economic decision and doesn't care about races.