
NVIDIA GeForce RTX 4080 Founders Edition PCB Pictured, Revealing AD103 Silicon

Consumers who purchase these make this possible. Vote with your wallet, guys.

Never paid more than €300 for a card myself :laugh:
Wish that were possible; in the Netherlands that means a 1030 or something. I may as well ditch my PC then.

And no, I will never buy second-hand.
 
$1,200 and no protective frame around the GPU die? WTH...
 
"Finally, we have the Vulkan benchmarks where the NVIDIA GeForce RTX 4080 graphics card ends up being only 5.5% faster than the RTX 3090 Ti"

I'm pretty sure we'll see games where the RTX 4080 16 GB actually loses to the RTX 3090 Ti due to its much lower memory bandwidth!
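For what it's worth, here's a quick sketch of the raw numbers behind that claim (memory specs as listed in the TPU database; whether Ada's much larger L2 cache makes up the gap in a given game is exactly the open question):

```python
# Peak DRAM bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "RTX 3090 Ti (384-bit GDDR6X @ 21.0 Gbps)": (384, 21.0),
    "RTX 4080 16 GB (256-bit GDDR6X @ 22.4 Gbps)": (256, 22.4),
}
for name, (bus_bits, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(bus_bits, rate):.1f} GB/s")

# RTX 3090 Ti: 1008.0 GB/s
# RTX 4080:     716.8 GB/s  (~29% less raw bandwidth)
```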
 
There was a time, way, way back, when you had 16-bit vs. 32-bit color charts (it's a fun, nostalgic read). Playing a game in 16-bit was faster, until the point when new tech came along and 32-bit became faster than 16-bit. That was the tipping point, and 16-bit died that moment.

Some day in the future, at least 5 years if not 10 from now, a time will come when RT on will be faster than RT off. Then I will use it. Until then, leave it off. It isn't worth the performance hit.

Anyway, and to the point: the 4080, like the 4090, is a nice but pointless GPU outside of professional CUDA use at its current price.
For gaming, wait for AMD's offering.
 
Wish that were possible; in the Netherlands that means a 1030 or something. I may as well ditch my PC then.

And no, I will never buy second-hand.
I've been a PC hobbyist for almost 20 years, and I may have bought about 5 cards new... :laugh:
 
It won't be a problem on this card though.
Fire hazard aside, it is still a problem for users of average-sized PC cases trying to close their side panels.
 
The first benchmark results are out. :wtf: Will it be enough to battle the 7900 XTX?

[attached: Geekbench4080.PNG — leaked Geekbench scores]

[attached: 3DMark4080.PNG — leaked 3DMark Time Spy scores]


https://videocardz.com/newz/nvidia-...ed-in-geekbench-30-to-37-faster-than-rtx-3080
https://videocardz.com/newz/nvidia-geforce-rtx-4080-3dmark-timespy-scores-have-been-leaked-as-well
 
There was a time, way, way back, when you had 16-bit vs. 32-bit color charts (it's a fun, nostalgic read). Playing a game in 16-bit was faster, until the point when new tech came along and 32-bit became faster than 16-bit. That was the tipping point, and 16-bit died that moment.

Some day in the future, at least 5 years if not 10 from now, a time will come when RT on will be faster than RT off. Then I will use it. Until then, leave it off. It isn't worth the performance hit.

Anyway, and to the point: the 4080, like the 4090, is a nice but pointless GPU outside of professional CUDA use at its current price.
For gaming, wait for AMD's offering.
Minor exception: in that 16/32-bit age we still had many node shrinks ahead of us.

But today? The whole thing is already escalating just to meet gen-to-gen performance increases... The next 5-10 years had better bring a game changer in that sense, or RT is dead or of too little relevance. Besides, it's not 'raster OR RT', it's 'AND', another way this differs from the 16-to-32-bit transition. So devices will still need raster performance...
 
All these 4080s will be OOS in 5 minutes on launch day. Plenty of gamers with more money than sense out there.
Don't get me wrong, I do believe these cards will sell out in record time, but not bought by consumers. I believe they will be bought up by scalpers right away, and the cards' prices will rise. This will make it seem like the card has sold out and is a hit for NVIDIA, but in actuality it's scalpers trying to make a profit. To be fair, NVIDIA does not care whether scalpers or gamers are buying these cards; they just want them sold out, like the 4090. You can buy plenty of 4090s now; they are just all scalped.
 
Wish that were possible; in the Netherlands that means a 1030 or something. I may as well ditch my PC then.

And no, I will never buy second-hand.
And are now probably wondering whether they should buy a two-year-old Ampere card at its launch MSRP
Here in Portugal there are a lot of 1660 Supers at about €300. Even though they're better than a 1030, it's insane to think a 3-year-old, low-to-midrange card might still cost more than its launch MSRP...
[attached: 1667945845971.png — screenshot of price-comparison listings]

Oh, and BTW, the ones marked "Melhor Preço" ("Best Price") show the lowest price recorded (for a specific SKU) on this price-comparison website, which I can tell you I've been using to compare this kind of stuff for well over 3 years...
 
There are tons of people who gave the RTX 20 (Turing) series a miss since it didn't bring any price/performance increase over the GTX 10 (Pascal) series, and then they couldn't get RTX 30 (Ampere) cards because of the crypto idiocy.

And are now probably wondering whether they should buy a two-year-old Ampere card at its launch MSRP, or wait for the full RTX 40 (Ada) release and pay even more money for the same performance...
My 2080 Super kicks the crap out of my 1070; it was a plenty big enough upgrade for me for 1440p gaming, and I got it for less than half the price 3070s were going for at the time. Now I will upgrade my 1080 Ti to a 7900 XT(X) and give Lovelace a wide berth.
 
People need to keep in mind just how absurdly expensive a 4nm wafer is.

If AD103 is 372 mm² like the TPU database says, there are ~140 dies per 300 mm wafer (using an online wafer layout calculator). At $18,000 per 5 nm/4 nm wafer, that's about $130 just for the silicon (even assuming no defective dies). Then you have to include the cost of assembling the chip package, the cost of the memory chips and everything else on the PCB, and the cost of mounting it all to the PCB.

In the end, there's no way a card based on an AD103 chip costs less than $250 just to manufacture, let alone paying for R&D and marketing. $1,200 may be a bit much to ask for a 4080, but unless TSMC drops its wafer prices, these simple calculations make me conclude that AD103-based cards can't be priced below $650 while still selling at a profit. AMD pricing their new GPU at $1,000 is about right to keep the same profit margins as in the past.

Perhaps this demonstrates that there needs to be a new paradigm of rebranding previous-generation high-end GPUs to sell to the mid-range and low end. Making mid-range and low-end GPUs on the latest process node doesn't seem to make financial sense anymore.
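A minimal sketch of that arithmetic (taking the 372 mm² die size and $18,000 wafer price above as given; the textbook dies-per-wafer approximation ignores edge exclusion and scribe lanes, which is why it lands a bit above the ~140 from the online calculator):

```python
import math

WAFER_DIAMETER_MM = 300   # standard 300 mm wafer
DIE_AREA_MM2 = 372        # AD103 die size per the TPU database
WAFER_COST_USD = 18_000   # assumed 5 nm / 4 nm wafer price

# Classic approximation: wafer area divided by die area, minus a
# correction term for the partial dies lost along the wafer edge.
radius = WAFER_DIAMETER_MM / 2
dies_per_wafer = (math.pi * radius ** 2 / DIE_AREA_MM2
                  - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * DIE_AREA_MM2))

print(f"Dies per wafer (approx.): {dies_per_wafer:.0f}")                  # ~155
print(f"Silicon cost per die at 140 dies: ${WAFER_COST_USD / 140:.0f}")   # ~$129
```

Either way, the silicon alone lands in the $115-130 per-die range before packaging, memory, and board costs, which is the point being made here.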
 
Indeed.
I think we will see a change in the near future: the same architecture on different process nodes for different performance tiers.
It would be good if only the top tier used the latest process to achieve maximum absolute performance. The people who buy those cards will 'gladly' pay extra to be on the bleeding edge of tech, and will also pay for the extra work of designing the same architecture for two process nodes.
The mid and low tiers would use an older, more mature, higher-yield process.
There's no need for the xx30/xx50/xx60 to use 4 nm if cost-to-performance is what you're after and new wafer costs are skyrocketing.
7/6 nm is very much fine with me right now for any mid-level GPU, as long as it comes with enough memory.
Architecture improvements, process refinement, and new software tech (DLSS/DLAA, FSR, XeSS, etc.) will take care of the performance improvement.
Basically a one-process lag for the mid and low tiers: when the NV 5xxx series comes out on a better 2/3 nm node for the 5080/5090, we will have the 5030/5050/5060 on a 4/5 nm node.

And with that segmentation, we will be one step closer to a 'GAMERS master race' who pay big, and all the others who make the economical decision and don't care about races.
 
I wholeheartedly support this. With a mid-range CPU costing $300, a mid-range GPU shouldn't be $700-800.
 