
NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

Contradicts Huang's statements about 1.9x better perf/W (taking the 2080 Ti as a 270 W card, the 3070 should have been 145 W)
That's closer to the best-case scenario, for one game. You don't seriously believe that the card will do (at least) 1.9x better perf/W across the board, under all workloads, do you?
[graph: NVIDIA's Ampere vs. Turing performance/power curves]
 
Yeah the 1.9X perf/watt figure is pretty bullshit.
The top of the Turing curve is its worst perf/W, while the Ampere curve is at its most efficient.
Just take the FPS values at 120 W: Ampere at ~58 FPS vs. Turing at ~45 FPS, which is around a 30% improvement.
I could do the same with the 2080 Ti vs. the 1080 Ti: lower the 2080 Ti's TDP to 140 W and I still get 1080 Ti performance. Does that mean Turing is 1.8x the perf/W of Pascal? Heck no.

Though looking at the Ampere curve, we can see why the new cards' TDPs are much higher: perf just scales so well with power.
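For what it's worth, here's a minimal sketch of that iso-power comparison; the 58 and 45 FPS values are just eyeballed off the slide, so treat them as assumptions:

```python
# Rough iso-power comparison: read an FPS value off both curves at the same
# board power and take the ratio. Inputs are eyeballed from NVIDIA's slide,
# not measurements.
def perf_per_watt_gain(fps_new, fps_old, power_w):
    """Relative perf/W gain when both cards are capped at the same power."""
    return (fps_new / power_w) / (fps_old / power_w) - 1.0

# ~58 FPS (Ampere) vs. ~45 FPS (Turing) at a 120 W cap
print(f"{perf_per_watt_gain(58, 45, 120):.0%}")   # ~29%, i.e. roughly 30%
```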
 
There'll be much more affordable cards (RDNA2 and then Nvidia) with more than 10GB in just a few months, I'm sure.
Looking at the tiny gap between the 3080 and the 3090, how do you imagine that? Sure, there might be 12 and 16GB RDNA2 cards, but given Nvidia's massive marketshare advantage, developers won't tune their games for those primarily. Beyond that, "much more affordable"? How? Best case scenario we get a 3080 Ti with the same core count as the 3090 but 12GB of VRAM and lower clocks at $999. Cheaper than that isn't happening, and even that would be borderline miraculous.
Come on, Red Gaming Tech and MLID were correct about:

- them using Samsung's inferior 8nm process node
- The cards will draw huge power as a result; 320 and 380 W is not normal. One even claimed the exact power draw, which was on the money
- The 3080 will be what they push hard, as its performance is much, much closer to the 3090 than the price suggests. This is to combat Navi
- Performance numbers and their relative gaps were all spot on

So we knew a load about this release, and Nvidia were definitely more leaky here than with Turing or Pascal.
380W? The 3090 is 350W. Other than that, this seems to have been accurate.
Contradicts Huang's statements about 1.9x better perf/W (taking the 2080 Ti as a 270 W card, the 3070 should have been 145 W)
Perf/W isn't a linear function, and Nvidia has obviously chosen to push clocks (and thus power, losing efficiency as they move up the DVFS curve) to increase the absolute performance to gain a better competitive position in anticipation of RDNA 2. All that graph says is that "Ampere" (likely the GA104 die, unknown core count and memory configuration) at ~140W could match the 2080 Ti/TU102 at ~270W. Which might very well be true, but we'll never know outside of people undervolting and underclocking their GPUs, as Nvidia is never going to release a GPU based on this chip at that power level (unless they go entirely insane on mobile, I guess).
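To illustrate the DVFS point, here's a toy model; the voltage/frequency coefficients are made-up placeholders rather than Ampere's real V/f curve, but they show why power grows far faster than performance as clocks climb:

```python
# Toy DVFS model: dynamic power scales roughly with C * V^2 * f, and near the
# top of the curve voltage has to rise with frequency, so power grows much
# faster than performance. All coefficients below are illustrative assumptions.
def dynamic_power(freq_ghz, v_floor=0.55, v_per_ghz=0.45, capacitance=1.0):
    voltage = v_floor + v_per_ghz * freq_ghz      # crude linear V/f relation
    return capacitance * voltage ** 2 * freq_ghz  # P ~ C * V^2 * f

for f in (1.2, 1.6, 2.0):
    p = dynamic_power(f)
    print(f"{f:.1f} GHz -> relative power {p:.2f}, relative perf/W {f / p:.2f}")
# perf/W drops from ~0.84 to ~0.48 as the clock goes from 1.2 to 2.0 GHz
```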
More like the 3070 is similar to the 2070: ~450 mm², 256-bit. So Nvidia managed to squeeze in 6144 CUDA cores, or 2.66x more compared to 2304, while bumping the memory speed only to 16 Gbps. The average of 1.14x and 2.66x is 1.9x, so it is 90% more efficient on average; of course, where pure compute power comes into play it is 2.66x, full die to full die, TU106 vs. GA104 (3070 Ti).
You can't compare Turing Cuda cores to Ampere Cuda cores as if they are the same, as there is obviously a fundamental shift in the architecture with Nvidia doubling the ALUs overall. Also, calculating efficiency from cores/area is ... absurd. Efficiency for a GPU is a function of performance over power consumption, and neither of the two can be directly calculated from on-paper specs, especially given a new architecture. Given the doubling of ALUs with seemingly minor changes elsewhere in the rasterization part of the GPU (it's not like everything else has doubled, after all), it's highly likely that perf/Tflop is going to drop significantly for this generation. After all, they are presenting the ~20Tflop 3070 as a bit faster than the ~13Tflop 2080 Ti, not 50% faster.
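A quick back-of-the-envelope version of that last point; the 1.05x "a bit faster" figure is an assumption for illustration, not a measured number:

```python
# If a ~20 TFLOP card is only ~5% faster than a ~13 TFLOP card, the implied
# performance per paper TFLOP has dropped substantially.
def perf_per_tflop_ratio(rel_perf, tflops_new, tflops_old):
    """New card's perf/TFLOP relative to the old card's."""
    return (rel_perf / tflops_new) / (1.0 / tflops_old)

print(f"{perf_per_tflop_ratio(1.05, 20.0, 13.0):.2f}x")  # ~0.68x Turing's perf/TFLOP
```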
 

Update 16:23 UTC: Cyberpunk 2077 is a big part of the next-generation push. NVIDIA is banking extensively on the game to highlight the advantages of Ampere. The 200 GB game could absorb gamers for weeks or months on end.
Why is this screenshot so blurred? I hope the game won't be like that, lol.
 
Looking at the tiny gap between the 3080 and the 3090, how do you imagine that? Sure, there might be 12 and 16GB RDNA2 cards, but given Nvidia's massive marketshare advantage, developers won't tune their games for those primarily. Beyond that, "much more affordable"? How? Best case scenario we get a 3080 Ti with the same core count as the 3090 but 12GB of VRAM and lower clocks at $999. Cheaper than that isn't happening, and even that would be borderline miraculous.
An $850 3080 20GB? That would be much more affordable than $1500, right?
 
I'll wait for multiple reviews, but I'm eyeing the 3080. Hopefully it can run MSFS2020 at 60 fps.
 
I'll wait for multiple reviews, but I'm eyeing the 3080. Hopefully it can run MSFS2020 at 60 fps.

Isn't 30 fps all that's needed for MSFS2020? That's pretty much what all the reviewers are saying.
 
Isn't 30 fps all that's needed for MSFS2020? That's pretty much what all the reviewers are saying.
Please no blasphemy. ;)
 
The numbers are certainly real ... they just don't make this stuff up ... but like every other benchmark, they bear little resemblance to everyday usage. No different from RAID benchmarks or multicore benchmarks ... they only matter when you have an application that can use them, and 98.5% of folks don't. What we will see in gaming will be in the 20% range from the 2080 to the 3080 ... as I recall, that was what was shown for the marble demo.

After the price bump from the mining craze's rise and fall, the warehouses full of last-gen cards, and, in the US, the import tariffs, it was nice to see the naysayers were wrong about the pricing. In 2017, Hot Hardware did a graph of prices for the top gaming card (which leaves out the Titan and 3090 class), and from the year 2000 to 2017 the average price of the top card hovered around $700 in 2017 US dollars. That makes the 3080 a bargain, both for the MSRP and for the fact that prices are now inflated because of pandemic and tariff impacts.
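If anyone wants to sanity-check the inflation part, here's a rough sketch; the CPI numbers are approximate annual averages and should be treated as assumptions:

```python
# Approximate US CPI-U annual averages, used here only for illustration.
CPI = {2017: 245.1, 2020: 258.8}

def adjust(price, from_year, to_year):
    """Convert a price between years using the CPI ratio."""
    return price * CPI[to_year] / CPI[from_year]

print(f"${adjust(700, 2017, 2020):.0f}")  # the ~$700 2017 average is roughly $739 in 2020 dollars
```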
 
An $850 3080 20GB? That would be much more affordable than $1500, right?
That configuration is never going to happen. Not only is the BOM cost for 10GB of GDDR6X likely more than $150, they would need to design a new PCB for this SKU - I sincerely doubt the 3080's PCB has double sided VRAM pads given its pricing. The amount is massive overkill too, placing the card in a segment where datacenters and others running massive datasets would gobble them up before gamers could ever get their hands on them. That's who the 3090 is meant to attract, after all, so why undercut it with the segment most eager to buy it?
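The PCB point comes down to simple channel math. A minimal sketch, assuming 32-bit GDDR6X channels and the 8 Gb (1 GB) modules the launch 3080 uses; higher-density chips would be the other route, as the post below notes:

```python
# GDDR6X attaches one chip per 32-bit channel, so a 320-bit bus means 10 chips
# per board side. Capacities below follow from chip density and side count.
BUS_WIDTH_BITS = 320
CHANNEL_BITS = 32
chips_per_side = BUS_WIDTH_BITS // CHANNEL_BITS   # 10

for density_gb, sides, note in ((1, 1, "launch 3080"),
                                (1, 2, "clamshell, needs a new PCB"),
                                (2, 1, "needs denser GDDR6X chips")):
    total = chips_per_side * sides * density_gb
    print(f"{density_gb} GB chips x {sides} side(s) = {total} GB ({note})")
```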
 
Well, if they get higher-density GDDR6X in there, all of a sudden it isn't so impossible anymore. But then again, when they have the 3090 to sell, why would they willingly put their own foot on the axe?

Though I do expect the Supes to debut some time after, perhaps next year?
[image: The Boys' superheroes and their Compound V roid rage]
No not them!
 
That configuration is never going to happen. Not only is the BOM cost for 10GB of GDDR6X likely more than $150, they would need to design a new PCB for this SKU - I sincerely doubt the 3080's PCB has double sided VRAM pads given its pricing. The amount is massive overkill too, placing the card in a segment where datacenters and others running massive datasets would gobble them up before gamers could ever get their hands on them. That's who the 3090 is meant to attract, after all, so why undercut it with the segment most eager to buy it?
They won't undercut it if they don't need to, that is, if AMD doesn't come out with a 16 GB (or more) card beating it or thereabouts. They will do it as late as possible, of course.
As for the datacenter crowd gobbling them up, why would that be a problem, as long as they sell well? The availability of the 3090 will be small in any case.
 
According to the RTX 2080 review on Guru3D, it achieves 45 fps average in Shadow of the Tomb Raider at 4K with the same settings Digital Foundry used in their video. But they achieve around 60 fps, which is only 33% more, while they claim the average fps is 80% higher. You can see the fps counter in the top-left corner in Tomb Raider. Was Vsync on in the captured footage? If there was an 80% increase in performance, the average fps should be around 80. The RTX 2080 Ti's fps on the same CPU DF was using should be over 60 in Shadow of the Tomb Raider, so the RTX 3080 is just around 30% faster than the 2080 Ti. So the new top-of-the-line Nvidia gaming GPU is just 30% faster than the previous top-of-the-line GPU. When you look at it like that, I really don't see any special jump in performance. The RTX 3090 is for professionals and I don't even count it in at that price.
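A quick sanity check of those numbers (the inputs are the figures from this post, not independent measurements):

```python
# Compare the uplift implied by the on-screen counter with the claimed +80%.
def uplift(fps_new, fps_old):
    return fps_new / fps_old - 1.0

guru3d_2080 = 45   # Guru3D's 2080 average in SotTR 4K, per the post
df_3080 = 60       # rough counter reading in the DF video
print(f"observed: {uplift(df_3080, guru3d_2080):.0%}")           # ~33%
print(f"claimed +80% would imply ~{guru3d_2080 * 1.8:.0f} FPS")  # ~81 FPS
```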
 
Was Vsync on in the captured footage?
At some point in the beginning they mentioned VSync was on, IIRC.

Anyways, let's wait for the marketing BS to settle and for the real numbers to show up, recorded on multiple resolutions, from serious test sites...
 
They won't undercut it if they don't need to, that is, if AMD doesn't come out with a 16 GB (or more) card beating it or thereabouts. They will do it as late as possible, of course.
As for the datacenter crowd gobbling them up, why would that be a problem, as long as they sell well? The availability of the 3090 will be small in any case.
You're shifting the perspective of your arguments as you go here - that's generally not a very good way of making a point, and serves to show that the ground on which you are building your arguments is shaky. In one sentence you're arguing from a perspective of "gamers will need more than 10GB of VRAM, Nvidia must provide", and in the next you're arguing from a perspective of "it doesn't matter who buys the GPUs as long as they sell". See the issue? The latter perspective is in direct conflict with the former. Are you arguing from the perspective of the "common good" of gamers, or are you arguing from the perspective of Nvidia continuing to be a successful business?

As for what you are saying: for compute customers, any sale of a potential 20GB 3080 is a lost sale of a 3090 - those customers can always afford the next tier up if it makes sense in terms of features or performance. The compute performance difference between the 3080 and 3090 is small enough that 100% of those customers would then instead buy the 3080 20GB unless they are in the tiny niche where another 4GB actually makes a difference. For the rest, they'll just buy two 3080 20GBs. And again, 20GB of VRAM with no additional bandwidth or compute performance makes pretty much zero sense. Games won't exceed 10GB of VRAM any time soon unless their developers either don't care or are incompetent.
 
A $699 3080, yes.
Some of us will also need a $150 PSU upgrade on top of that.
320 W without overclocking.

The 3070 seems way more reasonable.

Let's see what Big Navi has to offer before we upgrade.
 
You're shifting the perspective of your arguments as you go here - that's generally not a very good way of making a point, and serves to show that the ground on which you are building your arguments is shaky. In one sentence you're arguing from a perspective of "gamers will need more than 10GB of VRAM, Nvidia must provide", and in the next you're arguing from a perspective of "it doesn't matter who buys the GPUs as long as they sell". See the issue? The latter perspective is in direct conflict with the former. Are you arguing from the perspective of the "common good" of gamers, or are you arguing from the perspective of Nvidia continuing to be a successful business?
I'm not shifting perspectives, you're probably overanalysing my (maybe too short) messages.

The whole point should be considered only from the viewpoint of the company, in a more or less competitive market.
I'm pretty certain that in a year or two there will be more games requiring more than 10 GB of VRAM in certain situations, but I think if AMD comes out with a competitive option against the 3080 (with a more reasonable amount of memory), reviews will point out this problem, let's say, in the next 5 months. If this happens, Nvidia will have to react to remain competitive (they are very good at this).
As for what you are saying: for compute customers, any sale of a potential 20GB 3080 is a lost sale of a 3090 - those customers can always afford the next tier up if it makes sense in terms of features or performance. The compute performance difference between the 3080 and 3090 is small enough that 100% of those customers would then instead buy the 3080 20GB unless they are in the tiny niche where another 4GB actually makes a difference. For the rest, they'll just buy two 3080 20GBs.
No SLI on the 3080? Anyway, I will repeat myself: Nvidia will do this only if they have to, i.e. if AMD beats the 3080.
And again, 20GB of VRAM with no additional bandwidth or compute performance makes pretty much zero sense.
Do you mean to say that the increase from 8 to 10 GB is proportional to the compute and bandwidth gap between the 2080 and the 3080? It's rather obvious that it's not. On the contrary, if you look at the proportions, the 3080 is the outlier of the lineup: it has about twice the memory bandwidth of the 3070 but only 25% more VRAM (see the quick check after this post).
Games won't exceed 10GB of VRAM any time soon unless their developers either don't care or are incompetent.
Older games have textures optimized for viewing at 1080p. This gen is about 4K gaming being really possible, so we'll see more detailed textures.
They will be leveraged on the consoles via streaming from the SSD, and on PCs via increasing RAM/VRAM usage.
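Here's the quick check mentioned above; the 3080's 320-bit / 19 Gbps GDDR6X is from the announcement, while the 3070's 256-bit / 14 Gbps GDDR6 is the commonly cited figure at the time and should be treated as an assumption:

```python
# Peak bandwidth = (bus width in bytes) * (data rate per pin in Gbps).
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

bw_3080 = bandwidth_gbs(320, 19)   # 760 GB/s
bw_3070 = bandwidth_gbs(256, 14)   # 448 GB/s (assumed spec)
print(f"bandwidth: {bw_3080 / bw_3070:.2f}x, VRAM: {10 / 8:.2f}x")
# ~1.7x the bandwidth but only 1.25x the VRAM -- the capacities clearly don't
# scale with the rest of the card.
```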
 
Single card, 3 slots, ouch. I know you guys say video games today do not use SLI. Well, I beg to differ. I went to 4K gaming a couple of years ago and I play a few games like FFXIV and Doom and a few others. I bought a single 1070 Ti, and in FFXIV at 4K I pushed about 50 to 60 fps, but in SLI I pushed over 120 fps at 4K. So the game says it does not support it, but it does; SLI is always on. I have met many people who bought SLI and did not know how to configure it, so they never saw the benefit from it. Also, in SLI the game ran smoother and cleaner. Now maybe it cannot address all the memory, but it can still use both GPUs, which increases the smoothness.

When I ran 1080p gaming I ran two 660 Tis in SLI and everything was sweet. But at 4K? Nope, they could not handle the load.
 
Well, from 16 pages of posts and all the 30-series announcements, 2020 is shaping up to be the year of the Desktop Computer.
 