
NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

VRAM is one of the most expensive parts of a GPU right now and, so far, the least important when it comes to performance. You can increase the VRAM, but then you'll have to increase the price considerably; just look at the 3090... I mean, these cards are for today. I doubt anybody can do anything about VRAM prices. Game devs really need to (and probably will) adapt to this reality, unless these prices change. I think you can find flaws with anything if you want.
PS5 devs have specifically talked about targeting 12GB as the dynamic VRAM allocation of next-gen titles, something made possible without silly loading times by the new hybrid storage system PS5 has.

10GB cards will be inadequate, soon, I think - and it has already been mentioned in this thread that HZD on PC requires over 8GB. The 3070 is incapable of max settings on games that existed before it's even released!

The next gen consoles will have a huge impact on game developers, because the hardware is so close to PC hardware this time around. Expect every dev to build their engines for the consoles first and the PCMR will get ports.

I was expecting the 3080 to be 16GB and 3070 to be 12GB, to be honest.....
 
PS5 devs have specifically talked about targeting 12GB as the dynamic VRAM allocation of next-gen titles, something made possible without silly loading times by the new hybrid storage system PS5 has.

10GB cards will be inadequate, soon, I think - and it has already been mentioned in this thread that HZD on PC requires over 8GB. The 3070 is incapable of max settings on games that existed before it's even released!

The next gen consoles will have a huge impact on game developers, because the hardware is so close to PC hardware this time around. Expect every dev to build their engines for the consoles first and the PCMR will get ports.
I see the memory allocation as Nvidia's move to keep a GPU release in a few months to a year relevant. They're already pushing the power envelope, so silicon and process optimization won't net much of a gain; more, higher-speed VRAM is what will sell cards in a year.
 
Well the 3070 is only 220W, that's less than the 5700XT, and that has a feature set that is, well... lacking.
 
PS5 devs have specifically talked about targeting 12GB as the dynamic VRAM allocation of next-gen titles, something made possible without silly loading times by the new hybrid storage system PS5 has.

10GB cards will be inadequate, soon, I think - and it has already been mentioned in this thread that HZD on PC requires over 8GB. The 3070 is incapable of max settings on games that existed before it's even released!

The next gen consoles will have a huge impact on game developers, because the hardware is so close to PC hardware this time around. Expect every dev to build their engines for the consoles first and the PCMR will get ports.

I was expecting the 3080 to be 16GB and 3070 to be 12GB, to be honest.....
The PS5 likely has the same split between the OS and software as the XSX, reserving 2.5GB for the system and leaving 13.5GB for software. This of course has to serve as both RAM and VRAM for the software, so games exceeding 10GB in VRAM alone are quite unlikely. Of course the PC typically supports higher detail levels, leading to higher VRAM usage, but new texture streaming techniques (and especially DirectStorage) are likely to dramatically reduce the amount of "let's keep it in VRAM in case we need it" data, which is the majority of current VRAM usage on both PCs and consoles. If developers start designing with NVMe as a baseline, VRAM utilization can drop very noticeably from this alone. Current games pre-load data based on HDD transfer rates and seek times, meaning data is loaded very aggressively and early, with the majority of it being flushed without ever seeing use.
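A rough back-of-envelope of what that split implies (the 2.5GB reservation is the XSX figure, and the CPU-side number below is purely an illustrative assumption):

```python
# Back-of-envelope console memory budget (assumed figures, not official PS5 numbers)
total_memory_gb = 16.0     # unified GDDR6 pool shared by CPU and GPU
os_reserved_gb = 2.5       # XSX-like system reservation (assumption)
available_gb = total_memory_gb - os_reserved_gb   # 13.5 GB left for the game

# That 13.5 GB has to cover both the CPU-side working set and the GPU-side
# ("VRAM") working set, so a title keeping e.g. 4 GB of CPU-side data would
# leave roughly 9.5 GB for graphics resources.
cpu_side_gb = 4.0          # illustrative guess, not a measured figure
gpu_side_gb = available_gb - cpu_side_gb
print(f"Game budget: {available_gb} GB total, ~{gpu_side_gb} GB usable as VRAM")
```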
 
Here it is, the GeForce RTX 3080, 10 GB GDDR6X, running at 19 Gbps, 238 tensor TFLOPs, 58 RT TFLOPs, 18 power phases.

Hmm, according to Tom's, 68 RT and 272 tensor



 
Anyone know how and when we can preorder FE cards?
 
The PS5 likely has the same split between the OS and software as the XSX, reserving 2.5GB for the system and leaving 13.5GB for software. This of course has to serve as both RAM and VRAM for the software, so games exceeding 10GB in VRAM alone are quite unlikely. Of course the PC typically supports higher detail levels, leading to higher VRAM usage, but new texture streaming techniques (and especially DirectStorage) are likely to dramatically reduce the amount of "let's keep it in VRAM in case we need it" data, which is the majority of current VRAM usage on both PCs and consoles. If developers start designing with NVMe as a baseline, VRAM utilization can drop very noticeably from this alone. Current games pre-load data based on HDD transfer rates and seek times, meaning data is loaded very aggressively and early, with the majority of it being flushed without ever seeing use.
If they go down that route then we will all need NVMe storage for our games libraries. More likely is that the devs can't assume people have 3GB/s library drives and will opt to continue using GPU VRAM as storage.

This is one instance where I'd like to be wrong but the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator to get the largest customer base possible.
 
Anyone know how and when we can preorder FE cards?
An NVIDIA employee stated in a Reddit Q&A yesterday that there will be no preorders. They just go on sale on the 17th, the 24th, and in October.
 
Anyone know how and when we can preorder FE cards?
You can't, but you can sign up for when orders go live. In my experience with Pascal and Turing launches they are out of stock before the email arrives, so it's not much use.

https://www.nvidia.com/en-gb/geforce/buy/ <- regional; you'll need to change en-gb to your country code.

Also, does anyone want to buy a 2080Ti for more than the cost of a 3080 and 3070 combined? Nvidia has you covered!

 
Hmm, according to Tom's, 68 RT and 272 tensor


Tom's is wrong. That is closer to the 3090's specs, though not quite.

If they go down that route then we will all need NVMe storage for our games libraries. More likely is that the devs can't assume people have 3GB/s library drives and will opt to continue using GPU VRAM as storage.

This is one instance where I'd like to be wrong but the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator to get the largest customer base possible.
Given that DirectStorage is on the XSX, the PS5 uses a similar system, and most high budget games are developed for consoles (too), I would be very surprised if this didn't happen. I guess they might make some sort of legacy mode, though it would be far less effort for developers to aim for console specs as a minimum. Though to be frank even aiming for SATA SSDs as a baseline would largely fix this, as seek times matter more for this than raw transfer rates.
 
Though to be frank even aiming for SATA SSDs as a baseline would largely fix this, as seek times matter more for this than raw transfer rates.
Yeah, like I said, it'd be nice if I'm wrong this time.
I don't fancy replacing my 2.5TB of library drives with NVMe.
 
You can't, but you can sign up for when orders go live. In my experience with Pascal and Turing launches they are out of stock before the email arrives, so it's not much use.

https://www.nvidia.com/en-gb/geforce/buy/ <- regional; you'll need to change en-gb to your country code.

Also, does anyone want to buy a 2080Ti for more than the cost of a 3080 and 3070 combined? Nvidia has you covered!


Did you check this?

o_O:roll:
 
If they go down that route then we will all need NVMe storage for our games libraries. More likely is that the devs can't assume people have 3GB/s library drives and will opt to continue using GPU VRAM as storage.

This is one instance where I'd like to be wrong but the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator to get the largest customer base possible.
Shadowlands already lists an SSD as a minimum requirement for the game. But going from that to 3GB/s is another step up; I would imagine they would rather ask for more RAM/VRAM.
 
Shadowlands already lists an SSD as a minimum requirement for the game. But going from that to 3GB/s is another step up; I would imagine they would rather ask for more RAM/VRAM.
But if the only option for more VRAM is a $1499 GPU... then buying a $100 or $200 SSD is far easier, no?
Yeah, like I said, it'd be nice if I'm wrong this time.
I don't fancy replacing my 2.5TB of library drives with NVMe.
It really wouldn't be hard to make a flexible solution for this - just make game platforms (GOG, Steam, Epic, etc.) identify what types of storage you have in your system (Windows already does this for SSDs and HDDs on a system level, but it should be trivial to differentiate between SATA and NVMe too), add a tag to games requiring fast storage so the launcher knows, and allow the platform to shuffle games between drives as needed (obviously with user configuration options like always keeping certain games on storage type X or Y, etc.).
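Something like this would be enough, as a minimal sketch; the Drive/Game types and the bus classification below are hypothetical stand-ins, not any real Steam/GOG/Epic API:

```python
# Sketch of launcher-side placement logic: keep games tagged as needing fast
# storage on NVMe, let everything else live wherever there is space.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Drive:
    mount: str
    bus: str          # "nvme", "sata_ssd" or "hdd" (hypothetical classification)
    free_gb: float

@dataclass
class Game:
    name: str
    size_gb: float
    needs_fast_storage: bool   # tag supplied by the developer/platform

def pick_drive(game: Game, drives: list[Drive]) -> Optional[Drive]:
    candidates = [d for d in drives if d.free_gb >= game.size_gb]
    if game.needs_fast_storage:
        candidates = [d for d in candidates if d.bus == "nvme"]
    # Fall back to the drive with the most free space; a real launcher would
    # also respect user overrides ("always keep game X on drive Y", etc.)
    return max(candidates, key=lambda d: d.free_gb, default=None)

drives = [Drive("C:", "nvme", 120.0), Drive("D:", "hdd", 900.0)]
print(pick_drive(Game("NextGenGame", 80.0, needs_fast_storage=True), drives))
```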
 
:O

Not sure who I should give thanks to... NVIDIA for the effort or AMD for the compo.
Intel.

This price/performance combo is a straight-up attempt to knock AMD out of the high-end GPU market. Something Nvidia haven't been able to try previously, as the monopolies commission would have come calling.

If Nvidia can KO AMD hard enough, they will end up in a two-horse race with Intel, who will be in AMD's old spot of having the second-best CPUs and GPUs.
 
But if the only option for more VRAM is a $1499 GPU... then buying a $100 or $200 SSD is far easier, no?
There'll be much more affordable (RDNA2 and then Nvidia) cards with more than 10GB in just a few months, I'm sure.

If Nvidia can KO AMD hard enough, they will end up in a two-horse race with Intel, who will be in AMD's old spot of having the second-best CPUs and GPUs.
I confess that the 2x shader operation trick reminds me exactly of what they pulled 20 years ago with the GeForce 2 GTS (the T stands for texel, meaning two textures applied per pixel per clock cycle) to eliminate 3DFX, and it worked just fine back then: so much brute force that 3DFX was lost and soon disappeared.
 
I've always been a little bit puzzled by the supposed 'price gouging' Nvidia is doing. Yes, they're leading and command a bit of a premium. But there's almost always something on offer for that premium. And then there's always a bunch of GPUs below it that do get some sort of advancement in perf/dollar and absolute performance.

I mean... the 970 was super competitive also on price. The 660 Ti was the same back during Kepler, and the 670 was seen as the 'poor man's 680' but performed virtually the same. The 1070 dropped the 980 Ti price point down by a few hundred... and it's happening again with the x70 today. The price of an x70 has risen... but so has the feature set and the performance gap to the bottom end.

Even with the mining craze the midrange was populated and the price, while inflated, was not quite as volatile as others.



'The' leaks? The 12-pin was the only truly accurate one, man (alright, and the pictures then). Nvidia played this well; you can rest assured all we got was carefully orchestrated, and that includes the teasing of a 12-pin. Marketing gets a head start with these leaks. We also heard $1400-2000 worth of GPU, which obviously makes the announcement of the actual pricing even stronger.

Come on, Red Gaming Tech and MLID were correct with:

- them using Samsung's inferior 8nm process node
- the cards drawing huge power as a result. 320 and 380W is not normal. One even claimed the exact power draw, which was on the money
- the 3080 being what they are pushing hard, as its performance is much, much closer to the 3090 than the price suggests. This is to combat Navi
- performance numbers and their relative gaps, which were all spot on

So we knew a load about this release, and Nvidia were definitely more leaky here than with Turing or Pascal.
 
So we knew a load about this release, and Nvidia were definitely more leaky here than with Turing or Pascal.
Tom from MLID said most of his RDNA2 info was coming from Nvidia sources too :kookoo:
 
Well the 3070 is only 220W, that's ...
That contradicts Huang's statement about 1.9x better perf/W (taking the 2080 Ti as a 270W card, the 3070 should have been ~145W).
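Spelled out, that objection is just this division, using the figures from the post:

```python
# The arithmetic behind the objection above
power_2080ti = 270               # W, the reference card used in the post
claimed_perf_per_watt_gain = 1.9 # the "1.9x perf/W" figure
# If the 3070 only matches 2080 Ti performance, equal performance at 1.9x
# efficiency would imply roughly:
expected_3070_power = power_2080ti / claimed_perf_per_watt_gain
print(f"{expected_3070_power:.0f} W")   # ~142 W, in the same ballpark as the ~145 W above
```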
 
Come on, Red Gaming Tech and MLID were correct with:

- them using Samsung's inferior 8nm process node
- the cards drawing huge power as a result. 320 and 380W is not normal. One even claimed the exact power draw, which was on the money
- the 3080 being what they are pushing hard, as its performance is much, much closer to the 3090 than the price suggests. This is to combat Navi
- performance numbers and their relative gaps, which were all spot on

So we knew a load about this release, and Nvidia were definitely more leaky here than with Turing or Pascal.

Quite true in fact, yeah. Still though, I'm pretty sure this was orchestrated leaking. The timing, the content... how do you sell 320W? By letting us ease into it... and then bringing a favorable price point.

These 'tubers are just free or nearly free marketing tools.
 
Soo, talking about transistor density, TSMC 7nm DUV vs Samsung 8nm:

5700XT, 250mm2, 10.3 billion => 41 million trans. per mm2
3080, 627mm2, 28 billion => 44.6 million tr. per mm2

Remind me, who was saying that Samsung 8nm is faux and just a marketing name for 10nm?
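The same division, spelled out with the figures quoted above:

```python
# Transistor density from the die sizes and transistor counts quoted above
dies = {
    "Navi 10 / 5700 XT (TSMC 7nm)":   (10.3e9, 250),   # transistors, die area in mm^2
    "GA102 / RTX 3080 (Samsung 8nm)": (28.0e9, 627),
}
for name, (transistors, area_mm2) in dies.items():
    print(f"{name}: {transistors / area_mm2 / 1e6:.1f} million transistors/mm^2")
# ~41.2 vs ~44.7 million transistors per mm^2
```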
 
Comparing the RTX 3070 to the 2080 Super, which are more similar in specs (8GB on a 256-bit bus, 16Gbps VRAM), suggests that Ampere is around 35-40% more efficient than Turing.
I guess we lose a bit of efficiency going with Samsung 8N, but the lower prices justified all that.
 
Comparing the RTX 3070 to the 2080 Super, which are more similar in specs (8GB on a 256-bit bus, 16Gbps VRAM), suggests that Ampere is around 35-40% more efficient than Turing.
I guess we lose a bit of efficiency going with Samsung 8N, but the lower prices justified all that.

More like the 3070 is similar to the 2070: ~450mm2, 256-bit. Nvidia managed to squeeze in 6144 CUDA cores, or 2.66x more compared to 2304, while bumping the memory speed only to 16Gbps. And the average of 1.14 and 2.66 is 1.9x, so it is 90% more efficient on average; of course, where pure computational power comes into play it is 2.66x. Full die to full die, TU106 vs GA104 (3070 Ti).
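Spelling that averaging out (ratios taken from the post; whether a simple mean of shader count and memory speed is the right way to combine them is debatable):

```python
shader_ratio = 6144 / 2304   # full GA104 vs full TU106 CUDA cores, ~2.67x
memory_ratio = 16 / 14       # GDDR6 speed bump, ~1.14x
print((shader_ratio + memory_ratio) / 2)   # ~1.9x
```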
 
You trust (and misread) the Steam hardware survey too much.
For actual sales, check reports from actual shops, e.g. Mindfactory.
Steam has a major market share among PC gamers, and is way more representative than a single shop. The only thing missing from the Steam hardware survey is people who buy graphics cards and don't game.

PS5 devs have specifically talked about targeting 12GB as the dynamic VRAM allocation of next-gen titles, something made possible without silly loading times by the new hybrid storage system PS5 has.

10GB cards will be inadequate, soon, I think - and it has already been mentioned in this thread that HZD on PC requires over 8GB. The 3070 is incapable of max settings on games that existed before it's even released!
Dynamic, as in the game is able to decide how much is system RAM and how much is VRAM.

By the time 8 GB is inadequate for gaming, the performance of the RTX 3070 will be inadequate too, and you will be buying an "RTX 6070"…

The next gen consoles will have a huge impact on game developers, because the hardware is so close to PC hardware this time around. Expect every dev to build their engines for the consoles first and the PCMR will get ports.
I think you are putting too much faith in game developers. Most of them just take an off-the-shelf game engine, load in some assets, do some scripting and call it a game. Most game studios don't write a single line of low-level engine code, and the extent of their "optimizations" is limited to adjusting assets to reach a desired frame rate.

If they go down that route then we will all need NVMe storage for our games libraries. More likely is that the devs can't assume people have 3GB/s library drives and will opt to continue using GPU VRAM as storage.
Not really. The difference between a "standard" 500 MB/s SSD and a 3 GB/s SSD will be loading times. For resource streaming, 500 MB/s is plenty.
Also, don't forget that these "cheap" NVMe QLC SSDs can't deliver 3 GB/s sustained, so if a game truly depended on this, you would need an SLC SSD or Optane.
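As a rough illustration of why 500 MB/s goes a long way for streaming (the frame rate and throughput here are illustrative assumptions, not from any particular engine):

```python
# Rough per-frame streaming budget at a given drive throughput
throughput_mb_s = 500
fps = 60
per_frame_mb = throughput_mb_s / fps
print(f"~{per_frame_mb:.1f} MB of fresh data per frame at {fps} fps")   # ~8.3 MB/frame
```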

This is one instance where I'd like to be wrong but the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator to get the largest customer base possible.
Games in general aren't particularly good at utilizing the hardware we currently have, and the trend in game development has clearly been toward less performance optimization, so what makes you think this will change all of a sudden?
 