
NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

SLI is dead; notice how the list gets smaller and smaller as we get to the current year.

 
A single card taking 3 slots, ouch. I know you guys say video games today do not use SLI, but I beg to differ. I went to 4K gaming a couple of years ago and I play a few games like FFXIV and Doom, among others. With a single 1070 Ti in FFXIV at 4K I pushed about 50 to 60 fps, but in SLI I pushed over 120 fps at 4K, so the game may say it does not support it, but it does. SLI is always on. I have met many people who bought SLI setups and did not know how to configure them, so they never saw the benefit. Also, in SLI the game ran smoother and cleaner. Now maybe it cannot address all the memory, but it can still use both GPUs, which increases the smoothness.

When I was gaming at 1080p I ran two 660 Tis in SLI and everything was sweet. But at 4K? Nope, they could not handle the load.
Yes, you increased smoothness by increasing fps, but you also increased the latency of every one of those frames by a factor of 2 or even 3. Fermi was the last of them; since then it's all been dead :x
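A rough back-of-the-envelope model of that latency point (illustrative numbers only, assuming ideal 2-way alternate-frame rendering, not measurements):

# Ideal 2-way alternate-frame rendering (AFR): frame rate doubles, but each
# frame still spends one full GPU render time in flight, so at the displayed
# fps the per-frame latency is roughly 2x what a single card hitting that
# same fps on its own would give.

def single_gpu(render_ms):
    return 1000.0 / render_ms, render_ms        # fps, per-frame latency (ms)

def afr_two_way(render_ms):
    return 2 * 1000.0 / render_ms, render_ms    # fps doubles, latency does not shrink

fps1, lat1 = single_gpu(16.7)       # one card: ~60 fps, ~17 ms
fps2, lat2 = afr_two_way(16.7)      # two cards: ~120 fps, still ~17 ms
print(f"1x GPU : {fps1:.0f} fps, {lat1:.1f} ms per frame")
print(f"2x AFR : {fps2:.0f} fps, {lat2:.1f} ms per frame (a real 120 fps card would be ~8.3 ms)")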
 
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the 4070 will be $600.
 
I think you are putting too much faith in game developers. Most of them just take an off-the-shelf game engine, load in some assets, do some scripting and call it a game. Most game studios don't do a single line of low-level engine code, and the extent of their "optimizations" is limited to adjusting assets to reach a desired frame rate.
I graduated alongside, lived with, and stay in touch with multiple game developers from Campo Santo (now Valve), Splash Damage, Jagex, Blizzard, King, Ubisoft, and by proxy EA, and Activision; I think they'd all be insulted by your statement. More importantly, even if there is a grain of truth to what you say, the "off-the-shelf engines" have been slowly but surely migrating to console-optimised engines over the last few years.

Not really. The difference between a "standard" 500 MB/s SSD and a 3 GB/s SSD will be loading times. For resource streaming, 500 MB/s is plenty.
Also, don't forget that these "cheap" NVMe QLC SSDs can't deliver 3 GB/s sustained, so if a game truly depended on this, you would need an SLC SSD or Optane.
You're overanalyzing this. I said 3GB/s simply because that's a commonly-accepted read speed of a typical NVMe drive. Also, even the worst PCIe 3.0 x4 drives read at about 3GB/s sustained, no matter whether they're QLC or MLC. The performance differences between QLC and MLC are only really apparent in sustained write speeds.
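For what it's worth, here's the loading-time and streaming math behind that (a hypothetical 5 GB level; decompression and CPU overhead ignored, so treat these as upper bounds on the gain):

# Raw read time for a hypothetical 5 GB level at different sequential speeds.
level_gb = 5.0
for name, gbps in (("SATA-class SSD", 0.5), ("typical NVMe", 3.0)):
    print(f"{name:15s}: {level_gb / gbps:4.1f} s to read {level_gb:.0f} GB")

# Streaming case: at 60 fps there are ~16.7 ms per frame; even 0.5 GB/s moves
# roughly 8 MB of assets per frame, which is plenty for background streaming.
print(f"0.5 GB/s streams ~{0.5 * 1000 / 60:.0f} MB per frame at 60 fps")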

Games in general aren't particularly good at utilizing the hardware we currently have, and the trend in game development has clearly been toward less performance optimization, so what makes you think this will change all of a sudden?
My point was exactly that. Perhaps English isn't your first language but when I said "the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator" - that was me saying that it ISN'T going to change suddenly, and it's been like this for 25 years without changing. That's exactly why games aren't particularly good at utilising the hardware we have currently, because the devs need to make sure it'll run on a dual-core with 4GB RAM and a 2GB graphics card from 9 years ago.
 
The only thing missing from the Steam hardware survey is people who buy graphics cards and don't game.
I used to buy graphics cards, game and not be on Steam.
Pretty sure most people who are into Blizzard games do not use steam.
That doesn't explain why WoW players would necessarily skip NV... and that was back when AMD still bothered to explain what was going on.
Their main argument was their absence from the internet cafe business, which was skewing the figures a lot (each user that logged in was counted separately).
Steam fixed it somewhat, but not entirely to AMD's liking (in AMD's words); they brushed it off, saying Valve doesn't really care about how representative that survey is. (yikes)

Mindfactory is a major PC parts online shop in Germany, and it shows the buying habits of the DIY demographic there. I don't see why that is not relevant.

All that graph says is that "Ampere" (likely the GA104 die, unknown core count and memory configuration) at ~140W could match the 2080 Ti/TU102 at ~270W. Which might very well be true, but we'll never know outside of people undervolting and underclocking their GPUs, as Nvidia is never going to release a GPU based on this chip at that power level (unless they go entirely insane on mobile, I guess).
This makes the statement fairly useless, whereas AMD's perf/W claim (+50% in RDNA 2) reflects practical reality, at least in TPU reviews.
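Putting rough numbers on that, using the ~140 W / ~270 W point from the slide and, for a shipping card, the 3080's announced 320 W plus the ~30% over the 2080 Ti estimated further down this thread (all assumptions, not measurements):

# Perf/W implied by the marketing slide vs. by a card Nvidia actually ships.
turing_w, ampere_w = 270.0, 140.0                  # matched-performance point on the slide
slide_gain = turing_w / ampere_w - 1
print(f"Implied perf/W gain at the slide's operating point: ~{slide_gain:.0%}")   # ~+93%

# Shipping card: a 320 W 3080 assumed ~30% faster than a ~270 W 2080 Ti
# (the estimate a few posts below) is a much smaller perf/W step.
ship_gain = (1.30 / 320.0) / (1.0 / 270.0) - 1
print(f"Implied perf/W gain for a 320 W card at +30%: ~{ship_gain:.0%}")          # ~+10%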
 
I'm not shifting perspectives, you're probably overanalysing my (maybe too short) messages.
Sorry, but no. You started out by arguing from the viewpoint of gamers needing more VRAM - i.e. basing your argument in customer needs. Regardless of your intentions, shifting the basis of the argument to the viewpoint of the company is a dramatic shift that introduces conflicting interests to your argumentation, which you need to address.
The whole point should be considered only from the viewpoint of the company, in a more or less competitive market.
Again, I have to disagree. I don't give a rodent's behind about the viewpoint of Nvidia. They provide a service to me as a (potential) customer: providing compelling products. They, however, are in it for the profit, and often make choices in product segmentation, pricing, featuresets, etc. that are clearly aimed at increasing profits rather than providing benefits to the customer. There are of course relevant arguments to be presented in terms of whether what customers may need/want/wish for is feasible in various ways (technologically, economically, etc.), but that is as much of the viewpoint of the company as should be taken into account here. Adopting an Nvidia-internal perspective on this is meaningless for anyone who doesn't work for Nvidia, and IMO even meaningless for them unless that person is in a decision-making position when it comes to these questions.
I'm pretty certain that in a year or two there will be more games requiring more than 10k of VRAM in certain situations, but I think if AMD comes out with a competitive option for the 2080 (with a more reasonable amount of memory), reviews will point out this problem, let's say, within the next 5 months. If this happens, Nvidia will have to react to remain competitive (they are very good at this).
There will definitely be games requiring more than 10k of VRAM ;) But 10GB? Again, I have my doubts. Sure, there will always be outliers, and there will always be games that take pride in being extremely graphically intensive. There will also always be settings one can enable that consume massive amounts of VRAM if desired, mostly with negligible (if at all noticeable) impact on graphical quality. But beyond that, the introduction of DirectStorage for Windows, and alongside it the very likely beginning of SSDs being a requirement for most major games, will directly serve to decrease VRAM needs. Sure, new things can be introduced to take up the space freed up by not prematurely streaming in assets that never get used, but the chance of those new features taking up all that was freed up plus a few GB more is very, very slim. Of course not every game will use DirectStorage, but every cross-platform title launching on the XSX will at least have it as an option - and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.
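A toy model of that "streaming frees up VRAM" argument; every number here is invented purely to illustrate the shape of the trade-off, nothing more:

# Preloading keeps everything a level might reference resident; streaming
# keeps only the visible working set plus a prefetch buffer refilled from
# the SSD. All figures below are made up for illustration.

level_assets_gb   = 14.0   # everything the level could possibly touch
visible_set_gb    = 5.0    # what the current view actually needs
ssd_gbps          = 2.4    # XSX-class NVMe throughput
prefetch_window_s = 1.5    # how far ahead the streamer must stay

preload_budget   = level_assets_gb
streaming_budget = visible_set_gb + ssd_gbps * prefetch_window_s

print(f"Preload-everything VRAM budget: {preload_budget:.1f} GB")
print(f"Streaming VRAM budget:          {streaming_budget:.1f} GB")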
No SLI on the 3080? Anyways, I will repeat myself, Nvidia will do this only if they have to, and AMD beats the 3080.
SLI? That's a gaming feature. And you don't even need SLI for gaming with DX12 multi-adapter and the like. Compute workloads do not care one iota about SLI support. NVLink does have some utility if you're teaming up the GPU to work as one, but it's just as likely (for example in huge database workloads, which can consume massive amounts of memory) that each GPU can do the same task in parallel, working on different parts of the dataset, in which case PCIe handles all the communication needed. The same goes for things like rendering.
Do you mean to say that the increase from 8 to 10 GB is proportional to the compute and bandwidth gap between the 2080 and the 3080? It's rather obvious that it's not. On the contrary, if you look at the proportions, the 3080 is the outlier of the lineup: it has 2x the memory bandwidth of the 3070 but only 25% more VRAM.
...and? Increasing the amount of VRAM to 20GB won't change the bandwidth whatsoever, as the bus width is fixed. For that to change they would have to add memory channels, which we know there are two more of on the die, so that's possible, but then you're talking either 11/22GB or 12/24GB - the latter of which is where the 3090 lives. The other option is of course to use faster rated memory, but the chances of Nvidia introducing a new SKU with twice the memory and faster memory is essentially zero at least until this memory becomes dramatically cheaper and more widespread. As for the change in memory amount between the 2080 and the 3080, I think it's perfectly reasonable, both because the amount of memory isn't directly tied to feeding the GPU (it just needs to be enough; more than that is useless) but bandwidth is (which has seen a notable increase), and because - once again - 10GB is likely to be plenty for the vast majority of games for the foreseeable future.
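For reference, the bandwidth math, using the commonly cited launch specs; bandwidth is bus width times per-pin data rate, so capacity doesn't enter into it:

def bandwidth_gbs(bus_bits, gbps_per_pin):
    # GDDR bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps)
    return bus_bits / 8 * gbps_per_pin

cards = {
    "RTX 3070 (256-bit, 14 Gbps GDDR6)":    (256, 14.0),
    "RTX 3080 (320-bit, 19 Gbps GDDR6X)":   (320, 19.0),
    "RTX 3090 (384-bit, 19.5 Gbps GDDR6X)": (384, 19.5),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")

# A hypothetical 20 GB 3080 on the same 320-bit bus would still be 760 GB/s.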
Older games have textures optimized for viewing at 1080p. This gen is about 4K gaming being really possible, so we'll see more detailed textures.
They will be leveraged on the consoles via streaming from the SSD, and on PCs via increasing RAM/VRAM usage.
The entire point of DirectStorage, which Nvidia made a massive point out of supporting with the 3000-series, is precisely to handle this in the same way as on consoles. So that statement is fundamentally false. If a game uses DirectStorage on the XSX, it will also do so on W10 as long as the system has the required components. Which any 3000-series-equipped system will have. Which will, once again, reduce VRAM usage.

First, this makes the statement fairly useless, whereas AMD's perf/W claim (+50% in RDNA 2) reflects practical reality, at least in TPU reviews.
It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of said statements very often shows them to then be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing, if they wanted to - but it makes it misleading given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W for RDNA 2 claim can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing how no RDNA 2 products exist either).
 
and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.
Not sure how accurate that is; there are rumors of a cheap Xbox following (accompanying?) the regular one's release, and that one sure as hell isn't going to use an SSD that's just as fast.
 
To the people who don't like the high stock TDPs, you can thank overclockers who went to all ends to circumvent NVidia's TDP lockdown on Pascal and Turing. NVidia figured that if people would go to extreme lengths to shunt mod flagship GPUs to garner power in excess of 400W, why not push a measly 350W and look good in performance at the same time?
 
The pricing and CUDA core count is definitely a surprise. While competition from AMD's RDNA is a driver for the leap, I think that the next-gen console release is what is pushing this performance jump and price decrease. I think Nvidia fears that PC gaming is getting too expensive and the next-gen consoles may take away some market share if prices can't be lowered.

Plus, it is apparent that Nvidia has been sandbagging since Pascal, as AMD just had nothing to compete with.
 
Not sure how accurate that is; there are rumors of a cheap Xbox following (accompanying?) the regular one's release, and that one sure as hell isn't going to use an SSD that's just as fast.
Actually it is guaranteed to use that. The XSX uses a relatively cheap ~2.4GB/s SSD. The cheaper one might cut the capacity in half, but it won't move away from NVMe. The main savings will come from less RAM (lots of savings), a smaller SoC (lots of savings) and accompanying cuts in the PSU, VRM, cooling, likely lack of an optical drive, etc. (also lots of savings when combined). The NVMe storage is such a fundamental part of the way games are built for these consoles that you can't even run the games off slower external storage, so how would that work with a slower drive internally?
 
According to the RTX 2080 review on Guru3D, it achieves a 45 fps average in Shadow of the Tomb Raider at 4K with the same settings Digital Foundry used in their video, while DF's footage shows around 60 fps, which is only about 33% more. Yet they claim the average fps is 80% higher. You can see the fps counter in the top-left corner of the Tomb Raider footage. Was vsync on in the captured footage? If there were an 80% increase in performance, the average fps should be around 80. The RTX 2080 Ti's fps on the same CPU DF was using should be over 60 in Shadow of the Tomb Raider, so the RTX 3080 would be just around 30% faster than the 2080 Ti. So the new top-of-the-line Nvidia gaming GPU is just 30% faster than the previous top-of-the-line GPU. When you look at it like that, I really don't see any special jump in performance. The RTX 3090 is for professionals and I don't even count it at that price.
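For what it's worth, the arithmetic in that post taken at face value (Guru3D's 2080 average and the counter visible in DF's footage, as quoted above):

fps_2080_guru3d  = 45   # Shadow of the Tomb Raider, 4K, same settings (per the post)
fps_3080_footage = 60   # roughly what the on-screen counter shows (per the post)

uplift = fps_3080_footage / fps_2080_guru3d - 1
print(f"Uplift implied by the footage: ~{uplift:.0%}")          # ~+33%
print(f"A true +80% over 45 fps would be ~{45 * 1.8:.0f} fps")  # ~81 fps
# Whether vsync capped the captured footage is exactly the open question here.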

I can't wait to see the REAL performance increase, measured by reputable sites (Digital Foundry is not reputable; this was a paid marketing deal for Nvidia), of a 3080 vs a 2080 or 2080 Ti. Without the cherry-picking, marketing fiddling of figures and nebulous tweaking of settings (RT, DLSS, vsync, etc.).

Before it's even been reliably benchmarked it's being proclaimed as the greatest thing ever. But I've been in this game long enough to know that the figures sans RT are not nearly as impressive as is being touted by Nvidia's world-class marketing and underhanded settings fiddling.
 
I just can't imagine having a 350W card in my system. 250W is pretty warm. I felt that 180W blower was a sweet spot. All of these cards should have at least 16GB imho.
 
The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
 
Awaiting reviews before I take any real interest in this. Nvidia have a history of bullshitting.
 
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the 4070 will be $600.

Since when did NV care about what is fair? They will sell at what the market will bear.

The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.

There is a reason the Direct Storage API was created.
 
It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of said statements very often shows them to then be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing, if they wanted to - but it makes it misleading given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W for RDNA 2 claim can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing how no RDNA 2 products exist either).
My dude, you spend too much time arguing and too little understanding. I'm going to cut this discussion a little short because I don't like discussions that don't go anywhere, no disrespect intended. The 3080 already has loads of bandwidth; all it's lacking is memory size.

If you don't believe me, plot an x-y graph with memory bandwidth × FP32 performance on the x axis and memory size on the y axis. Plot the 780, 980, 1080, 2080, 3080, 3070 and 3090 on it and you'll see if there are any outliers ;). Or we'll just agree to disagree.
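Here's a quick version of that plot; the specs are approximate launch figures (FP32 is Nvidia's boost-clock number, i.e. the doubled figure for Ampere), so treat the exact positions loosely:

# x = memory bandwidth x FP32 throughput, y = memory size, per the post above.
import matplotlib.pyplot as plt

cards = {
    # name: (bandwidth GB/s, FP32 TFLOPS, VRAM GB) - approximate launch specs
    "GTX 780":  (288,  4.0,  3),
    "GTX 980":  (224,  4.6,  4),
    "GTX 1080": (320,  8.9,  8),
    "RTX 2080": (448, 10.1,  8),
    "RTX 3070": (448, 20.3,  8),
    "RTX 3080": (760, 29.8, 10),
    "RTX 3090": (936, 35.6, 24),
}

x = [bw * tf for bw, tf, _ in cards.values()]
y = [vram for _, _, vram in cards.values()]
plt.scatter(x, y)
for name, (bw, tf, vram) in cards.items():
    plt.annotate(name, (bw * tf, vram))
plt.xlabel("bandwidth x FP32 (GB/s * TFLOPS)")
plt.ylabel("VRAM (GB)")
plt.show()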
 
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the 4070 will be $600.
I agree. People thinking these prices are low are bonkers. We don't have enough competition in the GPU space. $500 for an 8GB card in 2020? Are they joking? It will be obsolete fast.
 
I agree. People thinking these prices are low are bonkers. We don't have enough competition in the GPU space. $500 for an 8GB card in 2020? Are they joking? It will be obsolete fast.

Yeah: 1070 +35% → 2070, +45% → 3070; this thing should be at least 95% faster than the 1070, and the VRAM remains the same 8GB (quick math after this post).

But this is the gimped chip, in order to protect the 2080 Ti owners who just got gutted by the 60% price cut, $1,199 to $499; 11GB is all they have left, and not for long. We should get the 6144-CUDA-core 16GB version at some point, for only $599.

8GB should be fine for low-detail e-sports for the next 4 years. I get unplayable frame rates with less than 8GB, and even +45% won't help with the framerate.
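Taking those quoted generational uplifts at face value, the compounding works out like this:

# Compounding the quoted generational gains (assumed, not measured).
total = 1.0
for step, gain in {"1070 -> 2070": 0.35, "2070 -> 3070": 0.45}.items():
    total *= 1 + gain
print(f"Implied 1070 -> 3070 uplift: ~{total - 1:.0%}")   # ~96%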
 
I graduated alongside, lived with, and stay in touch with multiple game developers from Campo Santo (now Valve), Splash Damage, Jagex, Blizzard, King, Ubisoft, and by proxy EA, and Activision; I think they'd all be insulted by your statement. More importantly, even if there is a grain of truth to what you say, the "off-the-shelf engines" …
Most studios don't make their own game engine in-house anymore, unfortunately. That's not an insult, but a fact. There has been a clear trend of fewer studios making their own engines for years, and the lack of performance optimization and buggy/broken games at launch are the results. There are some studios, like id Software, that still do quality work.

We are talking a lot about new hardware features and new APIs on this forum, yet the adoption of such features in games is very slow. Many have been wondering why we haven't seen the revolutionary performance gains we were promised with DirectX 12. Well, the reality is that in generic engines the low-level rendering code is hidden behind layers upon layers of abstractions, so those engines are never going to extract the hardware's full potential.

… have been slowly but surely migrating to console-optimised engines over the last few years.
"Console optimization" is a myth.
In order to optimize code, low-level code must be written to target specific instructions, API features or performance characteristics.
When people are claiming games are "console optimized", they are usually referring to them not being scalable, so it's rather lack of optimization if anything.

My point was exactly that. Perhaps English isn't your first language but when I said "the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator" - that was me saying that it ISN'T going to change suddenly, and it's been like this for 25 years without changing. That's exactly why games aren't particularly good at utilising the hardware we have currently, because the devs need to make sure it'll run on a dual-core with 4GB RAM and a 2GB graphics card from 9 years ago.
Games today are usually not intentionally catering to the lowest common denominator; it's more a result of the engine they have chosen, especially if they don't make one in-house. If support for 10-year-old PCs were a priority, we would see more games with support for older Windows versions, etc.
 
I can't wait to see the REAL performance increase, measured by reputable sites (Digital Foundry is not reputable; this was a paid marketing deal for Nvidia), of a 3080 vs a 2080 or 2080 Ti. Without the cherry-picking, marketing fiddling of figures and nebulous tweaking of settings (RT, DLSS, vsync, etc.).

Before it's even been reliably benchmarked it's being proclaimed as the greatest thing ever. But I've been in this game long enough to know that the figures sans RT are not nearly as impressive as is being touted by Nvidia's world-class marketing and underhanded settings fiddling.
Just a technicality: there's a big difference between closely regulated exclusive access to hardware and paid marketing. Is it a marketing plot by Nvidia? Absolutely. Does it undermine DF's credibility whatsoever? No. Why? Because they are completely transparent about the process, the limitations involved, and how the data is presented. Their conclusion is also "we should all wait for reviews, but this looks very good for now":

It's early days with RTX 3080 testing. In terms of addressing the claims of the biggest generational leap Nvidia has ever delivered, I think the reviews process with the mass of data from multiple outlets testing a much wider range of titles is going to be the ultimate test for validating that claim. That said, some of the numbers I saw in my tests were quite extraordinary and on a more general level, the role of DLSS in accelerating RT titles can't be understated.

That there? That's nuance. (Something that is sorely lacking in your post.) They are making very, very clear that this is a preliminary hands-on, in no way an exhaustive review, and that there were massive limitations to which games they could test, how they could run the tests, which data they could present from these tests, and how they could be presented. There is also no disclosure of this being paid content, which they are required by law to provide if it is. So no, this is not a "paid marketing deal". It's an exclusive preview. Learn the difference.
 
I just hope this new gen brings the prices down on used cards, because the prices for used GPUs are nuts.
People are asking new-card money for their used crap.
Hopefully AMD has something competitive this time around and has the same effect on Nvidia as it had on Intel.
Because after many, many years I see better value in an Intel i7-10700 than in any Ryzen.
 
The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
MS didn't have to overtune the SSD because they created a whole new API, DirectStorage. I think the PS5 and the Xbox will have about the same effective speed.
 
MS didn't have to overtune the SSD because they created a whole new API, DirectStorage. I think the PS5 and the Xbox will have about the same effective speed.

No, the PS5 SSD is literally TWICE as fast, and its I/O is apparently significantly more advanced; there's no chance they are similar in performance.

Just a technicality: there's a big difference between closely regulated exclusive access to hardware and paid marketing. Is it a marketing plot by Nvidia? Absolutely. Does it undermine DF's credibility whatsoever? No. Why? Because they are completely transparent about the process, the limitations involved, and how the data is presented. Their conclusion is also "we should all wait for reviews, but this looks very good for now":



That there? That's nuance. (Something that is sorely lacking in your post.) They are making very, very clear that this is a preliminary hands-on, in no way an exhaustive review, and that there were massive limitations to which games they could test, how they could run the tests, which data they could present from these tests, and how they could be presented. There is also no disclosure of this being paid content, which they are required by law to provide if it is. So no, this is not a "paid marketing deal". It's an exclusive preview. Learn the difference.

It's a paid marketing deal.
 