
NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

Joined
Feb 21, 2006
Messages
2,221 (0.32/day)
Location
Toronto, Ontario
System Name The Expanse
Processor AMD Ryzen 7 5800X3D
Motherboard Asus Prime X570-Pro BIOS 5013 AM4 AGESA V2 PI 1.2.0.Cc.
Cooling Corsair H150i Pro
Memory 32GB GSkill Trident RGB DDR4-3200 14-14-14-34-1T (B-Die)
Video Card(s) XFX Radeon RX 7900 XTX Magnetic Air (24.10.1)
Storage WD SN850X 2TB / Corsair MP600 1TB / Samsung 860Evo 1TB x2 Raid 0 / Asus NAS AS1004T V2 20TB
Display(s) LG 34GP83A-B 34 Inch 21:9 UltraGear Curved QHD (3440 x 1440) 1ms Nano IPS 160Hz
Case Fractal Design Meshify S2
Audio Device(s) Creative X-Fi + Logitech Z-5500 + HS80 Wireless
Power Supply Corsair AX850 Titanium
Mouse Corsair Dark Core RGB SE
Keyboard Corsair K100
Software Windows 10 Pro x64 22H2
Benchmark Scores 3800X https://valid.x86.fr/1zr4a5 5800X https://valid.x86.fr/2dey9c 5800X3D https://valid.x86.fr/b7d
SLI is dead. Notice how the list gets smaller and smaller as we get to the current year.

 
Joined
Oct 14, 2017
Messages
210 (0.08/day)
System Name Lightning
Processor 4790K
Motherboard asrock z87 extreme 3
Cooling hwlabs black ice 20 fpi radiator, cpu mosfet blocks, MCW60 cpu block, full cover on 780Ti's
Memory corsair dominator platinum 2400C10, 32 giga, DDR3
Video Card(s) 2x780Ti
Storage intel S3700 400GB, samsung 850 pro 120 GB, a cheap Intel MLC 120GB, and another even cheaper 120GB
Display(s) eizo foris fg2421
Case 700D
Audio Device(s) ESI Juli@
Power Supply seasonic platinum 1000
Mouse mx518
Software Lightning v2.0a
Single card, 3 slots, ouch. I know you guys say video games today do not use SLI. Well, I beg to differ. I went to 4K gaming a couple of years ago and I play a few games like FFXIV and Doom and a few others. I bought a single 1070 Ti, and in FFXIV at 4K I pushed about 50 to 60 fps, but in SLI I pushed over 120 fps at 4K. So the game says it does not support it, but it does; SLI is always on. I have met many people who bought SLI and did not know how to configure it, so they never saw the benefit from it. Also, in SLI the game ran smoother and cleaner. Now maybe it cannot address all the memory, but it can still use both GPUs, which increases the smoothness.

When I was gaming at 1080p I ran two 660 Tis in SLI and everything was sweet. But at 4K? Nope, they could not handle the load.
Yes, you increased smoothness by increasing fps, but you also increased the latency of every one of those frames by a factor of 2 or even 3. Fermi was the last of them; since then it's all been dead :x
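A rough sketch of where that latency claim comes from, with assumed frame times (illustrative only, not measurements of any actual setup):

```python
# Illustrative only: rough alternate-frame-rendering (AFR) math with assumed numbers.
# With 2-way AFR the GPUs work on consecutive frames in parallel, so the fps
# counter roughly doubles, but each individual frame still takes about as long
# to render as it would on one GPU.

single_gpu_frame_ms = 16.7                      # one GPU: ~60 fps
single_gpu_fps = 1000 / single_gpu_frame_ms

afr_fps = single_gpu_fps * 2                    # ~120 fps on the counter
afr_frame_age_ms = single_gpu_frame_ms          # each frame is still ~16.7 ms old when shown

print(f"Single GPU: {single_gpu_fps:.0f} fps, ~{single_gpu_frame_ms:.1f} ms per frame")
print(f"2-way AFR : {afr_fps:.0f} fps, but each frame still took ~{afr_frame_age_ms:.1f} ms")
# Pre-rendered frame queues and AFR synchronisation can add another frame-time
# or two on top, which is where the 'factor of 2 or 3' latency figure comes from.
```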
 
Joined
Mar 4, 2011
Messages
165 (0.03/day)
Location
Israel
System Name Negra5
Processor i5 6500
Motherboard ASUS Z170M-Plus
Cooling Cooler Master Hyper TX3
Memory Kingston HyperX 16GB DDR4
Video Card(s) PNY GTX-1070, XFX RX480
Storage Gigabyte 256GB SSD, WD 1TB HDD, WD 4TB HDD.
Display(s) SAMSUNG 32" FullHD
Case GAMING EAGLE WARRIOR CG-06R1
Audio Device(s) nVidia HD Audio
Power Supply Corsair GS800W 80 Plus Bronze
Mouse Cooler Master Devastator MS2k
Keyboard Cooler Master Devastator MB24
Software Windows 10 20H2
Benchmark Scores Pfft
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the GTX 4070 will be $600.
 
Joined
Feb 20, 2019
Messages
8,281 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I think you are putting too much faith in game developers. Most of them just take an off-the-shelf game engine, load in some assets, do some scripting and call it a game. Most game studios don't do a single line of low-level engine code, and the extent of their "optimizations" are limited to adjusting assets to reach a desired frame rate.
I graduated alongside, lived with, and stay in touch with multiple game developers from Campo Santo (now Valve), Splash Damage, Jagex, Blizzard, King, Ubisoft, and by proxy EA, and Activision; I think they'd all be insulted by your statement. More importantly, even if there is a grain of truth to what you say, the "off-the-shelf engines" have been slowly but surely migrating to console-optimised engines over the last few years.

Not really. The difference between a "standard" 500 MB/s SSD and a 3 GB/s SSD will be loading times. For resource streaming, 500 MB/s is plenty.
Also, don't forget that these "cheap" NVMe QLC SSDs can't deliver 3 GB/s sustained, so if a game truly depended on this, you would need an SLC SSD or Optane.
You're overanalyzing this. I said 3GB/s simply because that's a commonly accepted read speed for a typical NVMe drive. Also, even the worst PCIe 3.0 x4 drives read at about 3GB/s sustained, no matter whether they're QLC or MLC. The performance differences between QLC and MLC are only really apparent in sustained write speeds.
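For context, a quick back-of-the-envelope sketch (assumed throughput figures) of how much data either class of drive can stream in per frame at 60 fps:

```python
# Back-of-the-envelope streaming budget per frame (illustrative numbers).
def per_frame_budget_mb(throughput_mb_s: float, fps: float) -> float:
    """How many MB can be streamed in during a single frame."""
    return throughput_mb_s / fps

for label, throughput in [("SATA-class SSD", 500), ("typical NVMe", 3000)]:
    budget = per_frame_budget_mb(throughput, 60)
    print(f"{label:>15}: {throughput} MB/s -> ~{budget:.1f} MB per 60 fps frame")

# Even 500 MB/s leaves ~8 MB of fresh assets per frame (~500 MB per second of
# gameplay), which is why the difference tends to show up in loading times
# rather than in-game streaming.
```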

Games in general aren't particularly good at utilizing the hardware we have currently, and the trend in game development has clearly been less performance optimization, so what makes you think this will change all of a sudden?
My point was exactly that. Perhaps English isn't your first language, but when I said "the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator", that was me saying that it ISN'T going to change suddenly, and it's been like this for 25 years without changing. That's exactly why games aren't particularly good at utilising the hardware we have currently: the devs need to make sure it'll run on a dual-core with 4GB RAM and a 2GB graphics card from 9 years ago.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
The only thing missing from the Steam hardware survey is people who buy graphics cards and don't game.
I used to buy graphics cards, game and not be on Steam.
Pretty sure most people who are into Blizzard games do not use steam.
That doesn't explain why WoW players would necessarily skip NV... and this was back when AMD bothered to explain what was going on.
Their main argument was their absence from the internet cafe business, which was skewing the figures a lot (each user who logged in was counted separately).
Steam fixed it somewhat, but not entirely to AMD's liking (in AMD's words), brushing it off by saying that Valve doesn't really care about how representative the survey is. (yikes)

Mindfactory is a major PC parts online shop in Germany, and it shows the buying habits of the German DIY demographic. I don't see why that is not relevant.

All that graph says is that "Ampere" (likely the GA104 die, unknown core count and memory configuration) at ~140W could match the 2080 Ti/TU102 at ~270W. Which might very well be true, but we'll never know outside of people undervolting and underclocking their GPUs, as Nvidia is never going to release a GPU based on this chip at that power level (unless they go entirely insane on mobile, I guess).
This makes the statement fairly useless, whereas AMD's perf/W claim (+50% in RDNA2) reflects practical reality, at least in TPU reviews.
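Working through the chart's own numbers as read above (roughly 270 W for 2080 Ti-class performance vs roughly 140 W for Ampere) shows why the marketing figure and a shipping card diverge; the 320 W board power and the +30% speedup below are assumptions for illustration only:

```python
# Perf/W implied by the marketing chart, using the approximate figures above.
turing_watts = 270     # ~2080 Ti board power at a given performance level
ampere_watts = 140     # claimed Ampere power for the same performance

iso_perf_gain = turing_watts / ampere_watts - 1
print(f"Iso-performance perf/W gain: ~{iso_perf_gain:.0%}")        # roughly +90%

# A shipping 3080 is a 320 W card, so its measured perf/W gain over a ~270 W
# 2080 Ti depends on how much faster it actually ends up being:
assumed_speedup = 1.30   # purely an assumption for illustration
shipping_gain = assumed_speedup * turing_watts / 320 - 1
print(f"At +30% performance and 320 W: ~{shipping_gain:.0%} better perf/W")
```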
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm not shifting perspectives, you're probably overanalysing my (maybe too short) messages.
Sorry, but no. You started out by arguing from the viewpoint of gamers needing more VRAM - i.e. basing your argument in customer needs. Regardless of your intentions, shifting the basis of the argument to the viewpoint of the company is a dramatic shift that introduces conflicting interests to your argumentation, which you need to address.
The whole point should be considered only from the viewpoint of the company, in a more or less competitive market.
Again, I have to disagree. I don't give a rodent's behind about the viewpoint of Nvidia. They provide a service to me as a (potential) customer: providing compelling products. They, however, are in it for the profit, and often make choices in product segmentation, pricing, featuresets, etc. that are clearly aimed at increasing profits rather than providing benefits to the customer. There are of course relevant arguments to be presented in terms of whether what customers may need/want/wish for is feasible in various ways (technologically, economically, etc.), but that is as much of the viewpoint of the company as should be taken into account here. Adopting an Nvidia-internal perspective on this is meaningless for anyone who doesn't work for Nvidia, and IMO even meaningless for them unless that person is in a decision-making position when it comes to these questions.
I'm pretty certain that in a year or two there will be more games requiring more than 10k of VRAM in certain situations, but I think if AMD comes out with a competitive option for the 2080 (with a more reasonable amount of memory), reviews will point out this problem, let's say, in the next 5 months. If this happens, Nvidia will have to react to remain competitive (they are very good at this).
There will definitely be games requiring more than 10k of VRAM ;) But 10GB? Again, I have my doubts. Sure, there will always be outliers, and there will always be games that take pride in being extremely graphically intensive. There will also always be settings one can enable that consume massive amounts of VRAM if desired, mostly with negligible (if at all noticeable) impacts on graphical quality. But beyond that, the introduction of DirectStorage for Windows, and alongside it the very likely shift to SSDs being a requirement for most major games, will directly serve to decrease VRAM needs. Sure, new things can be introduced to take up the space freed up by not prematurely streaming in assets that never get used, but the chance of those new features taking up all that was freed up plus a few GB more is very, very slim. Of course not every game will use DirectStorage, but every cross-platform title launching on the XSX will at least have it as an option - and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.
No SLI on the 3080? Anyway, I will repeat myself: Nvidia will do this only if they have to, that is, if AMD beats the 3080.
SLI? That's a gaming feature. And you don't even need SLI for gaming with DX12 multi-adapter and the like. Compute workloads do not care one iota about SLI support. NVLink does have some utility if you're teaming up the GPU to work as one, but it's just as likely (for example in huge database workloads, which can consume massive amounts of memory) that each GPU can do the same task in parallel, working on different parts of the dataset, in which case PCIe handles all the communication needed. The same goes for things like rendering.
Do you mean to say that the increase from 8 to 10 GB is proportional to the compute and bandwidth gap between the 2080 and the 3080? It's rather obvious that it's not. On the contrary, if you look at the proportions, the 3080 is the outlier of the lineup: it has 2x the memory bandwidth of the 3070 but only 25% more VRAM.
...and? Increasing the amount of VRAM to 20GB won't change the bandwidth whatsoever, as the bus width is fixed. For that to change they would have to add memory channels, which we know there are two more of on the die, so that's possible, but then you're talking either 11/22GB or 12/24GB - the latter of which is where the 3090 lives. The other option is of course to use faster rated memory, but the chances of Nvidia introducing a new SKU with twice the memory and faster memory is essentially zero at least until this memory becomes dramatically cheaper and more widespread. As for the change in memory amount between the 2080 and the 3080, I think it's perfectly reasonable, both because the amount of memory isn't directly tied to feeding the GPU (it just needs to be enough; more than that is useless) but bandwidth is (which has seen a notable increase), and because - once again - 10GB is likely to be plenty for the vast majority of games for the foreseeable future.
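The arithmetic behind the bus-width point, using the commonly cited GDDR6X configurations (figures approximate, for illustration):

```python
# Why capacity and bandwidth are tied to bus width (approximate figures).
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

configs = {
    "3080-style (10 channels)": (10, 19.0),
    "3090-style (12 channels)": (12, 19.5),
}
for name, (channels, gbps) in configs.items():
    bus = channels * 32                       # one GDDR6X chip per 32-bit channel
    bw = bandwidth_gb_s(bus, gbps)
    # 1 GB chips -> `channels` GB; 2 GB chips or clamshell mounting -> double that.
    print(f"{name}: {bus}-bit, ~{bw:.0f} GB/s, capacity steps of {channels} / {2*channels} GB")
```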
Older games have textures optimized for viewing at 1080p. This gen is about 4K gaming being really possible, so we'll see more detailed textures.
They will be leveraged on the consoles via streaming from the SSD, and on PCs via increasing RAM/VRAM usage.
The entire point of DirectStorage, which Nvidia made a massive point out of supporting with the 3000-series, is precisely to handle this in the same way as on consoles. So that statement is fundamentally false. If a game uses DirectStorage on the XSX, it will also do so on W10 as long as the system has the required components. Which any 3000-series-equipped system will have. Which will, once again, reduce VRAM usage.
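A toy model (explicitly not the DirectStorage API, with made-up asset counts and sizes) of why fast on-demand streaming lowers how much has to stay resident in VRAM:

```python
# Toy model with assumed numbers: preloading generously vs. streaming on demand.
asset_mb = 40                    # size of one high-detail texture set (assumed)
assets_near_player = 400         # everything within a generous preload radius
assets_actually_needed = 120     # what the current view actually uses

preload_gb = assets_near_player * asset_mb / 1024
streaming_gb = assets_actually_needed * asset_mb / 1024   # plus a small cache

print(f"Preload everything nearby: ~{preload_gb:.1f} GB resident")
print(f"Stream on demand         : ~{streaming_gb:.1f} GB resident")
# Relying on the smaller footprint is only safe if a miss can be filled within
# a frame or two, which is what a guaranteed-fast NVMe drive plus an API like
# DirectStorage is meant to provide.
```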

First, this makes the statement fairly useless, whereas AMD's perf/W claim (+50% in RDNA2) reflects practical reality, at least in TPU reviews.
It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of such statements very often shows them to be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing, if they wanted to - but it makes it misleading given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W claim for RDNA 2 can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing how no RDNA 2 products exist either).
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.
Not sure how accurate that is; there are rumors of a cheap Xbox following (accompanying?) the regular one's release, and that one sure as hell isn't going to use just as fast an SSD.
 
Joined
Feb 1, 2013
Messages
1,266 (0.29/day)
System Name Gentoo64 /w Cold Coffee
Processor 9900K 5.2GHz @1.312v
Motherboard MXI APEX
Cooling Raystorm Pro + 1260mm Super Nova
Memory 2x16GB TridentZ 4000-14-14-28-2T @1.6v
Video Card(s) RTX 4090 LiquidX Barrow 3015MHz @1.1v
Storage 660P 1TB, 860 QVO 2TB
Display(s) LG C1 + Predator XB1 QHD
Case Open Benchtable V2
Audio Device(s) SB X-Fi
Power Supply MSI A1000G
Mouse G502
Keyboard G815
Software Gentoo/Windows 10
Benchmark Scores Always only ever very fast
To the people who don't like the high stock TDPs, you can thank overclockers who went to all ends to circumvent NVidia's TDP lockdown on Pascal and Turing. NVidia figured that if people would go to extreme lengths to shunt mod flagship GPUs to garner power in excess of 400W, why not push a measly 350W and look good in performance at the same time?
 
Joined
Sep 18, 2017
Messages
198 (0.08/day)
The pricing and CUDA core count are definitely a surprise. While competition from AMD's RDNA is a driver for the leap, I think that the next-gen console release is what is pushing this performance jump and price decrease. I think Nvidia fears that PC gaming is getting too expensive and the next-gen consoles may take away some market share if prices can't be lowered.

Plus, it is apparent that Nvidia has been sandbagging since Pascal, as AMD just had nothing to compete with.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Not sure how accurate that is; there are rumors of a cheap Xbox following (accompanying?) the regular one's release, and that one sure as hell isn't going to use just as fast an SSD.
Actually it is guaranteed to use that. The XSX uses a relatively cheap ~2.4GB/s SSD. The cheaper one might cut the capacity in half, but it won't move away from NVMe. The main savings will come from less RAM (lots of savings), a smaller SoC (lots of savings) and accompanying cuts in the PSU, VRM, cooling, likely lack of an optical drive, etc. (also lots of savings when combined). The NVMe storage is such a fundamental part of the way games are built for these consoles that you can't even run the games off slower external storage, so how would that work with a slower drive internally?
 
Joined
Jul 19, 2016
Messages
482 (0.16/day)
According to the RTX 2080 review on Guru3D, it achieves 45 fps average in Shadow of the Tomb Raider at 4K with the same settings as Digital Foundry used in their video. But they achieve around 60 fps, which is only 33% more, yet they claim average fps is 80% higher. You can see the fps counter in the top left corner with Tomb Raider. Was vsync on in the captured footage? If there was an 80% increase in performance, average fps should be around 80. RTX 2080 Ti fps on the same CPU DF was using should be over 60 in Shadow of the Tomb Raider, so the RTX 3080 is just around 30% faster than the 2080 Ti. So the new top-of-the-line Nvidia gaming GPU is just 30% faster than the previous top-of-the-line GPU. When you look at it like that, I really don't see any special jump in performance. The RTX 3090 is for professionals and I don't even count it at that price.
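A quick arithmetic check of the figures quoted above (the fps values are the ones cited, not new measurements):

```python
# Sanity-checking the percentages above.
guru3d_2080_fps = 45     # Shadow of the Tomb Raider, 4K, per the review cited
df_3080_fps = 60         # roughly what the Digital Foundry footage shows

gain = df_3080_fps / guru3d_2080_fps - 1
print(f"3080 vs 2080 at those numbers: +{gain:.0%}")   # ~+33%

# An 80% uplift over 45 fps would instead require:
print(f"45 fps * 1.8 = {45 * 1.8:.0f} fps")            # 81 fps
```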

I can't wait to see the REAL performance increase by reputable sites (Digital Foundry is not reputable, this was a paid marketing deal for Nvidia) of a 3080 vs a 2080 or 2080 Ti. Without the cherry-picking, marketing fiddling of figures and nebulous tweaking of settings (RT, DLSS, vsync, etc.).

Before it's even been reliably benchmarked it's being proclaimed as the greatest thing ever. But I've been in this game long enough to know that the figures sans RT are not nearly as impressive as is being touted by Nvidia's world-class marketing and underhanded settings fiddling.
 
Joined
Feb 14, 2012
Messages
2,355 (0.50/day)
System Name msdos
Processor 8086
Motherboard mainboard
Cooling passive
Memory 640KB + 384KB extended
Video Card(s) EGA
Storage 5.25"
Display(s) 80x25
Case plastic
Audio Device(s) modchip
Power Supply 45 watts
Mouse serial
Keyboard yes
Software disk commander
Benchmark Scores still running
I just can't imagine having a 350W card in my system. 250W is pretty warm. I felt that a 180W blower was the sweet spot. All of these cards should have at least 16GB imho.
 
Joined
Jul 19, 2016
Messages
482 (0.16/day)
The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
 
Joined
Apr 12, 2017
Messages
131 (0.05/day)
Processor Haswell-E - i7-5820K @ 4.4GHz
Motherboard ASUS X99S
Cooling Noctua NH-D15S
Memory 16GB Corsair Vengeance LPX 3000MHz
Video Card(s) Palit Super JetStream 980Ti
Storage SSD: 512GB [Crucial MX100] HDD: 34TB [4 x 6TB WD Blue, 2 x 5TB Seagate External]
Display(s) Acer ProDesigner BM320 4K
Case Fractal Design R5
Power Supply Corsair RM750x
Awaiting reviews before I take any real interest in this. Nvidia have a history of bullshitting.
 
Joined
Feb 21, 2006
Messages
2,221 (0.32/day)
Location
Toronto, Ontario
System Name The Expanse
Processor AMD Ryzen 7 5800X3D
Motherboard Asus Prime X570-Pro BIOS 5013 AM4 AGESA V2 PI 1.2.0.Cc.
Cooling Corsair H150i Pro
Memory 32GB GSkill Trident RGB DDR4-3200 14-14-14-34-1T (B-Die)
Video Card(s) XFX Radeon RX 7900 XTX Magnetic Air (24.10.1)
Storage WD SN850X 2TB / Corsair MP600 1TB / Samsung 860Evo 1TB x2 Raid 0 / Asus NAS AS1004T V2 20TB
Display(s) LG 34GP83A-B 34 Inch 21:9 UltraGear Curved QHD (3440 x 1440) 1ms Nano IPS 160Hz
Case Fractal Design Meshify S2
Audio Device(s) Creative X-Fi + Logitech Z-5500 + HS80 Wireless
Power Supply Corsair AX850 Titanium
Mouse Corsair Dark Core RGB SE
Keyboard Corsair K100
Software Windows 10 Pro x64 22H2
Benchmark Scores 3800X https://valid.x86.fr/1zr4a5 5800X https://valid.x86.fr/2dey9c 5800X3D https://valid.x86.fr/b7d
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the GTX 4070 will be $600.

Since when did NV care about what is fair? They will sell at whatever the market will bear.

The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.

There is a reason the DirectStorage API was created.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Sorry, but no. You started out by arguing from the viewpoint of gamers needing more VRAM - i.e. basing your argument in customer needs. Regardless of your intentions, shifting the basis of the argument to the viewpoint of the company is a dramatic shift that introduces conflicting interests to your argumentation, which you need to address.

Again, I have to disagree. I don't give a rodent's behind about the viewpoint of Nvidia. They provide a service to me as a (potential) customer: providing compelling products. They, however, are in it for the profit, and often make choices in product segmentation, pricing, featuresets, etc. that are clearly aimed at increasing profits rather than providing benefits to the customer. There are of course relevant arguments to be presented in terms of whether what customers may need/want/wish for is feasible in various ways (technologically, economically, etc.), but that is as much of the viewpoint of the company as should be taken into account here. Adopting an Nvidia-internal perspective on this is meaningless for anyone who doesn't work for Nvidia, and IMO even meaningless for them unless that person is in a decision-making position when it comes to these questions.

There will definitely be games requiring more than 10k of VRAM ;) But 10GB? Again, I have my doubts. Sure, there will always be outliers, and there will always be games that take pride in being extremely graphically intensive. There will also always be settings one can enable that consume massive amounts of VRAM if desired, mostly with negligible (if at all noticeable) impacts on graphical quality. But beyond that, the introduction of DirectStorage for Windows, and alongside it the very likely shift to SSDs being a requirement for most major games, will directly serve to decrease VRAM needs. Sure, new things can be introduced to take up the space freed up by not prematurely streaming in assets that never get used, but the chance of those new features taking up all that was freed up plus a few GB more is very, very slim. Of course not every game will use DirectStorage, but every cross-platform title launching on the XSX will at least have it as an option - and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.

SLI? That's a gaming feature. And you don't even need SLI for gaming with DX12 multi-adapter and the like. Compute workloads do not care one iota about SLI support. NVLink does have some utility if you're teaming up the GPU to work as one, but it's just as likely (for example in huge database workloads, which can consume massive amounts of memory) that each GPU can do the same task in parallel, working on different parts of the dataset, in which case PCIe handles all the communication needed. The same goes for things like rendering.

...and? Increasing the amount of VRAM to 20GB won't change the bandwidth whatsoever, as the bus width is fixed. For that to change they would have to add memory channels, which we know there are two more of on the die, so that's possible, but then you're talking either 11/22GB or 12/24GB - the latter of which is where the 3090 lives. The other option is of course to use faster rated memory, but the chances of Nvidia introducing a new SKU with twice the memory and faster memory is essentially zero at least until this memory becomes dramatically cheaper and more widespread. As for the change in memory amount between the 2080 and the 3080, I think it's perfectly reasonable, both because the amount of memory isn't directly tied to feeding the GPU (it just needs to be enough; more than that is useless) but bandwidth is (which has seen a notable increase), and because - once again - 10GB is likely to be plenty for the vast majority of games for the foreseeable future.

The entire point of DirectStorage, which Nvidia made a massive point out of supporting with the 3000-series, is precisely to handle this in the same way as on consoles. So that statement is fundamentally false. If a game uses DirectStorage on the XSX, it will also do so on W10 as long as the system has the required components. Which any 3000-series-equipped system will have. Which will, once again, reduce VRAM usage.


It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of such statements very often shows them to be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing, if they wanted to - but it makes it misleading given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W claim for RDNA 2 can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing how no RDNA 2 products exist either).
My dude, you spend too long arguing and too little time understanding. I'm going to cut this discussion a little short because I don't like discussions that don't go anywhere, no disrespect intended. The 3080 already has loads of bandwidth; all it's lacking is memory size.

If you don't believe me, plot an x-y graph with memory bandwidth x FP32 perf on the x axis and memory size on the y axis. Plot the 780, 980, 1080, 2080, 3080, 3070 and 3090 points on it and you'll see if there are any outliers ;) Or we'll just agree to disagree.
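Here is a rough sketch of that plot, using approximate public specs (bandwidth in GB/s, FP32 in TFLOPS, VRAM in GB); the figures are ballpark values, not authoritative:

```python
# Approximate spec scatter plot as suggested above; all figures are ballpark.
import matplotlib.pyplot as plt

# (memory bandwidth GB/s, FP32 TFLOPS, VRAM GB) -- approximate values
cards = {
    "GTX 780":  (288,  4.0,  3),
    "GTX 980":  (224,  4.6,  4),
    "GTX 1080": (320,  8.9,  8),
    "RTX 2080": (448, 10.1,  8),
    "RTX 3070": (448, 20.3,  8),
    "RTX 3080": (760, 29.8, 10),
    "RTX 3090": (936, 35.6, 24),
}

x = [bw * tflops for bw, tflops, _ in cards.values()]   # bandwidth x FP32, as proposed
y = [vram for _, _, vram in cards.values()]

plt.scatter(x, y)
for name, (bw, tflops, vram) in cards.items():
    plt.annotate(name, (bw * tflops, vram))
plt.xlabel("Memory bandwidth x FP32 (GB/s x TFLOPS)")
plt.ylabel("VRAM (GB)")
plt.title("Approximate specs - spot the outliers yourself")
plt.show()
```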
 
Joined
Dec 14, 2011
Messages
115 (0.02/day)
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the GTX 4070 will be $600.
I agree. People thinking these prices are low are bonkers. We don't have enough competition in the GPU space. $500 for an 8GB card in 2020. Are they joking? It will be obsolete fast.
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.36/day)
I agree. People thinking these prices are low are bonkers. We don't have enough competition in the GPU space. $500 for an 8GB card in 2020. Are they joking? It will be obsolete fast.

Yeah, 1070 +35% = 2070, +45% = 3070; this thing should be at least 95% faster than a 1070, and the VRAM remains the same 8GB.
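Compounding those two claimed gains gives the ~95% figure:

```python
# Compounding the generational gains quoted above.
gain_1070_to_2070 = 1.35
gain_2070_to_3070 = 1.45

total = gain_1070_to_2070 * gain_2070_to_3070
print(f"Implied 3070 vs 1070: ~{total - 1:.0%}")   # ~96%, i.e. roughly 2x
```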

But this is the gimped chip, in order to protect 2080 Ti owners who got gutted by the ~60% price cut, from $1,199 to $499; 11GB is all they have left, and not for long. We should get the 6144-CUDA-core 16GB version at some point, for only $599.

8GB should be fine for low-detail e-sports for the next 4 years. I get unplayable frame rates below 8GB, and even 45% more won't help with the framerate.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I graduated alongside, lived with, and stay in touch with multiple game developers from Campo Santo (now Valve), Splash Damage, Jagex, Blizzard, King, Ubisoft, and by proxy EA, and Activision; I think they'd all be insulted by your statement. More importantly, even if there is a grain of truth to what you say, the "off-the-shelf engines" …
Most studios don't make their own game engine in-house anymore, unfortunately. That's not an insult, but a fact. There has been a clear trend of fewer studios making their own engines for years, and the lack of performance optimizations and buggy/broken games at launch are the results. There are some studios, like id Software, that still do quality work.

We are talking a lot about new hardware features and new APIs in this forum, yet the adoption of such features in games is very slow. Many have been wondering why we haven't seen the revolutionary performance gains we were promised with DirectX 12. Well, the reality is that for generic engines the low-level rendering code is hidden behind layers upon layers of abstractions, so they are never going to reach their full potential.

… have been slowly but surely migrating to console-optimised engines over the last few years.
"Console optimization" is a myth.
In order to optimize code, low-level code must be written to target specific instructions, API features or performance characteristics.
When people claim games are "console optimized", they are usually referring to them not being scalable, so it's rather a lack of optimization if anything.

My point was exactly that. Perhaps English isn't your first language but when I said "the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator" - that was me saying that it ISN'T going to change suddenly, and it's been like this for 25 years without changing. That's exactly why games aren't particularly good at utilising the hardware we have currently, because the devs need to make sure it'll run on a dual-core with 4GB RAM and a 2GB graphics card from 9 years ago.
Games today are usually not intentionally catering to the lowest common denominator; it's more a result of the engine they have chosen, especially if they don't make one in-house. If supporting 10-year-old PCs were a priority, we would see more games with support for older Windows versions, etc.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I can't wait to see the REAL performance increase by reputable sites (Digital Foundry is not reputable, this was a paid marketing deal for Nvidia) of a 3080 vs a 2080 or 2080 Ti. Without the cherry-picking, marketing fiddling of figures and nebulous tweaking of settings (RT, DLSS, vsync, etc.).

Before it's even been reliably benchmarked it's being proclaimed as the greatest thing ever. But I've been in this game long enough to know that the figures sans RT are not nearly as impressive as is being touted by Nvidia's world-class marketing and underhanded settings fiddling.
Just a technicality: there's a big difference between closely regulated exclusive access to hardware and paid marketing. Is it a marketing plot by Nvidia? Absolutely. Does it undermine DF's credibility whatsoever? No. Why? Because they are completely transparent about the process, the limitations involved, and how the data is presented. Their conclusion is also "we should all wait for reviews, but this looks very good for now":

It's early days with RTX 3080 testing. In terms of addressing the claims of the biggest generational leap Nvidia has ever delivered, I think the reviews process with the mass of data from multiple outlets testing a much wider range of titles is going to be the ultimate test for validating that claim. That said, some of the numbers I saw in my tests were quite extraordinary and on a more general level, the role of DLSS in accelerating RT titles can't be understated.

That there? That's nuance. (Something that is sorely lacking in your post.) They are making it very, very clear that this is a preliminary hands-on, in no way an exhaustive review, and that there were massive limitations on which games they could test, how they could run the tests, which data they could present from these tests, and how they could be presented. There is also no disclosure of this being paid content, which they would be required by law to provide if it were. So no, this is not a "paid marketing deal". It's an exclusive preview. Learn the difference.
 

r9

Joined
Jul 28, 2008
Messages
3,300 (0.55/day)
System Name Primary|Secondary|Poweredge r410|Dell XPS|SteamDeck
Processor i7 11700k|i7 9700k|2 x E5620 |i5 5500U|Zen 2 4c/8t
Memory 32GB DDR4|16GB DDR4|16GB DDR4|32GB ECC DDR3|8GB DDR4|16GB LPDDR5
Video Card(s) RX 7800xt|RX 6700xt |On-Board|On-Board|8 RDNA 2 CUs
Storage 2TB m.2|512GB SSD+1TB SSD|2x256GBSSD 2x2TBGB|256GB sata|512GB nvme
Display(s) 50" 4k TV | Dell 27" |22" |3.3"|7"
VR HMD Samsung Odyssey+ | Oculus Quest 2
Software Windows 11 Pro|Windows 10 Pro|Windows 10 Home| Server 2012 r2|Windows 10 Pro
I just hope this new gen brings the prices down on used cards, because the prices for used GPUs are nuts.
People are asking new-card money for their used crap.
Hopefully AMD has something competitive this time around and has the same effect on Nvidia as it had on Intel.
Because after many, many years I see better value in an Intel i7 10700 than in any Ryzen.
 
Joined
Sep 1, 2009
Messages
1,232 (0.22/day)
Location
CO
System Name 4k
Processor AMD 5800x3D
Motherboard MSI MAG b550m Mortar Wifi
Cooling ARCTIC Liquid Freezer II 240
Memory 4x8Gb Crucial Ballistix 3600 CL16 bl8g36c16u4b.m8fe1
Video Card(s) Nvidia Reference 3080Ti
Storage ADATA XPG SX8200 Pro 1TB
Display(s) LG 48" C1
Case CORSAIR Carbide AIR 240 Micro-ATX
Audio Device(s) Asus Xonar STX
Power Supply EVGA SuperNOVA 650W
Software Microsoft Windows10 Pro x64
The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
MS didn't have to overtune the SSD because they created a whole new API, DirectStorage. I think the PS5 and the Xbox will have about the same effective speed.
 
Joined
Jul 19, 2016
Messages
482 (0.16/day)
MS didn't have to overtune the SSD because they created a whole new API, DirectStorage. I think the PS5 and the Xbox will have about the same effective speed.

No, the PS5 SSD is literally TWICE as fast, and its IO is apparently significantly more advanced; there's no chance they are similar in performance.

Just a technicality: there's a big difference between closely regulated exclusive access to hardware and paid marketing. Is it a marketing plot by Nvidia? Absolutely. Does it undermine DF's credibility whatsoever? No. Why? Because they are completely transparent about the process, the limitations involved, and how the data is presented. Their conclusion is also "we should all wait for reviews, but this looks very good for now":



That there? That's nuance. (Something that is sorely lacking in your post.) They are making it very, very clear that this is a preliminary hands-on, in no way an exhaustive review, and that there were massive limitations on which games they could test, how they could run the tests, which data they could present from these tests, and how they could be presented. There is also no disclosure of this being paid content, which they would be required by law to provide if it were. So no, this is not a "paid marketing deal". It's an exclusive preview. Learn the difference.

It's a paid marketing deal.
 