Tuesday, September 20th 2022

NVIDIA Project Beyond GTC Keynote Address: Expect the Expected (RTX 4090)

NVIDIA just kicked off the GTC Autumn 2022 Keynote address that culminates in Project Beyond, the company's launch vehicle for its next-generation GeForce RTX 40-series graphics cards based on the "Ada" architecture. These are expected to nearly double the performance over the present generation, ushering in a new era of photo-real graphics as we inch closer to the metaverse. NVIDIA CEO Jensen Huang is expected to take center stage to launch these cards.

15:00 UTC: The show is on the road.
15:00 UTC: AI remains the center focus, including how it plays with gaming.

15:01 UTC: Racer X is a real-time interactive tech demo. Coming soon.
15:02 UTC: "Future games will be simulations, not pre-baked," says Jensen Huang.
15:03 UTC: This is seriously good stuff (Racer X). It runs on a single GPU, in real time, and uses RTX Neural Rendering.
15:05 UTC: Ada Lovelace is a huge GPU
15:06 UTC: 76 billion transistors, over 18,000 shaders, and Micron GDDR6X memory. Shader Execution Reordering is a major innovation, as big as out-of-order execution was for CPUs, with in-game performance gains of up to 25%. Ada is built on TSMC 4N, a custom 4 nm process designed together with NVIDIA.

There's a new streaming multiprocessor design with a total of 90 TFLOPS of shader power. Power efficiency is doubled over Ampere.
Ray tracing is now in its third generation, with 200 RT TFLOPS and twice the triangle intersection speed.
Deep learning uses 4th-generation Tensor Cores with 1,400 TFLOPS, plus a new "Optical Flow Accelerator."
15:07 UTC: Shader Execution Reordering is similar to what we saw with Intel Xe-HPG.
15:08 UTC: Several new hardware-accelerated ray tracing innovations with 3rd gen RTX.
15:09 UTC: DLSS 3 is announced. It brings several new innovations, including temporal components and Reflex latency optimizations, and it generates new frames without involving the graphics pipeline.
15:11 UTC: Cyberpunk 2077 to get DLSS 3 and SER; NVIDIA claims a 16x increase in effective performance using DLSS 3 vs. DLSS 1. Microsoft Flight Simulator to get DLSS 3 support as well.
15:13 UTC: Portal with RTX, a remaster in the same vein as Quake II RTX, available from November, created with Omniverse RTX Remix.
15:14 UTC: Ada offers a giant leap in total performance. Everything has been increased: shader 40 -> 90 TFLOPS, RTX 78 -> 200 TFLOPS, OFA 126 -> 300 TFLOPS, Tensor 320 -> 1,400 TFLOPS.
15:17 UTC: Power efficiency is more than doubled, but power goes up to 450 W now.
15:18 UTC: GeForce RTX 4090 will be available on October 12, priced at $1,600. It comes with 24 GB of GDDR6X and is claimed to be 2-4x faster than the RTX 3090 Ti.
15:18 UTC: The RTX 4080 comes in two versions, 16 GB and 12 GB. The 16 GB version starts at $1,200, the 12 GB at $900. Both are claimed to be 2-4x faster than the RTX 3080 Ti.
15:19 UTC: New pricing for the RTX 30-series, which is now "for mainstream gamers," while the RTX 40-series is "for enthusiasts."
15:19 UTC: "Ada is a quantum leap for gamers"—improved ray tracing, shader execution reordering, DLSS 3.
15:20 UTC: Updates to Omniverse

15:26 UTC: The Racer X demo was built by a few dozen artists in just 3 months.
15:31 UTC: Digital twins will play a vital role in product development and lifecycle maintenance.
15:31 UTC: Over 150 connectors to Omniverse.
15:33 UTC: GDN (graphics delivery network) is the new CDN. Graphics rendering over the Internet will be as big in the future as streaming video is today.
15:37 UTC: Omniverse Cloud, a planetary-scale GDN
15:37 UTC: DRIVE Thor superchip for automotive applications.

15:41 UTC: NVIDIA's next-generation DRIVE platform.

333 Comments on NVIDIA Project Beyond GTC Keynote Address: Expect the Expected (RTX 4090)

#226
80-watt Hamster
Valantar: Yeah, I think that's spot on. And I also think that if Nvidia were to get their will with going that way, it'll be the end of PC gaming - barring someone else stepping in to fill their shoes. PC gaming as a hyper-expensive niche just isn't sustainable as a business. If anything, the Steam Deck has demonstrated that the future of gaming might lie in the exact opposite direction, and that we might in some ways be reaching a point of "good enough" in a lot of ways.

I also agree that this situation presents a massive opportunity for AMD. Regardless of where RDNA3 peaks in terms of absolute performance, if they flesh out the $300-700 segment with high performing, good value options relatively quickly, they have an unprecedented opportunity to overtake Nvidia - as this pitch from Nvidia (as well as their earnings call) confirms that they've got more Ampere stock than they know what to do with, and they don't want to stomach the cost of cutting prices unless they really have to. If RDNA3 actually delivers its promised 50% perf/W increase, and die sizes and MCM packaging costs allow for somewhat competitive pricing, this could be the generation where AMD makes a significant move up from their perennial ~20% market share. That's my hope - but knowing AMD and their recent preference for high ASPs and following in Nvidia's pricing footsteps, I'm not that hopeful. Fingers crossed, though.


This is absolutely true. There's also the simple fact of GPUs largely being good enough for quite a lot of things for quite a while. We're no longer seeing anyone need a GPU upgrade after even 3 years - unless you're an incorrigible snob, at least. Lots of factors play into this, from less growth in graphical requirements to the massive presence of AA/lower budget/indie games in the current market to open source upscaling tech to lower resolutions in many cases still being fine in terms of graphical fidelity. It could absolutely be that Nvidia is preparing for a future of (much) lower sales, but I don't think they can do that by pushing for $1500 GPUs becoming even remotely normal - the number of people affording those isn't going to increase anytime soon. Diversifying into other markets is smart, though, and AMD also needs to do the same (not that they aren't already). But the impending/already happened death of the 1-2 year GPU upgrade cycle won't be solved by higher prices and more premium products.
I think it's less about trying to position $1,500 GPUs as normal and more about using the halo effect to normalize $500-700 cards as midrange. $350 was upper midrange not that long ago.
#227
N3M3515
dick_cheney: If that's what you consider fair pricing logic, then next GPU gen will be $1,499 for the 5080 because it's faster.
Finally, someone with a brain.
/rant
I for the life of me don't understand all these people claiming these prices are normal, just because it's faster! IT'S FUCKING SUPPOSED TO BE FASTER, it's a new gen for god's sake. How on earth is it justifiable that a GPU that launched at $700 is now launching at freaking $1,100?!?!?!?

So the fucking next-gen normal price is $2,000 because it's 2x faster than the 4080????!?!?!?
/rant
#228
Legacy-ZA
You would think that, after the record-high profits nVidia reaped, they would cut gamers some slack this time around. But it looks like they are just rolling ahead and giving us the middle finger: an RTX 4080 that should be a 4070... and the prices... LOL! GTFO nGreedia!

Don't buy their stuff unless you NEED a replacement; time to teach them a lesson they won't soon forget.
#229
Tek-Check
N3M3515: Finally, someone with a brain.
/rant
I for the life of me don't understand all these people claiming these prices are normal, just because it's faster! IT'S FUCKING SUPPOSED TO BE FASTER, it's a new gen for god's sake. How on earth is it justifiable that a GPU that launched at $700 is now launching at freaking $1,100?!?!?!?

So the fucking next-gen normal price is $2,000 because it's 2x faster than the 4080????!?!?!?
/rant
Calm down. It's not normal. If high-end GPU enthusiasts are happy to be milked, allow them the pleasure. It's not a mining crisis anymore, so those expensive cards will have a harder time selling when you can buy a used 3090 for under $700 on eBay.
#230
wheresmycar
80-watt Hamster: I think it's less about trying to position $1,500 GPUs as normal and more about using the halo effect to normalize $500-700 cards as midrange. $350 was upper midrange not that long ago.
I had the same feeling back in 2017, seeing the price increase for the 1080 Ti, which many (like myself) were induced to invest in. For me it was a tough decision at the time, since my earlier purchases of previous-gen hardware at the GPU level had hardly ever shot north of £350-£400. To justify forking out £600 or so for a 1080 Ti, I dropped the custom liquid cooling route I had pre-planned for both the GPU and CPU, and convinced myself it was a worthwhile splurge paired with a less expensive AIO. It was apparent back then that if gamers consciously splurged £600-£750 for admission to the top performance territory (or close behind it), the end result was inevitable: perpetual, immense gen-to-gen price increases, or in more subtle terms, taking the p-i-s-s. What I didn't anticipate was how far the daylight robbery would stretch, and to what extent its influence would carry over to mid-performance cards. I can't see a 4070/4060 being reasonably priced by any stretch of the imagination, and even worse, I think we're losing the ability to define what is reasonable, as we're long past sensible price moderation and a large fraction of the added cost already appears baked in. Worst of all, I felt this way at the launch of the 1080 Ti... the current offerings are just a bad joke gone right for NVIDIA, and soon to follow, AMD.

Anyway, I've made my mind up: I'm not spending a dime over £800 for my next upgrade, which is already an extravagant ask. I'll wait for RDNA3 to drop too. Whatever becomes available in this price range, if it delivers a sizeable performance uplift over my current graphics card (2080 Ti), I'm pulling the trigger. What I won't be doing is picking up a high-refresh-rate 4K display to replace my current 1440p one, which by now (5 years on) I thought would be economically feasible. NVIDIA, thanks!
#231
Sisyphus
Valantar: [...]
I have to correct something: the 4090 will not support NVLink. The 4090 will be more of a gaming card than the 3090 was. Interesting. Now there is room for a new RTX Titan around $2,000 with NVLink. It's also possible NVIDIA wants to stop cannibalizing Quadro clients. It seems the 4090 is mainly for 4K/RT enthusiasts. Didn't expect that.
#232
oneils
Dirt Chip: Hopefully we can have DLSS 3 without RT (RT off), so a 4050/4060 will be just fine for 4K 60 FPS at less-than-max-ultra settings.



4080 12gb
4080 12gb ti
4080 12gb super
4080 12gb super ti

4080 16gb
4080 16gb ti
4080 16gb super
4080 16gb super ti

4090 20gb/ti/super/super ti

4090 24gb ti/super/ super ti

and you can have every bit of +/-256 shaders you like
:)
Hahaha. Reminds me of the early 2000s. Around 2002-2004. So many tiers...
#233
Sisyphus
oneils: Hahaha. Reminds me of the early 2000s. Around 2002-2004. So many tiers...
And so many board partners and revisions. Life is complex. :laugh:
#234
AnotherReader
oxrufiioxo: I think AMD is more focused on CPU than GPU; all their products are made on the same TSMC node, unlike Nvidia, which gets a special node just for its GPUs. I think AMD is happy with 20-25% of the market and just making a profit on top of being in the PS5 and Series X/S.

That being said, I still expect the 7900 XT to be $1,200 or less and the 7800 XT to be $800 or less. I just don't think AMD cares as long as it's $50-100 cheaper than a comparable Nvidia SKU, even if it's much cheaper to make. I also wouldn't be surprised if the 7900 XT were $1,500, though... I doubt anything below these two cards will launch anytime soon, with maybe a 7700 XT early next year; to me, that will be the more interesting card in terms of how they price it.

Another thing, and this goes for Nvidia as well: TSMC is really the one dictating prices. Not sure if it's true, but the new Ada GPUs are supposedly much more costly to make than Ampere. I really would like to see a breakdown of component costs, though I know SMDs are getting more expensive as well.
TSMC's N5 is significantly more expensive than their own N7 or Samsung's 8 nm node. That isn't the full explanation though. Given the size of the 4090's die, the 4080 16 GB should be based on a die around 400 mm^2. That is about the same size as GA104, i.e. the die used for the 3060 Ti, 3070, and 3070 Ti. As the 4080 16 GB is unlikely to be harvested from perfect dies, it will cost Nvidia about 67 to 96 dollars more than GA104. With a 50% margin, this means the 4080 16 GB should be 134 to 192 dollars more than the 3070 based on the die cost alone. I don't know what 16 GB of GDDR6X costs compared to 8 GB of GDDR6. Let's say it's an extremely unlikely $220 vs $55. Now the difference is $165 for the memory alone; margins for the retailer and AIB should increase it to $200 or so. Overall, the cost increase should top out at $450 for memory plus GPU, and is probably more realistically around $300 or less. The 4080 16 GB shouldn't be sold for more than $949 based on this very cursory analysis.
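For anyone who wants to play with the numbers, here's a minimal sketch of the estimate above. It assumes the wafer prices quoted later in this thread (~$16,000 for N5, ~$7,000 for Samsung 8 nm), a ~400 mm^2 Ada die vs. GA104's ~392 mm^2, and a simple Poisson yield model with the 0.09 defects/cm^2 density mentioned below; none of these inputs are confirmed figures.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Standard dies-per-wafer approximation with an edge-loss correction."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of defect-free dies: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

def cost_per_good_die(wafer_cost_usd, die_area_mm2, defects_per_cm2=0.09):
    """Wafer cost spread across defect-free dies (ignores harvesting cut-down dies)."""
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost_usd / good_dies

ada_die = cost_per_good_die(16_000, 400)  # hypothetical ~400 mm^2 Ada die on N5
ga104 = cost_per_good_die(7_000, 392)     # GA104 on Samsung 8 nm
print(f"~${ada_die:.0f} vs ~${ga104:.0f} per good die, difference ~${ada_die - ga104:.0f}")
```

With these inputs the difference lands around $90 per die, at the upper end of the $67-96 range above; harvesting defective dies into cut-down SKUs, as the comment notes, would shrink the gap further.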
#235
oxrufiioxo
AnotherReader: TSMC's N5 is significantly more expensive than their own N7 or Samsung's 8 nm node. That isn't the full explanation though. Given the size of the 4090's die, the 4080 16 GB should be based on a die around 400 mm^2. That is about the same size as GA104, i.e. the die used for the 3060 Ti, 3070, and 3070 Ti. As the 4080 16 GB is unlikely to be harvested from perfect dies, it will cost Nvidia about 67 to 96 dollars more than GA104. With a 50% margin, this means the 4080 16 GB should be 134 to 192 dollars more than the 3070 based on the die cost alone. I don't know what 16 GB of GDDR6X costs compared to 8 GB of GDDR6. Let's say it's an extremely unlikely $220 vs $55. Now the difference is $165 for the memory alone; margins for the retailer and AIB should increase it to $200 or so. Overall, the cost increase should top out at $450 for memory plus GPU, and is probably more realistically around $300 or less. The 4080 16 GB shouldn't be sold for more than $949 based on this very cursory analysis.
TSMC 5 nm is like $17,000 vs. $9,000 for 7 nm per 300 mm wafer. Not sure if the 4N that Nvidia is using is even more expensive. I'm pretty sure Samsung's 8 nm was much cheaper than TSMC's 7 nm. No idea what the yields are, so it's hard to compare pricing.

I agree, though; pricing seems to suck on both 4080s. My guess is that if Ampere were all sold out, they'd price them at $799 and $999, but I'm probably wrong.
#236
AusWolf
Legacy-ZA: You would think that, after the record-high profits nVidia reaped, they would cut gamers some slack this time around. But it looks like they are just rolling ahead and giving us the middle finger: an RTX 4080 that should be a 4070... and the prices... LOL! GTFO nGreedia!

Don't buy their stuff unless you NEED a replacement; time to teach them a lesson they won't soon forget.
Present-day capitalism is about constant growth, not about cutting anyone slack. If they can sell stuff for more money, they will. Let's see what AMD has in hand. I hope it's not the same "it's more expensive because it's faster" crap.
#237
Valantar
Sisyphus: I have to correct something: the 4090 will not support NVLink. The 4090 will be more of a gaming card than the 3090 was. Interesting. Now there is room for a new RTX Titan around $2,000 with NVLink. It's also possible NVIDIA wants to stop cannibalizing Quadro clients. It seems the 4090 is mainly for 4K/RT enthusiasts. Didn't expect that.
Nvlink isn't necessary for most pro applications either - only the ones where memory coherency between cards is a major benefit. Most renderers and similar applications in the prosumer/pro range treat individual GPUs separately regardless of the presence of an Nvlink bridge. So the loss of Nvlink won't matter much to the types of pros interested in a Titan class GPU anyhow. Of course, if you're working with tens of GB of data that needs loading into VRAM, then Nvlink (and ideally 48GB or higher Quadros) is a must.
#238
gffermari
I don't mind the new naming scheme (two 4080s! like two 3080s, like two 1060s) since I always look at how a card performs at what cost, but... I'm kind of disappointed by the prices.
£949 and £1,269 for the FE??!!

I always read that you should skip the 3000 series if you have a 2080 Ti, but now it seems to be the only reasonable alternative, price-wise.

Also, if someone is willing to pay the price of a 4080, they should go all-in for the 4090. I see the 4080 16 GB as terrible value.
#239
AnotherReader
oxrufiioxo: TSMC 5 nm is like $17,000 vs. $9,000 for 7 nm per 300 mm wafer. Not sure if the 4N that Nvidia is using is even more expensive. I'm pretty sure Samsung's 8 nm was much cheaper than TSMC's 7 nm. No idea what the yields are, so it's hard to compare pricing.

I agree, though; pricing seems to suck on both 4080s. My guess is that if Ampere were all sold out, they'd price them at $799 and $999, but I'm probably wrong.
I used estimates of $16,000 for N5 and $7,000 for Samsung's 8 nm. Yields are actually much higher than the defect rate indicates, as they aren't selling only full dies; they can harvest many, if not all, defective dies. Still, I used a defect density of 0.09 per square cm for both processes, which works out to a yield of 70% for a GA104-sized chip. According to TSMC, N4 uses EUV for more layers, thereby reducing the number of steps and masks and thus offering a cost advantage.
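As a quick sanity check on that 70% figure: assuming the simple Poisson model (yield = exp(-D * A), which isn't named in the post but reproduces its number) and GA104's roughly 392 mm^2 die, the math lines up:

```python
import math

D = 0.09        # assumed defect density from the post above, defects per cm^2
A = 392 / 100   # GA104 die area in cm^2 (~392 mm^2)

yield_fraction = math.exp(-D * A)  # probability a die has zero defects
print(f"estimated yield: {yield_fraction:.1%}")  # ~70.3%
```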
#240
oxrufiioxo
AnotherReader: I used estimates of $16,000 for N5 and $7,000 for Samsung's 8 nm. Yields are actually much higher than the defect rate indicates, as they aren't selling only full dies; they can harvest many, if not all, defective dies. Still, I used a defect density of 0.09 per square cm for both processes, which works out to a yield of 70% for a GA104-sized chip. According to TSMC, N4 uses EUV for more layers, thereby reducing the number of steps and masks and thus offering a cost advantage.
Apparently prices have gone up at least 20% since TSMC announced 5 nm at $17,000 per wafer back in Q1 2020.

Again, it's hard to gauge what Nvidia is actually paying. They are also apparently not offering discounts on large orders.

That's about all I can find... Again, not defending Nvidia; the two 4080s currently look bad... Flagships are always overpriced, but adjusted for inflation the 4090 is cheaper than the 3090 (non-Ti) was at launch, at least in the US.

At the same time, I won't be comparing these cards to launch prices. I'll be comparing them to the current 3090 Ti at $1,100 and my 3080 Ti at its current ~$800 price to gauge whether the 4090/4080 16 GB is worth buying.
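The inflation claim checks out on a back-of-the-envelope basis. A minimal sketch, assuming approximate US CPI-U index values of 260.3 for September 2020 and 296.8 for September 2022 (my own rough figures, not from the thread):

```python
# Adjust the RTX 3090's Sept 2020 launch price for inflation up to the
# RTX 4090's Sept 2022 announcement, using approximate CPI-U index values.
cpi_sep_2020 = 260.3
cpi_sep_2022 = 296.8

rtx_3090_launch = 1499
adjusted = rtx_3090_launch * cpi_sep_2022 / cpi_sep_2020
print(f"RTX 3090's $1,499 is about ${adjusted:,.0f} in Sept 2022 dollars")  # ~$1,709
```

Roughly $1,709 against the 4090's $1,599, so in constant dollars the 4090 does launch slightly cheaper.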
#241
AusWolf
gffermari: I don't mind the new naming scheme (two 4080s! like two 3080s, like two 1060s) since I always look at how a card performs at what cost, but... I'm kind of disappointed by the prices.
£949 and £1,269 for the FE??!!

I always read that you should skip the 3000 series if you have a 2080 Ti, but now it seems to be the only reasonable alternative, price-wise.

Also, if someone is willing to pay the price of a 4080, they should go all-in for the 4090. I see the 4080 16 GB as terrible value.
I agree.

Considering this, my next upgrade will either be a
  • 4080 12 GB if I can find one at a reasonable price (doubtful), or
  • 7700 XT... in fact, I'm toying with the thought of having an all-AMD machine again, or
  • Arc A770. Again, only at a decent enough price. Based on its predicted performance, it should be a sweet deal at around 300 quid.
#242
Valantar
Just for reference here, since people are discussing nodes: the differences between N5 and N4 (assuming it's base N4 and not N4P or N4X, which don't seem to be in volume production yet?) aren't huge, so Nvidia isn't likely to have that much of an advantage from this node difference compared to RDNA3. Isn't AMD also said to be using a bespoke N5 process, not the stock one? Or is that just for Ryzen 7000?

From Anandtech:

This is obviously rather vague - especially the N4 vs. N5 comparison - but we can assume that regular N4 has less of a power drop than -22% and less of a performance increase than +11%. Nvidia can definitely use that power drop, given how Ampere lagged behind RDNA2 and how the upcoming RDNA3 gets a boost from moving to 5 nm. Still, this is definitely going to be interesting going forward.
#243
AusWolf
Valantar: Just for reference here, since people are discussing nodes: the differences between N5 and N4 (assuming it's base N4 and not N4P or N4X, which don't seem to be in volume production yet?) aren't huge, so Nvidia isn't likely to have that much of an advantage from this node difference compared to RDNA3. Isn't AMD also said to be using a bespoke N5 process, not the stock one? Or is that just for Ryzen 7000?

From Anandtech:

This is obviously rather vague - especially the N4 vs. N5 comparison - but we can assume that regular N4 has less of a power drop than -22% and less of a performance increase than +11%. Nvidia can definitely use that power drop, given how Ampere lagged behind RDNA2 and how the upcoming RDNA3 gets a boost from moving to 5 nm. Still, this is definitely going to be interesting going forward.
I've noticed that the advantages of new nodes have been diminishing while the prices of final products have been rapidly growing lately. Rhetorical question: are we coming close to a point when developing new manufacturing nodes just isn't worth it anymore?
#244
Sisyphus
Valantar: Nvlink isn't necessary for most pro applications either - only the ones where memory coherency between cards is a major benefit. Most renderers and similar applications in the prosumer/pro range treat individual GPUs separately regardless of the presence of an Nvlink bridge. So the loss of Nvlink won't matter much to the types of pros interested in a Titan class GPU anyhow. Of course, if you're working with tens of GB of data that needs loading into VRAM, then Nvlink (and ideally 48GB or higher Quadros) is a must.
Thanks. I do offline ray tracing and some content creation with the NVIDIA Iray engine as a hobby. In my application, the complexity of a scene is limited by each card's discrete VRAM, while the CUDA and RT cores can be pooled, as you mentioned. The professionals I follow in the community do content creation, video rendering, and video editing; they recommended NVLink for video rendering and editing. Anyway, a single 3090 or 4090 is good enough for professional work. For $700 I would buy a used 3090 at once. Unfortunately, there's no eBay in my homeland, and the retail price is still around $1,600. I went from a 970 to a 2070 and am planning to upgrade to a 4090 if there is no inexpensive 4070 12-16 GB version. The 4080 models don't meet my price-performance expectations.
#245
N/A
What is this? The 3060 still starts at $329, unchanged since January 12th, 2021. The same good old MSRPs of the cards we never saw because of mining, and could not afford because of inflated prices, are now starting right where they left off.

The 3060 must drop to $199 ASAP, the 3070 to $299, the 3080 to $499, and they should launch a $599 4070 with 12 GB and 7,168 shaders together with the 4080 12 GB. Nvidia is driving me nuts.
#246
Valantar
AusWolf: I've noticed that the advantages of new nodes have been diminishing while the prices of final products have been rapidly growing lately. Rhetorical question: are we coming close to a point when developing new manufacturing nodes just isn't worth it anymore?
I think it's more a case of new nodes being so incredibly expensive to develop that we're seeing a proliferation of specialized and optimized sub-nodes, which in turn leads to smaller perceived changes node-to-node. But in a way, yes - we are definitely reaching a point where for some products, moving to a new node doesn't really make sense, which renders new nodes less useful.
#247
AnotherReader
oxrufiioxo: Apparently prices have gone up at least 20% since TSMC announced 5 nm at $17,000 per wafer back in Q1 2020.

Again, it's hard to gauge what Nvidia is actually paying. They are also apparently not offering discounts on large orders.

That's about all I can find... Again, not defending Nvidia; the two 4080s currently look bad... Flagships are always overpriced, but adjusted for inflation the 4090 is cheaper than the 3090 (non-Ti) was at launch, at least in the US.

At the same time, I won't be comparing these cards to launch prices. I'll be comparing them to the current 3090 Ti at $1,100 and my 3080 Ti at its current ~$800 price to gauge whether the 4090/4080 16 GB is worth buying.
They increased prices by up to 10% for the newer nodes. 20% was for older nodes.
#248
ratirt
Jeez, if this is not the worst release and pricing, then I don't know what is. Screw the performance uplift; it has nothing to do with progress or advancement if you're paying almost $900 for what should have been an affordable mid-range card. It now makes sense what NV has been trying to do with the Turing and Ampere price hikes: slowly, the prices go up for the top and mid-range products, and this is exactly the outcome. I'm guessing it won't stop.
#249
AusWolf
ratirt: Jeez, if this is not the worst release and pricing, then I don't know what is. Screw the performance uplift; it has nothing to do with progress or advancement if you're paying almost $900 for what should have been an affordable mid-range card. It now makes sense what NV has been trying to do with the Turing and Ampere price hikes: slowly, the prices go up for the top and mid-range products, and this is exactly the outcome. I'm guessing it won't stop.
Nvidia in 2014: "Look, here's twice the performance for the same price."
Nvidia in 2022: "You need to pay more to get more. That's just the way it is."
WTF?
#250
ratirt
AusWolf: Nvidia in 2014: "Look, here's twice the performance for the same price."
Nvidia in 2022: "You need to pay more to get more. That's just the way it is."
WTF?
Exactly that WTF. I understand that prices can go up due to the economic turbulence we're currently experiencing, but this is ridiculous. Using that angle to justify unjustified price hikes, which NV has been preparing for over the last few years, is not OK, and I think the Russia-Ukraine war and the post-COVID situation make it easier for companies to justify these hikes. All of this is understandable from a company's perspective: use whatever angle you can to make people pay more. What I don't understand is the people here who are OK with these hikes (a blind man would have noticed what NV has been doing since Turing) and justify them because the performance is higher. Yes, performance is supposed to be higher every gen so you get more for the same price, but it has to be evident. If you get scraps as the "more" instead, is that still OK? I disagree with the prices and with the excuse of a suffering economy and inflation. Just because there is a little choke in the economy, prices should not be twice as high. Everything is being turned upside down, and the worst part is that more and more people are OK with it. What a disgrace. AMD has a chance to attack NV hard on pricing, but they won't. They will go the same price-hike route and price their products according to NV. At least that's what I'm betting on.

About DLSS 3: do I understand correctly that only 4000-series cards can use DLSS 3, or will Turing and Ampere be able to use it too?