Monday, August 20th 2018

NVIDIA GeForce RTX 2080, 2070, and 2080 Ti Specifications Revealed

(Update 1: NVIDIA at its Köln event also revealed that these graphics cards were made for overclocking, with highly improved power regulation and management systems designed for that purpose. Jensen Huang himself added that these new graphics cards, with their dual 13-blade fan design, run at one fifth the noise level of a previous-generation GeForce GTX 1080 Ti.)

Product pages and pricing of the GeForce RTX 2080, RTX 2070, and RTX 2080 Ti went up ahead of the formal unveiling. The RTX 2080 features 2944 CUDA cores, a 1515 MHz GPU clock, a 1710 MHz boost clock, and 14 Gbps GDDR6 memory; the RTX 2070 is equipped with 2304 CUDA cores, a 1410 MHz GPU clock, a 1620 MHz boost clock, and the same 14 Gbps GDDR6 memory. The RTX 2080 Ti leads the pack with 4352 CUDA cores, a 1350 MHz GPU clock, a 1545 MHz boost clock, and 14 Gbps memory on a wider 352-bit memory interface.
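
For context, memory bandwidth follows directly from these figures. Assuming the RTX 2080 and RTX 2070 use a 256-bit bus (the 352-bit figure above is stated only for the 2080 Ti), bandwidth = data rate × bus width / 8:

RTX 2080 Ti: 14 Gbps × 352 / 8 = 616 GB/s
RTX 2080 / RTX 2070: 14 Gbps × 256 / 8 = 448 GB/s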

59 Comments on NVIDIA GeForce RTX 2080, 2070, and 2080 Ti Specifications Revealed

#26
bug
FordGT90Concept: Interesting, so the RTX 2070 has about the same power consumption as the GTX 1080. If we pretend the 27 W is baked into that figure, the efficiency gains are minimal at best, nonexistent at worst.
It seems they're forced to backpedal a little; otherwise AMD will never catch up.
Posted on Reply
#27
efikkan
FordGT90Concept: Vega 20 could still happen this year. GloFo 7 nm vs. TSMC 12 nm (and less RT/Tensor garbage).
I believe TSMC will be first on 7 nm, and AMD will have a very low volume of Vega 20 around Christmas.
But considering it will bring more fp64 performance and probably marginally higher clocks, what relevance does it have for the consumer market? At best we're looking at a symbolic top model performing a little better than Vega 64, which by that time will be competing with GTX 2060 in the $300 segment.
Vayra86: Yup. AMD has a real opportunity here to leap forward on raw gaming performance and let the RTX nonsense pass them by entirely, instead of choosing to embrace it and implement it in a crappy way.

That headroom can make their current tech competitive again. And they have TONS of room to play with price now.
Really? In which timeframe? You do know that Pascal offers 80-85% more performance per watt vs. Polaris and Vega, and Volta and Turing are even better. Even with Nvidia focusing on RTX, AMD still has a historic gap to close.

But AMD is certainly best at one thing; they have the biggest room for improvement!
Posted on Reply
#28
Tsukiyomi91
I'm intrigued by the RTX 2070... under $500 is a good sign IMO.
Posted on Reply
#29
Somethingnew
Tsukiyomi91: I'm intrigued by the RTX 2070... under $500 is a good sign IMO.
A good sign? So a $100 price hike is OK? If it only reaches the GTX 1080's performance level, when the previous-generation GTX 1070 reached the GTX 980 Ti's performance level, then with the price hike, the same amount of memory, a 12 nm process, and a higher TDP, I wouldn't call that good. That's what happens when there isn't any real competition. If the trend continues, the next generation of mid-range graphics cards could cost $600.
Posted on Reply
#30
efikkan
Somethingnew: A good sign? So a $100 price hike is OK? If it only reaches the GTX 1080's performance level, when the previous-generation GTX 1070 reached the GTX 980 Ti's performance level, then with the price hike, the same amount of memory, a 12 nm process, and a higher TDP, I wouldn't call that good. That's what happens when there isn't any real competition. If the trend continues, the next generation of mid-range graphics cards could cost $600.
Remember that GeForce 8800 Ultra launched at $829 in 2007 ($1008 in 2018 dollars), and that was at a time when AMD was competing.

I still wish prices were lower though…
Posted on Reply
#31
FordGT90Concept
"I go fast!1!11!1!"
efikkan: I believe TSMC will be first on 7 nm, and AMD will have a very low volume of Vega 20 around Christmas.
But considering it will bring more fp64 performance and probably marginally higher clocks, what relevance does it have for the consumer market? At best we're looking at a symbolic top model performing a little better than Vega 64, which by that time will be competing with GTX 2060 in the $300 segment.
You're forgetting that Vega 64 is basically Fury X with a few minor architectural tweaks:
Fury X: 8.9 billion transistors, 596 mm², 28 nm TSMC
1080 Ti: 11.8 billion transistors, 471 mm², 16 nm TSMC
Vega 64: 12.5 billion transistors, 510 mm², 14 nm GloFo
2080 Ti: 18.6 billion transistors, 754 mm², 12 nm TSMC

I think NVIDIA wasting their time on RT/Tensor gave AMD a window to steal the gaming crown.
Posted on Reply
#32
bug
Somethingnew: A good sign? So a $100 price hike is OK? If it only reaches the GTX 1080's performance level, when the previous-generation GTX 1070 reached the GTX 980 Ti's performance level, then with the price hike, the same amount of memory, a 12 nm process, and a higher TDP, I wouldn't call that good. That's what happens when there isn't any real competition. If the trend continues, the next generation of mid-range graphics cards could cost $600.
You can rest easy. From Nvidia's presentation, the RTX 2070 has higher performance than the Titan Xp because:
SM is completely brand new
Concurrent FP and INT execution: FP for colors, INT for addresses, for example
Enjoy.
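
As a rough illustration of what "FP for colors, INT for addresses" means in practice, here is a minimal CUDA sketch (a hypothetical kernel for illustration, not anything from NVIDIA's presentation): the integer pipe computes texel offsets while the floating-point pipe does the color math, and Turing's separate INT32 and FP32 datapaths can issue the two streams concurrently.

__global__ void shade(const float *tex, float *out, int width, int n)
{
    // INT work: compute this thread's texel address
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int x = i % width;            // INT: texel column
    int y = i / width;            // INT: texel row
    float c = tex[y * width + x]; // INT offset feeds the load

    // FP work: the actual color math
    out[i] = c * 0.5f + 0.25f;
}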
Posted on Reply
#33
efikkan
FordGT90Concept: You're forgetting that Vega 10 is basically Fiji with a minor architectural tweak:
Fury X: 8.9 billion transistors, 596 mm², 28 nm TSMC
Vega 64: 12.5 billion transistors, 510 mm², 14 nm GloFo
2080 Ti: 18.6 billion transistors, 754 mm², 12 nm TSMC

AMD isn't really trying.
I know that very well. AMD has basically given up on desktop gaming for the time being.

I still don't understand the relevance of the Vega 20 you referred to; we both know it wouldn't make a real difference vs. Turing.
Posted on Reply
#34
FordGT90Concept
"I go fast!1!11!1!"
There's literally nothing stopping AMD from making a Vega 96 on 7 nm.
Posted on Reply
#35
Somethingnew
But higher performance in what, exactly? You mean ray tracing and Tensor cores? I am interested in real-world gaming performance. If the RTX 2070 lands at the GTX 1080's level of performance, especially considering the price hike, I will be disappointed; it would mean we are paying the same money per frame as in 2016. Isn't that disappointing? We will see. I will wait for reviews for a final conclusion, but so far the pricing looks like a big disappointment.
Posted on Reply
#36
bug
FordGT90Concept: There's literally nothing stopping AMD from making a Vega 96 on 7 nm.
I'd say there is. It's that power consumption every AMD aficionado tells me doesn't matter one bit.
Posted on Reply
#37
Durvelle27
Why are people saying price hike?

Literally, the RTX 2070 is the same price as the current GTX 1080.
Posted on Reply
#38
efikkan
FordGT90Concept: There's literally nothing stopping AMD from making a Vega 96 on 7 nm.
Oh, but there is. You know GPUs are always scaled down, not up; just slapping more clusters onto an existing design is not really an option. Even if the rest of the design stays "the same", it will take quite some time to re-balance everything. And even a hypothetical "Vega 96" with 50% extra cores wouldn't give anywhere near 50% extra gaming performance; all the scaling issues present in Vega would just become even more visible. The largest bottleneck for Vega is the GPU's ability to schedule work chunks, and that only gets harder with more clusters.

AMD's problem isn't lack of brute-force power; it's lack of resource management.
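
To put rough numbers on that scaling argument (an illustrative Amdahl's-law sketch with an assumed 80% scalable fraction, not a measurement): speedup = 1 / ((1 - p) + p / N). With p = 0.8 and 1.5× the cores, speedup = 1 / (0.2 + 0.8 / 1.5) ≈ 1.36, so 50% more cores buys only ~36% more performance, and the gap widens as the serial scheduling share grows.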
Posted on Reply
#39
Somethingnew
Yes, the RTX 2070 is in the price range of the GTX 1080, so after almost three years we would pay the same for the same performance. Where is the progress in that? Only a higher price point. If the RTX 2070 ends up at the GTX 1080's performance level at the same price, how would you describe that? Almost three years of waiting for the same performance as a GTX 1080, at the same price?
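
In cost-per-frame terms (hypothetical round numbers for illustration): cost per frame = price / average fps. If a $500 card delivered 100 fps in 2016 ($5 per frame) and a $500 card delivers the same 100 fps in 2018, price/performance has made exactly zero progress in almost three years.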
Posted on Reply
#40
kings
FordGT90Concept: I think NVIDIA wasting their time on RT/Tensor gave AMD a window to steal the gaming crown.
Wasting time? I don't get it. When AMD advertised Async Compute, Primitive Shaders, etc., many people had an orgasm. But now with Nvidia, with tech that will be the future of gaming, it's a waste of time!

Above all, Nvidia is trying to propel the industry forward with new technologies and more efficient ways of extracting performance! It's not just a simple core-count increase or clock increase to keep delivering more frames!

Of course it will not be immediate, this is just the beginning, but no doubt this will be the future of gaming. The trade-off for now is the price. Next year, the density of 7 nm may allow smaller and cheaper chips.

One part that I found very interesting is using the Tensor Cores for anti-aliasing (DLSS), which in theory can lead to minimal or nonexistent losses compared to other AA methods.

It would indeed be something fantastic if we could have AA in its full splendor without loss of performance. In the UE4 demo they showed at 4K with AA, the 1080 Ti could only run at 30 fps, while the RTX with the Tensor Cores ran it at 73-78 fps.
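
For what those demo numbers imply: 73 / 30 ≈ 2.4, so in that one vendor-selected UE4 workload the Tensor-assisted path renders roughly 2.4× the frame rate of a 1080 Ti.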
Posted on Reply
#41
Xzibit
bug: You can rest easy. From Nvidia's presentation: [...] Enjoy.
"SM is completely brand new
Concurrent FP and INT execution: FP for colors, INT for addresses, for example"

No, it's not. It's in Volta; it's new to GeForce. Turing is a revised Volta, and we can look to the Titan V for what to expect.
Posted on Reply
#42
FordGT90Concept
"I go fast!1!11!1!"
bug: I'd say there is. It's that power consumption every AMD aficionado tells me doesn't matter one bit.
Wider chips can run at lower power and still come out ahead on performance.

Also bear in mind that 16 nm TSMC is better than 14 nm GloFo. We don't know how 7 nm GloFo will stack up against 12 nm TSMC.
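
The physics behind the wide-and-slow argument, as a back-of-the-envelope sketch (illustrative numbers; the formula is the standard CMOS dynamic-power relation): P ≈ C × V² × f, and lower clocks permit lower voltage. A chip with 1.5× the execution units at 0.8× the clock and 0.9× the voltage does about 1.5 × 0.8 = 1.2× the work while drawing roughly 1.5 × 0.9² × 0.8 ≈ 0.97× the power.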
kings: Wasting time? I don't get it. When AMD advertised Async Compute, Primitive Shaders, etc., many people had an orgasm.
Primitive Shaders? Nobody did. Async compute? You can see it working with ReLive, DirectX 12, and Vulkan.
kings: But now with Nvidia, with tech that will be the future of gaming, it's a waste of time!
Because no one is saying that other than NVIDIA.
Posted on Reply
#43
efikkan
It will be interesting to see these newly redesigned SMs fine-tuned for gaming. Estimating performance based on Pascal will not be accurate.

But it's funny to see all the opinionators on YouTube having a meltdown over this launch.
Posted on Reply
#44
Prince Valiant
kings: Wasting time? I don't get it. When AMD advertised Async Compute, Primitive Shaders, etc., many people had an orgasm. But now with Nvidia, with tech that will be the future of gaming, it's a waste of time!

Above all, Nvidia is trying to propel the industry forward with new technologies and more efficient ways of extracting performance! It's not just a simple core-count increase or clock increase to keep delivering more frames!

Of course it will not be immediate, this is just the beginning, but no doubt this will be the future of gaming. The trade-off for now is the price. Next year, the density of 7 nm may allow smaller and cheaper chips.

One part that I found very interesting is using the Tensor Cores for anti-aliasing (DLSS), which in theory can lead to minimal or nonexistent losses compared to other AA methods.

It would indeed be something fantastic if we could have AA in its full splendor without loss of performance. In the UE4 demo they showed at 4K with AA, the 1080 Ti could only run at 30 fps, while the RTX with the Tensor Cores ran it at 73-78 fps.
I'll take a pass on getting excited about yet another AA method we hardly know anything about. The number of dead AA methods has been piling up for years.
Posted on Reply
#45
FordGT90Concept
"I go fast!1!11!1!"
Too right. When's the last time anyone got excited about anything AA? Oh right, when it was invented back in the late 1900s. They're using more clocks to get inferior results. The only reason they made it is so that the Tensor cores aren't idle. A bridge to nowhere, that. If they had invested in more compute power instead of Tensor cores, people could be running 4K without AA instead of 2560x1440 with AA. This is Fermi all over again, which is an opportunity for AMD to exploit.
Posted on Reply
#46
bug
efikkan: Oh, but there is. You know GPUs are always scaled down, not up; just slapping more clusters onto an existing design is not really an option. Even if the rest of the design stays "the same", it will take quite some time to re-balance everything. And even a hypothetical "Vega 96" with 50% extra cores wouldn't give anywhere near 50% extra gaming performance; all the scaling issues present in Vega would just become even more visible. The largest bottleneck for Vega is the GPU's ability to schedule work chunks, and that only gets harder with more clusters.

AMD's problem isn't lack of brute-force power; it's lack of resource management.
The bad news is, when we got Pascal, everybody was hoping Polaris would keep it in check. Then they hoped Vega would do that job.
This time we have Turing, and AMD doesn't even have a public plan for their next step.
Posted on Reply
#47
Fluffmeister
Nvidia is in the privileged position of having its previous-gen $699 GTX 1080 Ti still 30% ahead of the competition; throw in R&D, a much larger die, GDDR6, and Vega being all mouth and no trousers... and here we are.
Posted on Reply
#48
Tatty_Two
Gone Fishing
SomethingnewYes rtx 2070 , is in the price range of gtx 1080 , so after almost 3 years , we would pay the same for the same performance , where is progress in that ? Only higher price point . If rtx 2070 would be in performance level of gtx 1080 ,with the same price , how you would desribe that ? Almost 3 years of waiting , for the same performance of gtx 1080 and for the same price ?
Sadly, it is called inflation, market share, market dominance, and a lack of top-end competition... ohhh, and a little bit of arrogance thrown in for good measure.
Posted on Reply
#49
Tsukiyomi91
@Somethingnew also, how sure are you that the RTX 2070 will deliver "the same performance as the GTX 1080" when there isn't any solid evidence circulating, let alone proof? How sure are you that it'll be that bad? Does your word carry weight, or are you just afraid of a new architecture that makes the existing GPU market look obsolete because of how it does ray tracing? Please keep your thoughts to yourself until reviewers get their hands on the reference cards, bench them with current release drivers, and write up their findings.
Anyways...
With AMD having no luck, or no time, to release their upcoming Navi or shrunken-down Vega cores to keep up with Nvidia's new silicon, I can say that they've lost the battle yet again, and the end users, which is us, are paying the price.
Posted on Reply
#50
Manoa
Tsukiyomi91: I can say that they've lost the battle yet again, and the end users, which is us, are paying the price.
It's true. Regardless of whether the RT/Tensor silicon was worth the spend, this kind of price for graphics is simply too much.
Posted on Reply