Friday, June 28th 2019
NVIDIA RTX SUPER Lineup Detailed, Pricing Outed
NVIDIA has officially confirmed pricing and SKU availability for its refreshed Turing lineup featuring the SUPER graphics cards we've been talking about for ages now. Primed as a way to steal AMD's Navi release thunder, the new SUPER lineup means previously released NVIDIA graphics cards will hit EOL status as soon as their souped-up SUPER versions become available, come July 2nd.
The RTX 2060 and RTX 2080 Ti will live on, for now, as the cheapest and most powerful entries into the world of hardware-based raytracing acceleration, respectively. The RTX 2070 and RTX 2080, however, will be superseded by the corresponding RTX 2070 SUPER and RTX 2080 SUPER, with an additional RTX 2060 SUPER offered to compete with AMD's RX 5700 ($399 for NVIDIA's new RTX 2060 SUPER vs. $379 for the AMD RX 5700, which is sandwiched at the low end by the RTX 2060 at $349). The RTX 2070 SUPER will be positioned at a higher price point than AMD's upcoming RX 5700 XT ($499 vs. $449), which should put it mildly ahead in performance - just today we've seen benchmarks showing AMD's RX 5700 XT trading blows with the non-SUPER RTX 2070. The NVIDIA RTX 2080 SUPER will get improved performance as well as a price drop, down to $699 from the original's $799 (exorbitantly high compared to the GTX 1080's $549).
Source:
Videocardz
152 Comments on NVIDIA RTX SUPER Lineup Detailed, Pricing Outed
Sure, there are a few Nvidia halo cards that have pushed prices up, but below that, Nvidia and AMD have been toe to toe, like they are now, all the time. What is really happening here is that AMD is riding along on Nvidia's price hikes with much smaller GPUs, and even though that might help their bottom line a bit, it certainly does not help us - it's actually the polar opposite of what Nvidia does with the larger Turing dies. AMD right now does not innovate, does not bring absolute performance up to a new level, and does not have a value option except in its 2-3 year old leftovers - and probably has a better margin on Navi than Nvidia has on Turing from the 2060 and up.
It's easy to shit on Nvidia (not you per se) for pushing the envelope, but really? And you know that even I don't like the RT nonsense in GPUs... Small difference: it's not 1999 anymore, we have 20 years of graphics development to work with and can get almost similar results with much less horsepower. In those 20 years we also saw production costs for games explode, and market demand did the same. With that demand, the current state of graphics is really good already. Any new technology is fighting an uphill battle, while back in 1999 even a blind man could see there was a lot to improve. And then there's that nasty little bugger called Moore's Law and the limited potential for shrinks.
Honestly, they can price that halo card up to the moon; it's still better than nothing. Even the 2080 Ti is helping the trickle-down of performance. But releasing 'plenty fast' sub-top-end cards does not, and we see proof of that right now.
The TDP of the 5700 is around 180W (a 250mm^2 chip).
The 2080 is what, 30%-ish faster than that at 545mm^2 (minus the node advantage)?
Hardly something unreachable, even ignoring the high-yield rumors. Nvidia cannot move to 7nm overnight, for starters.
Elaborate why AMD "can't" catch nVidia's "top end" please.
Right now Nvidia offers RTX 2080 at $700 and 215W, and RTX 2080 Ti costing >$1000 and 250W.
Competing with these will be hard enough, but remember that Navi 2x will primarily compete with the successor of Turing on 7nm, and I assume by that time Nvidia will push down those performance tiers and improve efficiency further.
But as @efikkan points out, TDP budget is going to be a problem once again. 7nm doesn't change that all that much, and if you remove the node and just look at architecture AMD still has work to do. Perf/watt is still a thing and again, we're only even comparing this all to OLD Nvidia stuff - while Navi 20 is yet to release. Timing. Time to market. Relevance. Did you seriously think Nvidia is just now taking a look at what to do with 7nm? I surely hope not... If you were, be ready for another Kepler refresh >>> Maxwell curb stomp because that is very likely the jump we will see there.
The reason people with a 300-400 card budget are looking up at the halo cards is because that will indicate how worthwhile that 300-400 dollar purchase really is. After all, if performance just about flatlines after, say, a 2060, why would you spend 700-800 on the 2080? At the same time, today's 700-800 card is tomorrow's 300-400 card (simply put).
Progress in the high end matters; it is essential to keep the market moving forward. What we are seeing since Turing is not that, and the result is price and performance stagnation. Since Navi will be too late to even matter in that sense, even Navi 20 catching up to the 2080 Ti is unlikely to make a difference - unless, again, AMD is willing to play the value game they really could play with these GPUs due to their size.
Going from 180W to 280-ish W and doubling the chip size should get one way past a 30%-ish performance bump. I think we'll see a 5800 and 5900 by the end of the year, while Turing's successor would come no earlier than Q2 next year, given Huang's comments.
Besides, even if that Turing successor is good, its pricing hardly will be.
- memory bandwidth: AMD's delta compression is still behind the curve, and they will need a lot of bandwidth to handle 2080 Ti levels of data transfer - something that even the Radeon VII with 16GB of HBM hasn't had to do yet, even though it should be more than capable. Navi carries GDDR6, and we've seen that even HBM-equipped Vega benefits from memory tweaks... The best AMD could achieve on GDDR5 was GTX 1060 6GB performance, give or take. Not exactly a feat.
- we have yet to see a proper GPU Boost implementation from AMD, though I believe Navi does offer one, or at least improves on what came before. But as good as GPU Boost 3.0? Fingers crossed.
- if they go very big and lower clock rates as a result, that will rapidly destroy their die-size advantage and therefore margins; ideally they'd go the other way around: higher clock rates while keeping die size under control. They've only just begun on the 7nm node - exploding die size this early is a huge long-term problem if you intend to remain competitive. Turing's large dies are built on a 12nm node with no future; on 7nm, Nvidia will have a lot of breathing room even with dedicated RT hardware.
- time to market. Nvidia is already releasing the SUPER cards now... and they still have 7nm to work with. So by then, AMD once again has a 280-300W (OC) card with probably a large die fighting Nvidia's sub-top end, which will probably need about 180-210W. History repeats...
I'm finding it hard to be optimistic about this. The numbers don't lie, and unless AMD pulls an architectural rabbit out of the hat, they're always going to lag behind. And note: that is even while Navi completely lacks RT hardware. If the shit really hits the fan, Nvidia could just shrink their die by 20% and nobody would ever notice :rolleyes:
That doesn't at all mean AMD could not do it; for one, they could simply use a wider memory bus. Keep in mind that a sizable part of the aforementioned 180W is consumed by the memory and memory controller, so doubling the chip size should land at around 280-300W, I think (as it was with Vega 64 vs. Polaris - in fact, Vega 64 is more than twice the size of Polaris). The 5700/5700 XT will be available starting July 7, 2019, the bigger guys probably later on, but the most attractive thing about them will be the price.
Obscene margins on the 2080 and beyond mean AMD has lots of space to maneuver: downclocking, bigger dies, dropping prices.
Heck, anything will be better than having that 16GB HBM2 Radeon VII at $699.
So yes, I would agree that Navi's (5700/xt) selling point will be price. And that is another case of history repeats, unfortunately.
It's just a matter of allocating money to the project.
Back to the bigger-chip discussion. The 5700/XT are 40 CU parts.
A 60 CU chip would come in at around 350mm^2 (with a couple of CUs disabled for yield):
40 CU @ 1700 MHz = 8.7 TF
60 CU @ 1700 MHz = 13.1 TF (+50% vs 5700 XT) - at around 250W, perhaps?
60 CU @ 1600 MHz = 12.3 TF (+41% vs 5700 XT)
60 CU @ 1500 MHz = 11.5 TF (+32% vs 5700 XT)
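The figures above follow from RDNA's 64 shaders per CU and two FLOPs per shader per clock (one FMA). A quick sketch, purely illustrative (the hypothetical 60 CU part is the poster's speculation, not an announced product), reproduces the math:

```python
# Peak FP32 throughput estimate for RDNA-style GPUs,
# assuming 64 shaders per CU and 2 FLOPs (one FMA) per shader per clock.
SHADERS_PER_CU = 64
FLOPS_PER_CLOCK = 2

def fp32_tflops(cus: int, clock_mhz: int) -> float:
    """Peak FP32 throughput in TFLOPS for a given CU count and clock."""
    return cus * SHADERS_PER_CU * FLOPS_PER_CLOCK * clock_mhz * 1e6 / 1e12

# Baseline: a 5700 XT-like 40 CU part at 1700 MHz (~8.7 TF).
base = fp32_tflops(40, 1700)

# The speculated 60 CU part at a few clock targets.
for clock in (1700, 1600, 1500):
    tf = fp32_tflops(60, clock)
    print(f"60 CU @ {clock} MHz: {tf:.1f} TF (+{tf / base - 1:.0%} vs 40 CU)")
```

Rounded to one decimal this gives 13.1, 12.3, and 11.5 TF, matching the list above.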
Ain't the outlook quite rosy on team red?
A difference in perception is fine. Time will tell... But I will say my crystal ball has a pretty decent hitrate.
How come a 350mm^2 chip taking on the 2080 with roughly the same power consumption would "fall short"?
It can well fall short in sales, because the clueless buy green.
It's more of a perception thing than anything else - just look at how many refer to Fury as a "power hog" when, in fact, it was on par with the 980 Ti.
More to it, both Sony and Microsoft have promised that upcoming consoles will support RT.
They are very likely to go with 7nm EUV, which further lowers power consumption.