
NVIDIA GeForce RTX 2060 Founders Edition Pictured, Tested

You are saying a 2080 Ti should be $600? Because they "doubled" the price? I know you are going to say "it was a metaphor", and that's where the problem comes in. It's NOT double; at most I would say they charged 10-25% more than it's worth, and 10-25% is NOT double. That's how customer satisfaction gets distorted: by misleading the consumer market and saying it's absolutely not worth it, when in reality it's nothing like what you said.
No, that's not the comparison I wish to make. The 2080 Ti is way more powerful than a 2060, whatever memory setup you consider.

A 2080 Ti should cost ~$800-900 IMO, and a 2080 $600.
If AMD were in the competition, I think Nvidia's prices would be much closer to that range.
 
People compare this to the price of the GTX 1060, but forget the RX 480 was on the market at the same time to keep prices a bit lower! Basically, the closest competition to the RTX 2060 will be the RX Vega 56 (if we believe the leaks), which still costs upwards of $400-$450, apart from the occasional promotion.

Unless AMD pulls something out of their hat in January, $350 to $400 for the RTX 2060 will be in tune with what AMD also offers! Nvidia, with their dominant position, is not interested in disrupting the market on price/performance.

Nice to see a bunch of couch GPU designers and financial analysts who know better than a multi-million-dollar GPU company about both technology and pricing. It is called capitalism for a reason: no competition means Nvidia has free say on how much they charge for their cards. You don't like it? Then don't buy; good for you. Someone else likes it, they buy, and it is entirely their own business. NVIDIA is "greedy"? Sure, yeah, they'd better f*cking be greedy. They are a for-profit company, not a f*cking charity.

Good to see a few people out there are onto it; probably others too, I just haven't quoted you all. In the absence of competition at certain price points, which AMD has generally been able to provide in the low/mid/upper-midrange (and often top tier) segments for some time, Nvidia simply has the ability to charge a premium for premium performance. Add to that the fact that the upper-end RTX chips are enormous and use newer, more expensive memory, and yeah, you find them charging top dollar for them, and so they should in that position.

As has been said, don't like it? Vote with your wallet! I sure have. I bought a GTX 1080 at launch and ~2.5 years later I personally have no compelling reason to upgrade. That comes down to my rig, screen, available time to game, what I play, price/performance, etc.; add it all together and that equation is different for every buyer.

Do I think the 20 series RTX is worth it? Not yet, but I'm glad someone's doing it. I've seen BFV played with it on, and I truly hope ray tracing is in the future of gaming.

My take is that when one or both of these two things happen, prices will drop, perhaps by a negligible amount, perhaps significantly:

1. Nvidia clears out all (or virtually all) 10 series stock, which the market still seems hungry for, partly because many of those cards offer more than adequate performance for a given consumer's needs.
2. AMD answers the 20 series' upper-level performance, or releases cards matching 1080/2070/Vega performance at lower prices (or again, both).
 
Yeah, but you're not the one making them and having to recoup manufacturing and R&D costs. And for the record, 2080s are not far from the $600 you mentioned.
My guess would be at least some of the R&D has been covered by Volta. It's the sheer die size that makes Turing so damn expensive.
If Nvidia manages to tweak the hardware for their 7nm lineup, then we'll have a strong proposition for DXR. Otherwise, we'll have to wait for another generation.
 
My guess would be at least some of the R&D has been covered by Volta.
Maybe, but how many Volta cards have they actually sold? Even so, Volta and Turing are not the same. They have similarities, but are different enough that a lot of R&D is unrelated and doesn't cross over.
It's the sheer die size that makes Turing so damn expensive.
That is what I was referring to with manufacturing costs. Pricey dies per wafer, even if you manage a high yield of usable dies. That price goes way up if you can't manage at least an 88% yield, which will be challenging given the total size of a functional die.
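To put rough numbers on the die size argument, here is a minimal back-of-the-envelope sketch. The wafer price, defect density, simple Poisson yield model, and the two example die areas (roughly GP104-sized vs. roughly TU102-sized) are all my own illustrative assumptions, not figures from this thread or from Nvidia:

```python
# Back-of-the-envelope die cost sketch. All numbers below are assumptions
# for illustration only (not Nvidia's actual costs or yields).
import math

WAFER_DIAMETER_MM = 300        # standard 300 mm wafer
WAFER_COST_USD = 6000          # assumed wafer price, illustrative only
DEFECT_DENSITY_PER_CM2 = 0.1   # assumed defect density

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation: gross dies per wafer minus edge losses."""
    radius = WAFER_DIAMETER_MM / 2
    wafer_area = math.pi * radius * radius
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def yield_fraction(die_area_mm2: float) -> float:
    """Simple Poisson yield model: bigger dies catch more defects."""
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2)
    return WAFER_COST_USD / good_dies

# Compare a mid-size die (~314 mm^2, roughly GP104-sized) with a huge one
# (~754 mm^2, roughly TU102-sized): fewer candidates per wafer AND a lower
# yield on each, so cost per good die climbs much faster than linearly.
for area in (314, 754):
    print(f"{area} mm^2: ~{dies_per_wafer(area)} dies/wafer, "
          f"{yield_fraction(area):.0%} yield, "
          f"~${cost_per_good_die(area):.0f} per good die")
```

Whatever the exact inputs, the shape of the curve is the point: doubling the die area more than doubles the cost of each working chip.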
 
Maybe, but how many Volta cards have they actually sold? Even so, Volta and Turing are not the same. They have similarities, but are different enough that a lot of R&D is unrelated and doesn't cross over.

Well, Volta is only for professional cards. The Quadro goes for $9,000; God knows how much they charge for a Tesla.
And as for differences, I'm really not aware of many, save for the tweaks Nvidia made to make Turing better suited for DXR (and probably less suited for general compute). I'm sure Anand did a piece highlighting the differences (and I'm sure I read it), but nothing striking has stuck with me.

That said, yes, R&D does not usually pay off after just one iteration. I was just saying they've already made some of that back.
 
Nvidia clears out all (or virtually all) 10 series stock

They could certainly just restock the 10 series at current prices and leave the 20 series where it is, continuing this pricing model until AMD releases something. Lord help us all if it doesn't compete.
 
No, the first Titan was released along with the 700 series.
The top model of the 600 series was the GTX 690, having two GK104 chips.

Yes, the original Titan was released with the 7 series, but both use the Kepler architecture. In fact, the 680 and 770 are identical, save for a small clock speed increase in favor of the 770 (1045 vs 1006 MHz). You can even flash a 680 to a 770 and vice versa (if the cards use the same PCB, like a reference design).

I have to correct you there.
GK100 was bad, and Nvidia had to do a fairly last-minute rebrand of the "GTX 670 Ti" into the "GTX 680" (I remember my GTX 680 box had stickers over all the product names). The GK100 was only used for some compute cards, but the GK110 was a revised version, which ended up in the GTX 780 and was pretty much what the GTX 680 should have been.

You have to remember that Kepler was a major architectural redesign for Nvidia.

No. GK100 = GK110. The reason for the extra "1" is the name change from the 6 series to the 7 series - to make it look like a new product. Kepler also has GK2xx-branded chips, which do contain small architectural improvements, mostly to the scheduler and power efficiency. Again, and for the last time: the 680 is NOT GK100 - it's GK104. Nvidia did not release any GPU with the GK100 codename, not even in the professional market. There were rumors, and the tech press did speculate that GK100 would be reserved for the (then) new Tesla, but that never happened. This is the complete list of Kepler GPUs, both 6 and 7 series, including the Titan, Quadro and Tesla cards:
  • Full GK104 - GTX 680, GTX 770, GTX 880M and several professional cards.
  • Cut down GK104 - GTX 660, 760, 670, 680M, 860M and several professional cards
  • GK106 - GTX 650 Ti Boost, 650 Ti, 660, and several mobile and pro cards.
  • GK107 - GTX 640, 740, 820 and lots of mobile cards
  • GK110 - GTX 780 (cut-down GK110), GTX 780 Ti and the original Titan, as well as the Titan Black, Titan Z and loads of Tesla / Quadro cards like the K6000
  • GK208 - entry level 7 series and 8 series cards, both Geforce and Quadro branded
  • GK208B - entry level 7 series and 8 series cards, both Geforce and Quadro branded
  • GK210 - Revised and slightly cut-down version of the GK100/GK110. Launched as the Tesla K80
  • GK20A - GPU built into the Tegra K1 SoC
I know from a trustworthy source (an Nvidia board partner employee) that Nvidia had no issues whatsoever with the GK100. In fact, internal testing showed what a huge leap in performance Kepler was over Fermi. This is THE REASON Nvidia decided to launch the GK104 mid-range chip as the GTX 680 - the GK104 is 30 to 50% faster than the GF100/GF110 used in the GTX 480 and 580. Some clever people in management came up with this marketing stunt: spread Kepler over two series of cards, the 600 and 700 series, release the GK104 first, and save the full GK100 (GK110) for the later 700 series, launching it as a premium product and creating a new market segment with the 780 Ti and the original Titan. GK110 is simply the name chosen for launch, replacing the GK100 moniker, mainly to confuse savvy consumers and the tech press. As Nvidia naming schemes go, GK110 should have been an entry-level chip: GK104 > GK106 > GK107 - the smaller the last number, the larger the chip. The only Kepler revision is named GK2xx (see above) and only includes entry-level cards.
 
While true, GK110 had two versions too: GK110 and GK110B; the latter could clock a bit higher. All GTX 780 "AIB GHz editions" used the GK110B version of that chip. Going back to that ancient history, everybody knows the GTX 780 Ti has aged pretty badly, mostly because of its 3 GB of VRAM. But how has the 6 GB version of the GTX 780 aged?
 
I told y’all the power plug was on the side lol
 