Monday, September 17th 2018
NVIDIA Segregates Turing GPUs; Factory Overclocking Forbidden on the Cheaper Variant
While working on GPU-Z support for NVIDIA's RTX 20-series graphics cards, we noticed something curious: each GPU model has not one, but two device IDs assigned to it. A device ID is a unique identifier that tells Windows which specific device is installed, so it can select and load the correct driver. It also tells the driver which commands to send to the chip, as these vary between GPU generations. Last but not least, the device ID can be used to enable or lock certain features, for example in the professional space. Two device IDs per GPU is very unusual. For example, all GTX 1080 Ti cards, whether reference or custom design, are marked as 1B06, while the Titan Xp, which uses the same physical GPU, is marked as 1B02. NVIDIA has always used just one ID per SKU, no matter whether custom design, reference, or Founders Edition.
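For readers unfamiliar with what a device ID looks like in practice, here is a minimal Python sketch that pulls the PCI vendor and device ID out of a Windows PnP hardware ID string. The 10DE vendor ID (NVIDIA) and the 1B06/1B02 device IDs are the Pascal examples mentioned above; the exact hardware ID string is shortened for illustration.

```python
import re

# The Pascal device IDs mentioned above (NVIDIA's PCI vendor ID is 10DE).
KNOWN_DEVICE_IDS = {
    "1B06": "GeForce GTX 1080 Ti",
    "1B02": "Titan Xp",
}

def parse_pnp_device_id(pnp_id: str):
    """Extract the PCI vendor and device ID from a Windows PnP hardware ID string."""
    match = re.search(r"VEN_([0-9A-F]{4})&DEV_([0-9A-F]{4})", pnp_id, re.IGNORECASE)
    if not match:
        raise ValueError(f"Not a PCI hardware ID: {pnp_id}")
    return match.group(1).upper(), match.group(2).upper()

# Example hardware ID as Windows would report it for a GTX 1080 Ti
# (subsystem/revision fields shortened for illustration).
ven, dev = parse_pnp_device_id(r"PCI\VEN_10DE&DEV_1B06&SUBSYS_00000000")
print(ven, dev, KNOWN_DEVICE_IDS.get(dev, "unknown"))  # 10DE 1B06 GeForce GTX 1080 Ti
```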
We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU, corresponding to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The -300 variant is designated for cards targeting the MSRP price point, while the -300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, separated only by binning and pricing, which means NVIDIA pre-tests all GPUs and sorts them by properties such as overclocking potential and power efficiency. When a board partner uses a -300 Turing GPU variant, factory overclocking is forbidden; only the more expensive -300-A variants are allowed for that scenario. Both can still be overclocked manually by the user, but the overclocking potential of the lower bin will likely not be as high as that of the higher-rated chips. Separate device IDs could also prevent consumers from buying the cheapest card, with reference clocks, and flashing it with the BIOS from a faster, factory-overclocked variant of that card (think buying an MSI Gaming card and flashing it with the BIOS of the Gaming X).
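To illustrate the bin separation, here is a rough Python sketch of how a validation tool could gate factory overclocking by device ID. Only the two-bin structure itself is confirmed above; the device ID values and clock figures below are placeholders, since NVIDIA has not published the actual Turing IDs per bin.

```python
from dataclasses import dataclass

@dataclass
class TuringBin:
    asic_code: str
    factory_oc_allowed: bool

# Hypothetical mapping: the article confirms two device IDs per SKU,
# one for the -300 bin (no factory OC) and one for the -300-A bin,
# but the actual ID values here are placeholders.
DEVICE_ID_TO_BIN = {
    "1E04": TuringBin("TU102-300-A", factory_oc_allowed=True),   # placeholder ID
    "1E07": TuringBin("TU102-300",   factory_oc_allowed=False),  # placeholder ID
}

def validate_factory_clock(device_id: str, boost_clock_mhz: int, reference_boost_mhz: int) -> bool:
    """Reject a factory overclock (boost above reference) on the non-A bin."""
    gpu_bin = DEVICE_ID_TO_BIN[device_id]
    if boost_clock_mhz > reference_boost_mhz and not gpu_bin.factory_oc_allowed:
        return False  # -300 bin: must ship at reference clocks
    return True

# Example clock figures only, for illustration.
print(validate_factory_clock("1E07", 1635, 1545))  # False: factory OC on the cheaper bin
print(validate_factory_clock("1E04", 1635, 1545))  # True: the -300-A bin allows it
```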
All Founders Edition and custom designs that we could look at so far use the same -300-A GPU variant, which means the device ID is not used to separate Founders Edition cards from custom designs.
90 Comments on NVIDIA Segregates Turing GPUs; Factory Overclocking Forbidden on the Cheaper Variant
Naming schemes, product stacks, and pricing are all artificial and abstract. You need to look at absolutes: die size, transistor count, bus width and VRAM configuration, and the board design. Those are the indicators that tell you how much a GPU costs to produce and what the yields look like. A great example of how things can change is the way NVIDIA used the first Titan: in the end it became a 'budget-friendly' 780, barely losing any performance. That was an x80 product using a Gx100 chip, while the x80 Ti was essentially a full-fat Gx100. Just a year earlier, the same company had used a 104 chip to create both an x70 and an x80. See how these things shift?
You need to get your 'facts' straight.
Yeah, I will say it again: rationalize Nvidia selling low end for $600 all you want. Drink dat Koolaid, bro! Hope that 10-20% performance gain is worth it for those who rationalize this as a "high end" card lol.
High = top of the totem pole. The 2070 won't be half as strong as a full Turing card lol. That means it is barely mid range at best...
I know this is hard to swallow, but the reality is that large dies are costly, and that means the top-end GPU can and will see price changes depending on its size; that can even push it out of the gaming market altogether because it simply isn't profitable to make one for gamers (the history of Titan in a nutshell). Gamers, mind you, who are more concerned with 'top of the totem pole' e-peen than with realistic numbers and facts.
There is a difference between pointing out why something is the way it is and agreeing with it. I've always said Turing and its large dies are a wasteful practice with questionable returns. I would have much rather seen Pascal ported to 12 nm and the die size spent on raw performance. Then we could justify the current price point.
- 815 mm² = V100/T100 (815 is a bigger number than the others!)
- 715 mm² = TU102
- 545 mm² = TU104
- 445 mm² = TU106 (V100 is 80% bigger)
- 300 mm² = TU116 (this is less than half as big as the biggest number)
- <200 mm² = TU118
See the 2070 at the bottom of midrange? Let me look up the definition of "middle" for you: "at an equal distance from the extremities of something; central."
That is my entire point, that the 2070 is half of the performance Nvidia could be bringing to the table right now. I do not call that anything short of what it literally is: half of an Enthusiast card. At best you could compare this to cards like the GTX 660 Ti and R9 380X - half of the top card.
Your argument that "things got more expensive" is also complete BS. It is not this much more expensive. The bloody GTX 580 sold for $499 with a die almost as big as the TU104, and it had TERRIBLE yields on 40 nm at the time. 12 nm has no such yield problems; in fact, it was built for good yields on large dies.
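For reference, a quick Python sketch that works through the ratios quoted in the die-size list above, taking the commenter's figures at face value (some of the chips listed were unannounced at the time, so treat the numbers as approximate):

```python
# Die-size figures as listed in the comment above (approximate, in mm²).
die_sizes = {"V100": 815, "TU102": 715, "TU104": 545, "TU106": 445, "TU116": 300, "TU118": 200}

biggest = max(die_sizes.values())
for chip, area in die_sizes.items():
    print(f"{chip}: {area} mm², {area / biggest:.0%} of the biggest die listed")

# The "80% bigger" remark: 815 / 445 ≈ 1.83, so V100 is roughly 80% larger than TU106.
print(f"V100 vs TU106: {die_sizes['V100'] / die_sizes['TU106']:.2f}x")
```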
You cannot compare die size between nodes! Middle is Middle.
Nvidia is starting to look like Apple. Charging $1,200 for a PHONE!
With automatic boost clocking you sort of get the performance you pay for, based on the cooling and VRM design attached, so overclocking is now mostly just for benchmarks, but it's nice to have the option on hardware I own.
I think they were spoiled by the profit avalanche of the mining trend, and now that it's over they're doing everything they can to milk us for more cash.
They rightly noticed that gamers were still buying their products when prices were so grossly inflated, and they're counting on us to continue doing just that.
The gouge is the new norm. (It's bullshit too.)
I'm ready to never buy into the 20-series GPUs (and beyond) as a way to protest, because voting with your wallet is the most effective response.
Their hobbling of mid-range cards' SLI capabilities was another step in the rape of the gaming market.
I really don't have to own the very best GPUs on the market. Good GPUs will be fine for me. Ones that I can Crossfire together are key for me.
NVIDIA can kiss my ass.
They are priced disproportionately high, and yet... people buy them! So is it that these cards are priced high, or the average cards priced too low?
Well... IMO nvidia is testing this new high pricing. With the glut of inventory they always have, they really have nothing to lose by testing increased pricing "a-la Apple style". If their sales don't suffer, then you'd better bet this will be the new pricing norm.
So consumers themselves determine what price counts as fair market value. Not buying into the new high prices is the only way to avoid massive, permanent price hikes.
Heck, even AMD saw this and apparently changed their plans on their (very expensive to produce) Vega 2 Instinct cards, which they seemingly never thought of selling as consumer cards until they saw nvidia pulling off *massive* margins on their new parts.
I'm doing pretty good for GPUs right now. I have one 1080Ti, one Vega-56, two 1080FEs, two 1070Ti, and two Vega-64s.
Any new purchases will be AMD based until NVIDIA stops with their Reindeer games.