Tuesday, December 25th 2018

NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core-configurations. The company plans to double down - or should we say, triple down - on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!

There are at least two parameters that differentiate the six (that we know of anyway): memory size and memory type. There are three memory sizes, 3 GB, 4 GB, and 6 GB. Each of the three memory sizes comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core-configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
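For perspective, here is a quick back-of-the-envelope enumeration (purely illustrative) of how the rumored parameters multiply out: three memory sizes times two memory types gives the six variants, and the two ASIC classes double that to twelve possible device IDs; per-partner SKU counts then multiply further with clocks and cooler designs.

```cpp
#include <cstdio>

int main() {
    // Rumored RTX 2060 matrix: 3 memory sizes x 2 memory types = 6 variants,
    // doubled by the "A"/"non-A" ASIC classes of TU106 = 12 possible device IDs.
    const int sizesGB[] = {3, 4, 6};
    const char* memTypes[] = {"GDDR6", "GDDR5"};
    const char* asicClasses[] = {"A", "non-A"};

    int deviceIds = 0;
    for (int size : sizesGB) {
        for (const char* mem : memTypes) {
            for (const char* asic : asicClasses) {
                std::printf("RTX 2060 %d GB %s, %s ASIC\n", size, mem, asic);
                ++deviceIds;
            }
        }
    }
    std::printf("Possible device IDs: %d\n", deviceIds);  // prints 12
    return 0;
}
```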
Source: VideoCardz

230 Comments on NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

#201
bug
lexluthermiesterNot everyone.
I would expect a clueless user will just try various presets and settle for one. A more informed user will know how to tweak at least some of the settings. I wouldn't expect many users to just max out everything and refuse to play any other way, any more than I expect drivers to get behind the wheel and just press the pedal to the metal. Users that do that probably don't buy a mid-range card to begin with. But I have seen stranger things.
lexluthermiesterReally? Turning off settings that don't mean much will not result in a game looking like a "potato". Further, turning off or turning down certain settings to maximize performance on a lower-tier card is also not going to have that result either.
It depends on the game. In some titles, "medium" can be hard to distinguish from "ultra". In some titles, "medium" can look significantly worse. And then there's the matter of the game itself. Some give you time to admire the scenery, some do not.
Posted on Reply
#202
lexluthermiester
bugIt depends on the game. In some titles, "medium" can be hard to distinguish from "ultra". In some titles, "medium" can look significantly worse. And then there's the matter of the game itself. Some give you time to admire the scenery, some do not.
Good points all!
Posted on Reply
#203
EarthDog
lexluthermiesterNot everyone.

Really? Turning off settings that don't mean much will not result in a game looking like a "potato". Further, turning off or turning down certain settings to maximize performance on a lower-tier card is also not going to have that result either.
I didn't say everyone. ;)

I didn't buy a PC to have it look worse than a console. Some need to... some choose to, others like ultra. It is what it is.
Posted on Reply
#204
lexluthermiester
EarthDogI didn't buy a PC to have it look worse than a console.
LOLOLOLOLOL!
Posted on Reply
#205
FordGT90Concept
"I go fast!1!11!1!"
I only tend to change graphics settings when a game looks atrocious on first start (especially games defaulting to anything less than my monitor's native resolution). That was the case with Max Payne (it defaulted to 800x600). Naturally, the game didn't know what an RX 590 was, so it defaulted to medium/low settings. That's when I turned everything up to max (hehe), checked the framerate, which was noticeably terrible at around 35 fps, and adjusted MSAA and tessellation down. Obviously the game uses an NVIDIA-biased implementation of tessellation that hasn't been optimized for AMD in the last six years.

With newer games on newer cards, the defaults are usually good enough. It's the older games that don't know what the graphics card is that need tweaking.
Posted on Reply
#206
GhostRyder
bugI'm betting GDDR5 and 6 support is baked in because availability couldn't be predicted. IRL we'll see one or the other taking the lion's share.
That's a fair point I had not considered. Personally, I just hope there really aren't that many versions, lol.
Posted on Reply
#207
Divide Overflow
Cue the threads asking to "unlock" the extra memory in their lower-tier models.
Posted on Reply
#209
B-Real
lexluthermiesterThat's an opinion, and not a very good one. Every RTX card I've installed/used kicks the ever-living snot out of the GTX 10XX/Titan counterparts, to say nothing of Radeon cards. And that's before RTRT is factored into the equation. The only people whining about the price are the people who can't afford them or have to wait an extra bit of time to save money for one. Everyone else is buying them up, which is why they are consistently still selling out, months after release.
So for you, the fact that the 20 series can't even beat the 10 series the way the 900 series beat the 700 series is enough to say it's not the worst price-performance GPU ever made?
lexluthermiesterHere's a set of facts;
1. Every generation of new GPU's get a price increase.
You are simply LYING. Comparing MSRP prices (which is what you have to do, not comparing the previous gen's GPUs after their price drop to the new gen), there was a price decrease of $50-100 in the 700-to-900 switch (where there was a slightly bigger jump in performance), and a price increase of $50-100 in the 900-to-1000 switch, which brought a HUGE performance leap. There was minimal to no price jump with the 600-to-700 switch except for the $150 increase of the 780, and also minimal to no increase in the case of the 500-to-600. And now we are talking about a $100-300 (which in reality was more like $500) price jump. Don't you see how pathetic that is, or are you just an NV employee?

Plus, it wouldn't have been that bad if there had been a minimal price increase for the RTX series, let's say $50 each for the 2070 and 2080. But whatever you say, just check TechPowerUp's poll before the release of RTX, check the stock market, check the general reception from potential customers about the cards, and you will know you are simply lying to yourself, too.
lexluthermiesterThe 2080 cleanly beats out the 1080, and beats out the 1080 Ti if it doesn't match it. Also, RTX offers advancements Pascal can not. The 2080/2080 Ti and RTX Titan are the best on the market. NVIDIA knows this and demands a premium price for it. If you don't want to pay that price, ok, don't buy one. Settle for less.
As some already reacted to this, I have to do that too: Wow, cleanly beats out a 2.5-year-old card by nearly 30%. What a result! The 2080 is 1% faster than the 1080 Ti, which is totally equal in performance, so it doesn't beat it out. Just to remind you: the 1080 beat the 980 Ti by 38% while costing $50 less. The 2080 equals the 1080 Ti while costing the same. And as mentioned before, the 1080 Ti can be OC'd better than the 2080.
lexluthermiesterTry this: turn the MSAA off, turn the shadows down to low, and turn the tessellation back to normal. Granted, you're running a Radeon, but that shouldn't matter too much concerning memory load. Post a screenshot with those settings to see if the memory load drops. The reason I ask is as follows: now that GPU prices have come down for used 1060s, I've been installing them in client PCs and tweaking driver settings to keep performance good. So I know that 1060 3 GB cards are good gaming cards when settings are config'd well.
So you are advising people who want to buy a ~$350+ 3 GB 2060 (which is near the price of the 1070 with 8 GB) to lower settings at FHD. LOL. No other words needed. I hope you only advise your customers Intel-NV rigs. :D

The fact is that, objectively, the only really good points of the RTX series are the Founders Edition's solid cooling solution (in terms of noise and cooling performance) and its neat look (subjective).
Posted on Reply
#210
lexluthermiester
B-RealSo for you, it is enough that the 20 series can't even beat the 10 series the 900 series did with the 700 to say it's not the worst price-performance GPU ever made.
Wow, what a conclusion. Clearly you have looked at the review performance graphs. Well done.
B-RealYou are simply LYING.
Are you sure? Maybe I'm postulating a set of suggestions based on pure hyperbole?
B-RealAs some already reacted to this, I have to do that too
Of course you would. Sure.
B-RealWow, cleanly beats out a 2.5-year-old card by nearly 30%.
It would seem you know how to read like an expert..
B-RealWhat a result! 2080 is 1% faster than the 1080Ti
So 30% is equal to 1%? Is that what you're saying?
B-Realwhich is totally equal in performance
Your math skills are dizzying!
B-Realso it doesn't beat it out.
Ok, sure.
B-RealJust to remind you: 1080 beat the 980 Ti by 38% while costing $50 less. The 2080 equals the 1080Ti while costing the same. And as mentioned before, 1080Ti can be OCd better than the 2080.
Gee, thanks for the reminders. You're very helpful.

@Vayra86
Earlier you said I was making a fool of myself... How are things going on that?
Posted on Reply
#211
Vayra86
lexluthermiesterWow, what a conclusion. Clearly you have looked at the review performance graphs. Well done.

Are you sure? Maybe I'm postulating a set of suggestions based on pure hyperbole?

Of course you would. Sure.

It would seem you know how to read like an expert..

So 30% is equal to 1%? Is that what you're saying?

Your math skills are dizzying!

Ok, sure.

Gee, thanks for the reminders. You're very helpful.

@Vayra86
Earlier you said I was making a fool of myself... How we doing on that?
I'm not seeing much of a change. The topic went right back to shit the moment you started 'moderating' everything posted.

Suffice it to say, I'm out; enjoy yourselves.
Posted on Reply
#212
vip3r011
$250... maybe I can dream of an RTX 2060 3 GB.
Posted on Reply
#213
remixedcat
Nah, NVIDIA needs to have a card with over 30 variations, like the Galaxy S4, lmao
Posted on Reply
#214
RichF
remixedcatNah, NVIDIA needs to have a card with over 30 variations, like the Galaxy S4, lmao
Ask and ye shall receive...

Gigabyte will have at least 40 variants of the GeForce RTX 2060 graphics card according to an EEC (Eurasian Economic Commission) product listing. link

:rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes:

(eyeroll count not gratuitous)
Posted on Reply
#216
bug
RichFAsk and ye shall receive...

Gigabyte will have at least 40 variants of the GeForce RTX 2060 graphics card according to an EEC (Eurasian Economic Commission) product listing. link

:rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes:

(eyeroll count not gratuitous)
Yeah, there aren't 40 cards in there. It's just an assumption based on what could vary between models.
Posted on Reply
#218
FordGT90Concept
"I go fast!1!11!1!"
I talked to someone in the know about the theoretical case of a 32-bit D3D10 game and how that relates to VRAM. Textures practically have to pass through RAM to reach the VRAM. The only way not to do that is via DMA, which is very fringe stuff.

My understanding is that there's a variety of ways to make it crash, because that RAM limitation is absolute.
1) If you try to load too many resources, GART will overflow, crashing the executable.
2) If you try to load a huge asset (depending on conditions, but it could be smaller than 3 GiB in size), it will crash because the RAM can't hold the asset before handing it to the VRAM.
3) If you try to hold too many assets in RAM in transit to VRAM, fail to release them from RAM fast enough, and go over the virtual memory limit, it will crash.

In other words, even under 32-bit D3D10, you're dancing on a razor's edge when dealing with VRAM. VRAM (unless DMA is used, and good luck with that) is practically limited by addressable RAM space.

Coming full circle, this is fundamentally why the Fury X's 4 GiB and the GTX 970's 3.5 GiB were okay a few years ago but not so much now. Any game that might need 64-bit address space is usually 64-bit. The days of claustrophobic memory usage are gone.
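To put rough numbers on that squeeze, here's a minimal sketch of the address-space budget (all figures are assumptions for illustration, not measurements from any game): a 32-bit large-address-aware process tops out at 4 GiB of virtual address space on 64-bit Windows, and everything else the game maps has to share it with whatever asset is staged in RAM on its way to VRAM.

```cpp
#include <cstdio>

int main() {
    // All figures below are illustrative assumptions, not measurements
    // of any particular game or driver.
    const double addressSpaceGiB  = 4.0;  // ceiling for a 32-bit LAA process on 64-bit Windows
    const double codeAndHeapGiB   = 1.0;  // executable, game heap, audio, scripts (assumed)
    const double driverAndGartGiB = 0.5;  // driver mappings / GART aperture (assumed)

    // Whatever is left is the headroom for assets sitting in RAM while
    // in transit to VRAM; blow past it and the process crashes.
    const double stagingHeadroomGiB = addressSpaceGiB - codeAndHeapGiB - driverAndGartGiB;
    std::printf("Staging headroom: ~%.1f GiB\n", stagingHeadroomGiB);  // ~2.5 GiB
    return 0;
}
```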
Posted on Reply
#219
bug
FordGT90ConceptI talked to someone in the know about the theoretical case of a 32-bit D3D10 game and how that relates to VRAM. Textures practically have to pass through RAM to reach the VRAM. The only way not to do that is via DMA, which is very fringe stuff.

My understanding is that there's a variety of ways to make it crash, because that RAM limitation is absolute.
1) If you try to load too many resources, GART will overflow, crashing the executable.
2) If you try to load a huge asset (depending on conditions, but it could be smaller than 3 GiB in size), it will crash because the RAM can't hold the asset before handing it to the VRAM.
3) If you try to hold too many assets in RAM in transit to VRAM, fail to release them from RAM fast enough, and go over the virtual memory limit, it will crash.

In other words, even under 32-bit D3D10, you're dancing on a razor's edge when dealing with VRAM. VRAM (unless DMA is used, and good luck with that) is practically limited by addressable RAM space.

Coming full circle, this is fundamentally why the Fury X's 4 GiB and the GTX 970's 3.5 GiB were okay a few years ago but not so much now. Any game that might need 64-bit address space is usually 64-bit. The days of claustrophobic memory usage are gone.
DMA access used to be everywhere a while back; I'm not sure whether Win10 restricts it somehow (and I wouldn't be surprised if it does).
As for loading huge textures, two things. First, there's this thing called streaming: you don't have to keep the entire thing in RAM at the same time to load it. Second, I'm pretty sure no game uses a 3 GB texture.
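Here's a minimal sketch of that streaming pattern (generic file I/O, nothing engine-specific; the chunk size, the uploadChunk() stand-in, and the asset path are made up for illustration): only one chunk of the asset ever sits in RAM at a time, so the asset's total size never has to fit in the address space.

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Placeholder for handing a chunk to the GPU; in a real engine this would
// be whatever copy/update mechanism the graphics API provides.
void uploadChunk(const char* data, std::size_t bytes) {
    (void)data;
    (void)bytes;
}

void streamAsset(const char* path) {
    constexpr std::size_t kChunk = 64 * 1024 * 1024;  // 64 MiB per read (assumed)
    std::vector<char> buffer(kChunk);                  // the only RAM the asset occupies
    std::ifstream file(path, std::ios::binary);

    while (file) {
        file.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        const std::size_t got = static_cast<std::size_t>(file.gcount());
        if (got == 0) break;
        uploadChunk(buffer.data(), got);  // reuse the same buffer for the next chunk
    }
}

int main() {
    streamAsset("huge_texture.bin");  // hypothetical asset path
    return 0;
}
```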
Posted on Reply
#220
FordGT90Concept
"I go fast!1!11!1!"
Developers use the methods provided to them by APIs like D3D, OpenGL, and Vulkan, which don't use DMA to move the textures. DMA hasn't been used in earnest since before those APIs became common. All of those APIs also support super textures, either for sectioning (using segments of a larger image to reduce HDD/SSD load) or sky domes.

Yes, streaming, but most engines that are big on streaming (like UE4) and are of the era where >4 GiB of VRAM can be used are also already 64-bit, so it's a non-issue. This is where you run into problems with graphics cards that have <4 GiB of VRAM, because the API has to shuffle assets between RAM and VRAM, which translates to stutter.
Posted on Reply
#221
bug
FordGT90ConceptDevelopers use the methods provided to them by APIs like D3D, OpenGL, and Vulkan, which don't use DMA to move the textures. DMA hasn't been used in earnest since before those APIs became common. All of those APIs also support super textures, either for sectioning (using segments of a larger image to reduce HDD/SSD load) or sky domes.
That doesn't prevent said APIs from doing DMA internally.
FordGT90ConceptYes, streaming, but most engines that are big on streaming (like UE4) and are of the era where >4 GiB VRAM can be used are also already 64-bit so it's a non-issue. This is where you run into problems with graphics cards that have <4 GiB VRAM because the API is having to shuffle assets between RAM and VRAM which translates to stutter.
Streaming is the non-naive way to handle large assets in programming.
Posted on Reply
#222
FordGT90Concept
"I go fast!1!11!1!"
bugThat doesn't prevent said APIs from doing DMA internally.
But they don't, and that's the point. Everything in VRAM flows through RAM.

The argument I made before about 32-bit and VRAM being intrinsically linked is effectively true. Now that the 32-bit barrier is gone, VRAM usage has soared in games that can use it.
Posted on Reply