
GeForce GTX 880 ES Intercepted En Route Testing Lab, Features 8 GB Memory?

Joined
Jun 13, 2012
Messages
1,388 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
Just because GK104 was mid-range in the Kepler hierarchy hardly means it warranted mid-range prices.

People seem quick to forget that it launched both cheaper and faster than the competition:
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/1.html

The GTX 680 was the top-of-the-line card for that series, and it used the GK104 chip. It wasn't mid-range at the time it was released.


I don't see why everyone assumes it's a 256-bit memory bus, as that would be a bit of a performance downgrade if they did that. 256-bit was nothing but a rumor, so it's best to leave it as one until proper specs are released.
 
Joined
Jan 13, 2009
Messages
424 (0.07/day)
Just because it makes sense doesn't make it right, though. The listed memory and shader counts don't make sense to me, and point to a dual-GPU card... W1zz is probably bang-on as to why there might be an 8 GB listing, however. I hadn't considered that it might have to do with memory controller testing, and that makes even more sense to me. 3,000 shaders tops the 780 Ti, though.
I didn't say I'd bet the house on it.
 
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
256-bit memory interface only? Can someone convince me why not 512?
Sure... what is the bandwidth of a 512-bit bus running at 1250 MHz versus a 256-bit bus running at 1750 MHz?
(Answer: in the same ballpark.)

They are, for the most part, two different ways of getting to the same thing: making a 512-bit bus is more expensive, but the 1250 MHz-rated GDDR5 it uses is cheaper, versus a cheaper-to-make bus paired with more expensive RAM ICs.
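For anyone who wants to check that arithmetic, here is a minimal sketch. It assumes the quoted MHz figures are GDDR5 command clocks (so the effective per-pin rate is 4x the clock) and that peak bandwidth is simply bus width in bytes times the data rate; how close the two approaches land depends on the exact clocks chosen. The era's real 384-bit GTX 780 Ti (336 GB/s) and 512-bit R9 290X (320 GB/s) are an example of a narrower-but-faster and a wider-but-slower setup ending up in the same ballpark.

```python
# Peak GDDR5 bandwidth, back-of-the-envelope (a sketch, not vendor data).
# Assumption: the quoted MHz values are GDDR5 command clocks, so the
# effective data rate per pin is 4x that clock (quad-pumped GDDR5).

def bandwidth_gb_s(bus_width_bits: int, memory_clock_mhz: float) -> float:
    """Peak theoretical bandwidth in GB/s for a GDDR5 memory interface."""
    data_rate_gbps = memory_clock_mhz * 4 / 1000      # effective Gbps per pin
    return (bus_width_bits / 8) * data_rate_gbps      # bytes per transfer * rate

# The two hypothetical configurations from the post above:
print(bandwidth_gb_s(512, 1250))   # 320.0 GB/s
print(bandwidth_gb_s(256, 1750))   # 224.0 GB/s

# Real cards of the era, from their published specs:
print(bandwidth_gb_s(384, 1750))   # 336.0 GB/s -- GTX 780 Ti (7 Gbps, 384-bit)
print(bandwidth_gb_s(512, 1250))   # 320.0 GB/s -- R9 290X   (5 Gbps, 512-bit)
```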
 
Last edited:
Joined
May 5, 2014
Messages
134 (0.03/day)
Location
Hà Nội, Việt Nam
System Name Acer Aspire V3-571G
Processor Core i3 3110M Dual Core 2.4 GHz Ivy Bridge
Motherboard Intel HM77
Memory 4Gb DDR3 1333 MHz Dual Channel
Video Card(s) Intel HD Graphics 4000/NVIDIA GeForce GT630M
Storage Kingston V300 120Gb SATA3 SSD
Display(s) Asus VS239H 23-inch AH-IPS Monitor
Audio Device(s) Realtek HD with Dolby™ Digital Plus
Mouse Logitech G502 Proteus Core Tunable Gaming Mouse + Corsair MM400 Hard Plastic Gaming Mouse Mat
Keyboard Logitech G310 Atlas Dawn Mechanical Gaming Keyboard | Romer-G Switches by Omron and Logitech
Software Windows 10 Pro x64
Benchmark Scores Nah, nevermind me then xD
Sure... what is the bandwidth of a 512-bit bus running at 1250 MHz versus a 256-bit bus running at 1750 MHz?
(Answer: in the same ballpark.)

They are, for the most part, two different ways of getting to the same thing: making a 512-bit bus is more expensive, but the 1250 MHz-rated GDDR5 it uses is cheaper, versus a cheaper-to-make bus paired with more expensive RAM ICs.

I'm down with this one. Thanks, man! I hope I can learn more from you.
 
Joined
Aug 16, 2010
Messages
1 (0.00/day)
256-bit memory interface only? Can someone convince me why not 512?

Yes, I also vote for an affordable 512-bit memory bus, 8 GB graphics card; however, NVIDIA may not be willing to come to the table. Instead, they continue to drag the market, and gamers along with it.

NVIDIA knows a 512-bit memory bus would destroy any and all games at ultra settings even with just a 2 GB graphics card; that is exactly why they will keep the 512-bit bus only for their flagship products.

512-bit = more heat, which is why I think 8 GB of 256-bit RAM is the route they are going.

8 GB of GDDR5 on a graphics card is long overdue, and these cards will equip gamers for 3K and 4K gaming. Let's not forget that custom GPU resolution scaling in existing drivers lets you play at resolutions much higher than your monitor natively supports. Other people, like 3D artists who use software such as Lumion, LightWave 3D, Unreal Engine 4, or the Adobe CS suite, will benefit greatly from these 8 GB cards!

I vote for 8 GB of RAM any time. They are preparing for smooth frame rates on next-generation game engines.
 
Last edited by a moderator:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
NVIDIA knows a 512-bit memory bus would destroy any and all games at ultra settings even with just a 2 GB graphics card; that is exactly why they will keep the 512-bit bus only for their flagship products.
No, it's because memory controllers add die complexity and size. The memory bus also needs to balance the core components. That is why you don't see a 512-bit (or 384-bit, for that matter) bus used in ANY GPU except the largest die of an architecture.
Care to name ANY GPU, regardless of vendor, that wasn't the flagship of its architecture yet had a high bus width?
512-bit = more heat.
No. Transistor density is actually lower in the uncore (memory controllers, cache, I/O, etc.) than in the core. The only reason high-bus-width GPUs use more power is that they are large pieces of silicon with more cores than mainstream/entry-level GPUs.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Yes, I also vote for an affordable 512-bit memory bus, 8 GB graphics card; however, NVIDIA may not be willing to come to the table. Instead, they continue to drag the market, and gamers along with it.

NVIDIA knows a 512-bit memory bus would destroy any and all games at ultra settings even with just a 2 GB graphics card; that is exactly why they will keep the 512-bit bus only for their flagship products.

512-bit = more heat, which is why I think 8 GB of 256-bit RAM is the route they are going.

8 GB of GDDR5 on a graphics card is long overdue, and these cards will equip gamers for 3K and 4K gaming. Let's not forget that custom GPU resolution scaling in existing drivers lets you play at resolutions much higher than your monitor natively supports. Other people, like 3D artists who use software such as Lumion, LightWave 3D, Unreal Engine 4, or the Adobe CS suite, will benefit greatly from these 8 GB cards!

I vote for 8 GB of RAM any time. They are preparing for smooth frame rates on next-generation game engines.
You can make up for a smaller bus with higher clocks, which may be what NVIDIA is going for here, since that's how they have done it before; in fact, both AMD and NVIDIA make that sort of trade every now and then. Plus, these are only rumors, and rumors change with time. The 8 GB itself is what will be king if this turns into the real GTX 880, because enough RAM to run Ultra HD setups is where NVIDIA has been falling behind; 3 GB was not cutting it this round.
No, it's because memory controllers add die complexity and size. The memory bus also needs to balance the core components. That is why you don't see a 512-bit (or 384-bit, for that matter) bus used in ANY GPU except the largest die of an architecture.
Care to name ANY GPU, regardless of vendor, that wasn't the flagship of its architecture yet had a high bus width?
Pfft, OK. Try the HD 6850 and 6870: both had a 256-bit bus, just like the HD 6950 and 6970. LINK1, link2, link3.
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
The 8 GB itself is what will be king if this turns into the real GTX 880, because enough RAM to run Ultra HD setups is where NVIDIA has been falling behind; 3 GB was not cutting it this round.

Except that it was. Ultra HD is defined as 4K and up; do you seriously think any current GPU will run out of VRAM before it runs out of processing power at that high a resolution?

Pfft, OK. Try the HD 6850 and 6870: both had a 256-bit bus, just like the HD 6950 and 6970. LINK1, link2, link3.

The HD 6850 and HD 6870 were basically revised versions of the HD 5850 and HD 5870, AMD's former flagships. Plus, AMD switched from VLIW5 to VLIW4 between Barts and Cayman.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
The HD 6850 and HD 6870 were basically revised versions of the HD 5850 and HD 5870, AMD's former flagships. Plus, AMD switched from VLIW5 to VLIW4 between Barts and Cayman.
But Barts was not the flagship of that generation; Cayman was, and Barts still had a 256-bit bus just like it.

Except that it was. Ultra HD is defined as 4K and up; do you seriously think any current GPU will run out of VRAM before it runs out of processing power at that high a resolution?
4K is what I was referring to; at that resolution 3 GB is already maxing out, which makes the lead the GTX 780 Ti has over the 290X smaller than it is at lower resolutions. In multi-GPU setups the 290X even takes the lead in many situations, or keeps the average difference within a few FPS. The new EVGA GTX 780 Ti 6 GB edition solves that problem outright, or this next-gen Maxwell will.
 
Last edited:
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
But Barts was not the flagship of that generation; Cayman was, and Barts still had a 256-bit bus just like it.

Because it was a repackaged flagship product. By that same logic, a GTX 760 or 770 also somewhat proves the point.

4K is what I was referring to; at that resolution 3 GB is already maxing out, which makes the lead the GTX 780 Ti has over the 290X smaller than it is at lower resolutions. In multi-GPU setups the 290X even takes the lead in many situations, or keeps the average difference within a few FPS. The new EVGA GTX 780 Ti 6 GB edition solves that problem outright, or this next-gen Maxwell will.

Prove it. Show me a single benchmark of the 6 GB 780 Ti vastly outperforming the 3 GB version. I cannot find a single one. For that matter, find me a good example of any card offering significant performance gains from doubling the VRAM, period. Because I can show you dozens of benchmarks saying it makes no difference.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Because it was a repackaged flagship product. By that same logic, a GTX 760 or 770 also somewhat proves the point.
I'm a bit confused by your wording. Yes, it was repackaged, but the highest-performing card of that generation was Cayman, which at the end of the day was still the VLIW architecture. Hawaii vs. Tahiti could be viewed the same way, in that they are both GCN, yet the revision number still seems to be a point of confusion: it's considered either GCN 1.1 or 2.0 depending on where you look. But at the end of the day, they are all still GCN.


Prove it. Show me a single benchmark of the 6 GB 780 Ti vastly outperforming the 3 GB version. I cannot find a single one. For that matter, find me a good example of any card offering significant performance gains from doubling the VRAM, period. Because I can show you dozens of benchmarks saying it makes no difference.
OK, link. It's very game-dependent, but you can see the 290X does use beyond 3 GB of memory in games like Crysis 3. As for a 6 GB 780, that's kind of hard to show, since they are still pretty new to the market.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
No, it's because memory controllers add die complexity and size. The memory bus also needs to balance the core components. That is why you don't see a 512-bit (or 384-bit, for that matter) bus used in ANY GPU except the largest die of an architecture.
Care to name ANY GPU, regardless of vendor, that wasn't the flagship of its architecture yet had a high bus width?
Pfft, OK. Try the HD 6850 and 6870: both had a 256-bit bus, just like the HD 6950 and 6970. LINK1, link2, link3.
Pfft. Try again. This time take your time reading what's written. I've added a subtle hint to help you.

--------------------------------------------------------------------------------------------------------


BTW, if you consider 256-bit some kind of pinnacle of bus width, then I'm sure GT 230 owners amongst others would be truly surprised.
...and why prattle on about Cayman? Every man and his dog knows that the HD 6970 was bandwidth-starved, and that was a principal reason why Tahiti added IMCs. From Anand's Tahiti review:
As it turns out, there’s a very good reason that AMD went this route. ROP operations are extremely bandwidth intensive, so much so that even when pairing up ROPs with memory controllers, the ROPs are often still starved of memory bandwidth. With Cayman AMD was not able to reach their peak theoretical ROP throughput even in synthetic tests, never mind in real-world usage.

Except that it was. Ultra HD is defined as 4K and up; do you seriously think any current GPU will run out of VRAM before it runs out of processing power at that high a resolution?
Some people on the internet said it's true, so some other people believe it. The main differences between AMD's and NVIDIA's architectures regarding 4K are raster throughput and its ratio to the number of compute units (or SMXs in NVIDIA's case), instruction scheduling, latency, and cache setup.
Raw bandwidth and framebuffer numbers don't take into account the fundamental differences between the architectures, which is why a 256-bit GK104 with 192 GB/s of bandwidth can basically live in the same performance neighbourhood as a 384-bit Tahiti with 288 GB/s, within the confines of a narrow gaming focus.
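As a quick sanity check on those two figures, a sketch that assumes plain GDDR5 and peak bandwidth = bus width in bytes times per-pin data rate: inverting the formula shows both parts run 6 Gbps memory, so the 1.5x bandwidth gap between them comes entirely from the wider bus rather than faster memory chips.

```python
# Invert the peak-bandwidth formula to see what per-pin data rate the
# quoted figures imply (a sketch; assumes plain GDDR5, no compression).

def implied_data_rate_gbps(bandwidth_gb_s: float, bus_width_bits: int) -> float:
    """Effective per-pin data rate (Gbps) implied by a peak bandwidth figure."""
    return bandwidth_gb_s / (bus_width_bits / 8)

print(implied_data_rate_gbps(192, 256))   # 6.0 Gbps -> GK104 (GTX 680)
print(implied_data_rate_gbps(288, 384))   # 6.0 Gbps -> Tahiti (HD 7970 GHz Ed.)
# Same 6 Gbps memory on both; the 1.5x difference in raw bandwidth is purely
# the wider bus, yet the gaming gap between the two is much smaller than 1.5x.
```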
 
Last edited:
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Pfft. Try again. This time take your time reading what's written. I've added a subtle hint to help you.

BTW, if you consider 256-bit some kind of pinnacle of bus width, then I'm sure GT 230 owners amongst others would be truly surprised.
...and why prattle on about Cayman? Every man and his dog knows that the HD 6970 was bandwidth-starved, and that was a principal reason why Tahiti added IMCs. From Anand's Tahiti review:
Pfft, it was the highest of that architecture at the time. Your quote said:
Care to name ANY GPU, regardless of vendor, that wasn't the flagship of its architecture yet had a high bus width?
It's old, get over it. You just said that, and it was the HIGHEST of the VLIW architecture, meaning you're wrong. Arguing that it's not high by today's standards doesn't mean anything when we're talking about a card from a couple of years ago, especially since a 256-bit bus is still being used on many mid-range cards these days. The highest from NVIDIA in that generational battle was the 384-bit bus on the GTX 580, for a single GPU at that point.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Pfft, it was the highest of that architecture at the time. Your quote said:
No, it's because memory controllers add die complexity and size. The memory bus also needs to balance the core components. That is why you don't see a 512-bit (or 384-bit, for that matter) bus used in ANY GPU except the largest die of an architecture.
Care to name ANY GPU, regardless of vendor, that wasn't the flagship of its architecture yet had a high bus width?
My god, did you fail learning at school? Why prattle on about the HD 6970 when I was speaking of second-tier and lower GPUs? The quote you yourself quoted makes that abundantly clear.
:shadedshu: :roll:

You can hone your "refuting points that aren't being put forward" skills with someone else. I see it as lazy, boring, and counterproductive.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
My god, did you fail learning at school? Why prattle on about the HD 6970 when I was speaking of second-tier and lower GPUs? The quote you yourself quoted makes that abundantly clear.
:shadedshu: :roll:

You can hone your "refuting points that aren't being put forward" skills with someone else. I see it as lazy, boring, and counterproductive.
The HD 6870 is a second-tier GPU and had a 256-bit bus, the same as the HD 6970, which was the flagship of that generation. Nice try changing the subject again...
Care to name ANY GPU, regardless of vendor, that wasn't the flagship of its architecture yet had a high bus width?
Try the HD 6850 and 6870: both had a 256-bit bus, just like the HD 6950 and 6970. LINK1, link2, link3.
Apparently reading is not your strong suit.
6870 = 256-bit
6970 = 256-bit
The 6870 is not the highest of that generation; the 6970 is, and that was high back then... The only card with a wider bus at that generational point was its competitor, the GTX 580.
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
OK, link. It's very game-dependent, but you can see the 290X does use beyond 3 GB of memory in games like Crysis 3. As for a 6 GB 780, that's kind of hard to show, since they are still pretty new to the market.

There are plenty of games that use more than 3 GB of VRAM; it doesn't mean you will see any real performance boost from adding more of it. Case in point: http://hexus.net/tech/reviews/graphics/43109-evga-geforce-gtx-680-classified-4gb/?page=7 . BF3 can use more than 3 GB of VRAM at 5760x1080, and the difference between a 4 GB and a 2 GB GTX 680 is nonexistent (the 4 GB version is also clocked a bit higher, so take any gains with a grain of salt). Here's exactly what they said about Crysis 2 in their review:
Of more interest is the 2,204MB framebuffer usage when running the EVGA card, suggesting that the game, set to Ultra quality, is stifled by the standard GTX 680's 2GB. We ran the game on both GTX 680s directly after one another and didn't feel the extra smoothness implied by the results of the 4GB-totin' card.
Just to dispel any notion that I'm cherry-picking, here's Anandtech reaching a similar conclusion. And Guru3D. And oh look, TPU's own review. The only review I found with results in favor of a 4 GB variant of the GTX 680 was LegionHardware testing at 7680x1600, and it was still only getting around 12 fps; in other words, the GPU ran out of power before memory really became a factor.
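To put some rough numbers behind that conclusion, here is a back-of-the-envelope sketch. It assumes 32-bit colour and three full-screen buffers; real games also allocate shadow maps, G-buffers and, above all, texture assets, which dominate actual VRAM use. The raw render targets at surround or 4K resolutions only amount to tens of megabytes, which is why raising the resolution alone rarely turns a 2-3 GB card into a capacity-limited one before the GPU itself runs out of horsepower.

```python
# Back-of-the-envelope render-target memory at various resolutions (a sketch;
# texture assets, shadow maps and post-processing buffers are not counted and
# are what actually fill multi-gigabyte framebuffers).

def render_target_mb(width: int, height: int,
                     bytes_per_pixel: int = 4, buffers: int = 3) -> float:
    """Memory, in MB, for `buffers` full-screen targets at the given resolution."""
    return width * height * bytes_per_pixel * buffers / 1024**2

for name, (w, h) in {
    "1920x1080 (1080p)":    (1920, 1080),
    "5760x1080 (surround)": (5760, 1080),
    "3840x2160 (4K UHD)":   (3840, 2160),
}.items():
    print(f"{name}: {render_target_mb(w, h):.0f} MB")
# ~24 MB, ~71 MB and ~95 MB respectively -- tens of megabytes, not gigabytes.
```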
 