
NVIDIA Explains GeForce RTX 40 Series VRAM Functionality

T0@st

News Editor
Joined
Mar 7, 2023
Messages
2,077 (3.29/day)
Location
South East, UK
NVIDIA receives a lot of questions about graphics memory, also known as the frame buffer, video memory, or "VRAM," so with the unveiling of our new GeForce RTX 4060 Family of graphics cards we wanted to share some insights to help gamers make the best buying decisions for their gaming needs.

What Is VRAM?
VRAM is high-speed memory located on your graphics card.

It's one component of a larger memory subsystem that helps make sure your GPU has access to the data it needs to smoothly process and display images. In this article, we'll describe memory subsystem innovations in our latest generation Ada Lovelace GPU architecture, as well as how the speed and size of GPU cache and VRAM impacts performance and the gameplay experience.



GeForce RTX 40 Series Graphics Cards Memory Subsystem: Improving Performance & Efficiency
Modern games are graphical showcases, and their install sizes can now exceed 100 GB. Accessing this massive amount of data happens at different speeds, determined by the specifications of the GPU, and to some extent your system's other components. On GeForce RTX 40 Series graphics cards, new innovations accelerate the process for smooth gaming and faster frame rates, helping you avoid texture stream-in or other hiccups.

The Importance Of Cache
GPUs include high-speed memory caches that are close to the GPU's processing cores, which store data that is likely to be needed. If the GPU can recall the data from the caches, rather than requesting it from the VRAM (further away) or system RAM (even further away), the data will be accessed and processed faster, increasing performance and gameplay fluidity, and reducing power consumption.

GeForce GPUs feature a Level 1 (L1) cache (the closest and fastest cache) in each Streaming Multiprocessor (SM), up to twelve of which can be found in each GeForce RTX 40 Series Graphics Processing Cluster (GPC). This is followed by a fast, larger, shared Level 2 (L2) cache that can be accessed quickly with minimal latency.

Accessing each successive cache level incurs a latency hit, with the tradeoff being greater capacity. When designing our GeForce RTX 40 Series GPUs, we found a single large L2 cache to be faster and more efficient than the alternatives, such as a small L2 cache paired with a large but slower-to-access L3 cache.



Prior generation GeForce GPUs had much smaller L2 Caches, resulting in lower performance and efficiency compared to today's GeForce RTX 40 Series GPUs.



During use, the GPU first searches for data in the L1 data cache within the SM, and if the data is found in L1 there's no need to access the L2 data cache. If data is not found in L1, it's called a "cache miss", and the search continues into the L2 cache. If data is found in L2, that's called an L2 "cache hit" (see the "H" indicators in the above diagram), and data is provided to the L1 and then to the processing cores.

If data is not found in the L2 cache, an L2 "cache miss", the GPU now tries to obtain the data from the VRAM. You can see a number of L2 cache misses in the above diagram that depicts our prior architecture memory subsystem, which causes a number of VRAM accesses.

If the data's missing from the VRAM, the GPU requests it from your system's memory. If the data is not in system memory, it can typically be loaded into system memory from a storage device like an SSD or hard drive. The data is then copied into VRAM, L2, L1, and ultimately fed to the processing cores. Note that different hardware- and software-based strategies exist to keep the most useful, most frequently reused data present in caches.

Each additional data read or write operation through the memory hierarchy slows performance and uses more power, so by increasing our cache hit rate we increase frame rates and efficiency.
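As a rough illustration of why hit rate matters, here is a toy Python model of average access latency across the hierarchy described above. The hit rates and cycle counts are invented for demonstration purposes; NVIDIA does not publish exact figures.

```python
# Illustrative model of average memory-access latency in a GPU cache
# hierarchy (L1 -> L2 -> VRAM). All numbers below are made up for
# demonstration; they are not NVIDIA's actual latencies or hit rates.

def avg_latency(l1_hit, l2_hit, l1_cyc=30, l2_cyc=200, vram_cyc=600):
    """Expected access latency in cycles, given per-level hit rates."""
    l1_miss = 1.0 - l1_hit
    l2_miss = 1.0 - l2_hit
    return (l1_hit * l1_cyc                      # found in L1
            + l1_miss * l2_hit * l2_cyc          # L1 miss, L2 hit
            + l1_miss * l2_miss * vram_cyc)      # L1 and L2 miss -> VRAM

# A larger L2 raises its hit rate, cutting both average latency and
# the number of trips out to VRAM.
small_l2 = avg_latency(l1_hit=0.50, l2_hit=0.40)   # prior-gen-style L2
large_l2 = avg_latency(l1_hit=0.50, l2_hit=0.85)   # Ada-style large L2
print(f"small L2: {small_l2:.0f} cycles, large L2: {large_l2:.0f} cycles")
```

Even with identical L1 behavior, raising the L2 hit rate sharply reduces the expected latency per access, which is the mechanism behind the frame-rate and efficiency gains described above.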



Compared to prior generation GPUs with a 128-bit memory interface, the memory subsystem of the new NVIDIA Ada Lovelace architecture increases the size of the L2 cache by 16X, greatly increasing the cache hit rate. In the examples above, representing 128-bit GPUs from Ada and prior generation architectures, the hit rate is much higher with Ada. In addition, the L2 cache bandwidth in Ada GPUs has been significantly increased versus prior GPUs. This allows more data to be transferred between the cores and the L2 cache as quickly as possible.

Shown in the diagram below, NVIDIA engineers tested the RTX 4060 Ti with its 32 MB L2 cache against a special test version of RTX 4060 Ti using only a 2 MB L2, which represents the L2 cache size of previous generation 128-bit GPUs (where 512 KB of L2 cache was tied to each 32-bit memory controller).

In testing with a variety of games and synthetic benchmarks, the 32 MB L2 cache reduced memory bus traffic by just over 50% on average compared to the 2 MB L2 cache. See the reduced VRAM accesses in the Ada Memory Subsystem diagram above.

This 50% traffic reduction allows the GPU to use its memory bandwidth 2X more efficiently. As a result, in this scenario, isolating for memory performance, an Ada GPU with 288 GB/sec of peak memory bandwidth would perform similarly to an Ampere GPU with 554 GB/sec of peak memory bandwidth. Across an array of games and synthetic tests, the greatly increased hit rates improve frame rates by up to 34%.
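The bandwidth-efficiency claim is simple arithmetic: if only half as much traffic crosses the memory bus, each GB/s of physical bandwidth does roughly twice the useful work. A quick back-of-the-envelope check, using the figures from the paragraph above:

```python
# Back-of-the-envelope check of the "2X more efficient" claim:
# halving memory-bus traffic doubles the useful work per GB/s.

peak_bw_ada = 288.0          # GB/s peak, RTX 4060 Ti class (per the article)
traffic_reduction = 0.50     # ~50% fewer VRAM accesses with the 32 MB L2

effective_bw = peak_bw_ada / (1.0 - traffic_reduction)
print(f"effective bandwidth: {effective_bw:.0f} GB/s")  # 576 GB/s
# The article's Ampere comparison point is 554 GB/s, i.e. the measured
# traffic reduction in practice is slightly under a full 50%.
```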



Memory Bus Width Is One Aspect Of A Memory Subsystem
Historically, memory bus width has been used as an important metric for determining the speed and performance class of a new GPU. However, the bus width by itself is not a sufficient indicator of memory subsystem performance. Instead, it's helpful to understand the broader memory subsystem design and its overall impact on gaming performance.

Due to the advances in the Ada architecture, including new RT and Tensor Cores, higher clock speeds, the new OFA Engine, and Ada's DLSS 3 capabilities, the GeForce RTX 4060 Ti is faster than the previous-generation, 256-bit GeForce RTX 3060 Ti and RTX 2060 SUPER graphics cards, all while using less power.



Altogether, the tech specs deliver a great 60-class GPU with high performance for 1080p gamers, who account for the majority of Steam users.



The Amount of VRAM Is Dependent On GPU Architecture
Gamers often wonder why a graphics card has a certain amount of VRAM. Current-generation GDDR6X and GDDR6 memory is supplied in densities of 8 Gb (1 GB of data) and 16 Gb (2 GB of data) per chip. Each chip uses two separate 16-bit channels to connect to a single 32-bit Ada memory controller, so a 128-bit GPU can support 4 memory chips, and a 384-bit GPU can support 12 chips (bus width divided by 32). Higher-capacity chips cost more to make, so a balance is required to optimize prices.
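The relationship between bus width, chip density, and total VRAM can be sketched in a few lines. Note that the two-chips-per-controller ("clamshell") mode below is my assumption for how eight chips fit on a 128-bit bus, as on the 16 GB RTX 4060 Ti; the article itself does not name the technique.

```python
# Sketch of how bus width and chip density determine possible VRAM sizes.
# Each GDDR6/GDDR6X chip normally occupies one 32-bit memory controller;
# "clamshell" mode (assumed here) puts two chips on one controller.

def vram_options(bus_width_bits, densities_gb=(1, 2), clamshell=False):
    """Possible total VRAM sizes (GB) for a given memory bus width.

    densities_gb: per-chip capacities, here 8 Gb (1 GB) and 16 Gb (2 GB).
    """
    chips = bus_width_bits // 32          # one chip per 32-bit controller
    if clamshell:
        chips *= 2                        # two chips share one controller
    return [chips * d for d in densities_gb]

print(vram_options(128))                   # [4, 8]   e.g. RTX 4060 Ti 8 GB
print(vram_options(128, clamshell=True))   # [8, 16]  RTX 4060 Ti 16 GB
print(vram_options(192))                   # [6, 12]  RTX 4070 / 4070 Ti
```

This is why a 12 GB model of a 128-bit card is impossible without mixing densities, exactly as the next paragraph explains.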

On our new 128-bit memory bus GeForce RTX 4060 Ti GPUs, the 8 GB model uses four 16 Gb GDDR6 memory chips, and the 16 GB model uses eight 16 Gb chips. Mixing densities isn't possible, preventing the creation of a 12 GB model, for example. That's also why the GeForce RTX 4060 Ti has an option with more memory (16 GB) than the GeForce RTX 4070 Ti and 4070, which have 192-bit memory interfaces and therefore 12 GB of VRAM.

Our 60-class GPUs have been carefully crafted to deliver the optimum combination of performance, price, and power efficiency, which is why we chose a 128-bit memory interface. In short, for a given bus width, the higher-capacity option always doubles the memory.

Do On Screen Display (OSD) Tools Report VRAM Usage Accurately?
Gamers often cite the "VRAM usage" metric in On-Screen Display performance measurement tools. But this number isn't entirely accurate, as all games and game engines work differently. In the majority of cases, a game will allocate VRAM for itself, saying to your system, "I want it in case I need it." But just because it's holding the VRAM doesn't mean it actually needs all of it. In fact, games will often request more memory if it's available.

Due to the way memory works, it's impossible to know precisely what's being actively used unless you're the game's developer with access to development tools. Some games offer a guide in the options menu, but even that isn't always accurate. The amount of VRAM that is actually needed will vary in real time depending on the scene and what the player is seeing.

Furthermore, the behavior of games can vary when VRAM is genuinely used to its max. In some, memory is purged causing a noticeable performance hitch while the current scene is reloaded into memory. In others, only select data will be loaded and unloaded, with no visible impact. And in some cases, new assets may load in slower as they're now being brought in from system RAM.

For gamers, playing is the only way to truly ascertain a game's behavior. In addition, gamers can look at "1% low" framerate measurements, which can help analyze the actual gaming experience. The 1% Low metric - found in the performance overlay and logs of the free NVIDIA FrameView app, as well as other popular measurement tools - measures the average of the slowest 1% of frames over a certain time period.
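The 1% low metric described above is straightforward to compute from logged frame times. This is a minimal sketch of the general idea; tools like FrameView work from their own capture logs, and their exact methodology may differ.

```python
# Minimal sketch of the "1% low" metric: average the slowest 1% of
# frames (by frame time) over a capture window, expressed as FPS.
# Real tools (e.g. NVIDIA FrameView) may compute this differently.

def one_percent_low_fps(frame_times_ms):
    """Average FPS of the slowest 1% of frames in a capture."""
    worst = sorted(frame_times_ms, reverse=True)   # longest frames first
    n = max(1, len(worst) // 100)                  # slowest 1%, at least one
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# 99 smooth ~10 ms frames plus one 50 ms stutter: the average FPS looks
# fine, but the 1% low exposes the hitch.
times = [10.0] * 99 + [50.0]
print(f"avg FPS: {1000.0 / (sum(times) / len(times)):.1f}")  # ~96.2
print(f"1% low:  {one_percent_low_fps(times):.1f}")          # 20.0
```

This is why 1% lows are a better proxy for VRAM-related hitching than average frame rate: a handful of slow frames barely moves the average but dominates the 1% low.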

Automate Setting Selection With GeForce Experience & Download The Latest Patches
Recently, some new games have released patches to better manage memory usage, without hampering the visual quality. Make sure to get the latest patches for new launches, as they commonly fix bugs and optimize performance shortly after launch.

Additionally, GeForce Experience supports most new games, offering optimized settings for each supported GeForce GPU and VRAM configuration, giving gamers the best possible experience by balancing performance and image quality. If you're unfamiliar with game option lingo and just want to enjoy your games from the second you load them, GeForce Experience can automatically tune game settings for a great experience each time.

NVIDIA Technologies Can Help Developers Reduce VRAM Usage
Games are richer and more detailed than ever before, necessitating those 100 GB+ installs. To help developers optimize memory usage, NVIDIA freely provides a range of developer tools, technologies, and SDKs to help optimize games for all GPUs, platforms, and memory configurations.

Some Applications Can Use More VRAM
Beyond gaming, GeForce RTX graphics cards are used around the world for 3D animation, video editing, motion graphics, photography, graphic design, architectural visualization, STEM, broadcasting, and AI. Some of the applications used in these industries may benefit from additional VRAM, for example when editing 4K or 8K timelines in Premiere, or crafting a massive architectural scene in D5 Render.

On the gaming side, high resolutions also generally require an increase in VRAM. Occasionally, a game may launch with an optional extra large texture pack and allocate more VRAM. And there are a handful of games which run best at the "High" preset on the 4060 Ti (8 GB), and maxed-out "Ultra" settings on the 4060 Ti (16 GB). In most games, both versions of the GeForce RTX 4060 Ti (8 GB and 16 GB) can play at max settings and will deliver the same performance.



The benefit of the PC platform is its openness, configurability and upgradability, which is why we're offering the two memory configurations for the GeForce RTX 4060 Ti; if you want that extra VRAM, it will be available in July.

A GPU For Every Gamer
Following the launch of the GeForce RTX 4060 Family, there'll be optimized graphics cards for each of the three major game resolutions. However you play, all GeForce RTX 40 Series GPUs will deliver a best-in-class experience, with leading power efficiency, supported by a massive range of game-enhancing technologies, including NVIDIA DLSS 3, NVIDIA Reflex, NVIDIA G-SYNC, NVIDIA Broadcast, and RTX Remix.



For the latest news about all the new games and apps that leverage the full capabilities of GeForce RTX graphics cards, stay tuned to GeForce.com.

View at TechPowerUp Main Site | Source
 

ixi

Joined
Aug 19, 2014
Messages
1,451 (0.39/day)
Not beyond fast, but beyond overpriced.

We increased DLSS performance, yeeeeey. We've already made a few other advertisements hyping DLSS 3.0 against 2.0, and we limit DLSS 3.0 so it can only be run on 4xxx. Hype the fake resolution, baby.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,265 (4.67/day)
Location
Kepler-186f
Processor 7800X3D -25 all core ($196)
Motherboard B650 Steel Legend ($179)
Cooling Frost Commander 140 ($42)
Memory 32gb ddr5 (2x16) cl 30 6000 ($80)
Video Card(s) Merc 310 7900 XT @3100 core $(705)
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p ($399)
Case NZXT H710 (Red/Black) ($60)
I noticed they didn't include Hogwarts Legacy in the VRAM talk.

 
Joined
Feb 20, 2019
Messages
8,284 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
That sounds like a lot of PR/marketing copium and damage control. The article completely sidesteps the issue of the VRAM having to hold the texture assets. Sure, a larger cache means shader functions suffer fewer cache misses, but that's like 2% of what VRAM is actually used for by games and game developers.

I guess we'll see if the 4060 8GB can fit textures into VRAM that the 3060Ti cannot; I certainly have doubts.
 
Joined
May 1, 2023
Messages
83 (0.14/day)
They're really out on damage control here, huh? I don't remember them ever trying to convince people of their design choices in the past.

A good 4060 Ti 8 GB vs 4060 Ti 16 GB showdown would get to the bottom of this with some real-world testing. It'll be interesting to see if the 1% lows improve as much as they did on that 3070 that was upgraded from 8 GB to 16 GB.
 
Joined
Dec 29, 2021
Messages
67 (0.06/day)
Location
Colorado
Processor Ryzen 7 7800X3D
Motherboard Asrock x670E Steel Legend
Cooling Arctic Liquid Freezr II 420mm
Memory 64GB G.Skill DDR5 CAS30 fruity LED RAM
Video Card(s) Nvidia RTX 4080 (Gigabyte)
Storage 2x Samsung 980 Pros, 3x spinning rust disks for ~20TB total storage
Display(s) 2x Asus 27" 1440p 165hz IPS monitors
Case Thermaltake Level 20XT E-ATX
Audio Device(s) Onboard
Power Supply Super Flower Leadex VII 1000w
Mouse Logitech g502
Keyboard Logitech g915
Software Windows 11 Insider Preview
Summary translated from Nvidiaese:

"We're pretty much busted this generation since we skimped out on VRAM on our overpriced cards. We're going to write this big article talking about caching to justify our profit margins.
Meanwhile we won't mention the horrendous stuttering that happens when a cache miss happens and they happen a lot.

Eat it fuckers. Give us your money. Jensen wants to buy a ranch in Aspen."
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,265 (4.67/day)
Location
Kepler-186f
Processor 7800X3D -25 all core ($196)
Motherboard B650 Steel Legend ($179)
Cooling Frost Commander 140 ($42)
Memory 32gb ddr5 (2x16) cl 30 6000 ($80)
Video Card(s) Merc 310 7900 XT @3100 core $(705)
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p ($399)
Case NZXT H710 (Red/Black) ($60)
Summary translated from Nvidiaese:

"We're pretty much busted this generation since we skimped out on VRAM on our overpriced cards. We're going to write this big article talking about caching to justify our profit margins.
Meanwhile we won't mention the horrendous stuttering that happens when a cache miss happens and they happen a lot.

Eat it fuckers. Give us your money. Jensen wants to buy a ranch in Aspen."

don't forget his wall of leather jackets and marble countered kitchen. he wants that yacht club membership next boys, so you need less vram for max profits! :roll:
 
Joined
Apr 2, 2008
Messages
434 (0.07/day)
System Name -
Processor Ryzen 9 5900X
Motherboard MSI MEG X570
Cooling Arctic Liquid Freezer II 280 (4x140 push-pull)
Memory 32GB Patriot Steel DDR4 3733 (8GBx4)
Video Card(s) MSI RTX 4080 X-trio.
Storage Sabrent Rocket-Plus-G 2TB, Crucial P1 1TB, WD 1TB sata.
Display(s) LG Ultragear 34G750 nano-IPS 34" utrawide
Case Define R6
Audio Device(s) Xfi PCIe
Power Supply Fractal Design ION Gold 750W
Mouse Razer DeathAdder V2 Mini.
Keyboard Logitech K120
VR HMD Er no, pointless.
Software Windows 10 22H2
Benchmark Scores Timespy - 24522 | Crystalmark - 7100/6900 Seq. & 84/266 QD1 |
Yawn.... 16 GB on a 4060-series card, while welcome, is imho utterly pointless. The line should have been -

4050 - 8GB
4060 - 12GB
4070 - 16GB
4080 - 20GB
4090 - 24GB

The reason I am considering a 4080 to replace my 3080 is that I am getting more processing grunt for the same power usage; had the 7900XTX had the same power usage as the 4080, I would have gone with that instead.
 
Joined
Jan 17, 2023
Messages
28 (0.04/day)
Processor Ryzen 5 4600G
Motherboard Gigabyte B450M DS3H
Cooling AMD Stock Cooler
Memory 3200Mhz CL16-20-20-40 memory
Video Card(s) ASRock Challenger RX 6600
Storage 1TB WD SN570 NVME SSD and 2TB Seagate 7200RPM HDD
Software Windows 11 Pro
TL;DR :

NVIDIA tries to justify VRAM stagnation in their overpriced cards by using caching as a cop-out and trying to cast doubt on the accuracy of OSD readings of VRAM usage.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,653 (2.41/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
I'm really feeling the love in this thread... :love:
 
Joined
Sep 10, 2018
Messages
6,923 (3.05/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I'm really feeling the love in this thread... :love:

The fact that Nvidia thinks a $400 GPU should be used for 1080p med/high settings is pretty sad.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,653 (2.41/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
The fact that Nvidia thinks a $400 GPU should be used for 1080p med/high settings is pretty sad.
I didn't say that I disagreed with any of the comments here...
 
Joined
Sep 10, 2018
Messages
6,923 (3.05/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I didn't say that I disagreed with any of the comments here...

I didn't take your comment that way; I was talking more generally, the state of the sub-$400 GPU market is pretty sad.
 
Joined
Nov 7, 2017
Messages
52 (0.02/day)
TL;DR :

NVIDIA tries to justify VRAM stagnation in their overpriced cards by using caching as a cop-out and trying to cast doubt on the accuracy of OSD readings of VRAM usage.
At the end, though, they admit that 8 GB is not enough for the best settings of some games, and they kind of push you to buy the 16 GB model if you are wary.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,265 (4.67/day)
Location
Kepler-186f
Processor 7800X3D -25 all core ($196)
Motherboard B650 Steel Legend ($179)
Cooling Frost Commander 140 ($42)
Memory 32gb ddr5 (2x16) cl 30 6000 ($80)
Video Card(s) Merc 310 7900 XT @3100 core $(705)
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p ($399)
Case NZXT H710 (Red/Black) ($60)
Yawn.... 16gb on a 4060 series card while welcome is imho, utterly pointless. The line should have been -

4050 - 8GB
4060 - 12GB
4070 - 16GB
4080 - 20GB
4090 - 24GB

The reason I am considering a 4080 to replace my 3080 is because I am getting the more processing grunt for the same power useage, had the 7900XTX had the same power useage as the 4080 I would have gone with that instead.

I agree with this.

My 5800X3D caps out around a 7900 XT... so if my 6800 XT ever sells, my plan is to go the route of the 7900 XT. If the 4070 Ti had 16 GB of VRAM I would have gone with that instead. VRAM is important these days; that's been demonstrated clearly by several people.
 
Joined
Jan 9, 2023
Messages
304 (0.44/day)
Notice how they changed the settings for A Plague Tale and Resident Evil for the 8GB card.
Both games are notable for breaking pretty badly if they run out of VRAM.
Surely this has nothing to do with it?
Nahhhh
 
D

Deleted member 229121

Guest
I agree with this.

My 5800X3D caps out around a 7900 XT... so if my 6800 XT ever sells, my plan is to go the route of the 7900 XT. If the 4070 Ti had 16 GB of VRAM I would have gone with that instead. VRAM is important these days; that's been demonstrated clearly by several people.

Move to 4K? That's my plan anyway, need something with adaptive sync.
 

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,795 (1.64/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Thermalright Phantom Spirit SE
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage Nextorage NE1N 2TB ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard SteelSeries Apex 7
Software Windows 11 +startisallback
please don't parrot these press releases
nvidia knows what they need to do, they just don't want to
Nvidia just needs to stop hamstringing cards with 8 GB of VRAM
DLSS 3 is good, but it isn't an excuse to produce inferior hardware
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,265 (4.67/day)
Location
Kepler-186f
Processor 7800X3D -25 all core ($196)
Motherboard B650 Steel Legend ($179)
Cooling Frost Commander 140 ($42)
Memory 32gb ddr5 (2x16) cl 30 6000 ($80)
Video Card(s) Merc 310 7900 XT @3100 core $(705)
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p ($399)
Case NZXT H710 (Red/Black) ($60)
Move to 4K? That's my plan anyway, need something with adaptive sync.

I am considering 4k eventually, not yet though as I really prefer playing my PC games at high refresh rates, and I would need a 4090 for that, honestly wouldn't mind getting a 4090 if I had a decent paying job right now. I am waiting for 32" 4k OLED 165hz with a panel heatsink for longevity, that's my stop. by the time that comes out, 5090 will be out. so maybe I will do a 5090 and that. for now I am content with 1440p 165hz, I have a really good panel on my NZXT it looks damn good to my eyes.

 
Joined
May 17, 2021
Messages
3,005 (2.33/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
at this point the vram discussion is more like a shouting match, lots of unreasonable claims, everyone has their own opinion, and me, i'm still waiting for reasonable tests in reasonable scenarios. This should be a 1440p card to be used with medium settings, and that's what i want to see. Not 1080p (even if anyone can use it for that, for sure), not ultra, not RT (no one is actually taking RT seriously), not 4k, not tested with a 4090 pretending to be a 4060.
 