
CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
43,255 (6.74/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
They should work better, I would argue, due to the fundamental way GPUs work (SIMT/SPMD). That makes it much easier to decentralize the chip into compute modules; also, with GPUs you didn't have to worry much about added latencies to begin with.
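A rough sketch of what I mean, in CUDA terms (illustrative only, not any vendor's actual design): every thread runs the same program on its own piece of data, and thread blocks are independent by construction, which is what makes splitting them across compute modules plausible.

```
// SPMD/SIMT in a nutshell: one program, many data elements, no
// communication between blocks -- so a scheduler could, in principle,
// hand different blocks to different chiplets without the kernel caring.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // each thread grabs its own element
    if (i < n)
        data[i] *= factor;                          // no cross-block traffic needed
}

// Launch: a grid of independent blocks; nothing ties a block to a particular die.
// scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
```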

The problem is why would you want to make one right now? A chiplet GPU would only make sense if you reached the absolute limit of size/power/performance and any further advancement would affect one of those metrics to the point that it is no longer feasible to make a monolithic GPU. Or, if you want to make an APU (right now that's a bad idea on PCs due to lack of bandwidth).

That's exactly where Rome sits right now on the CPU front; it was meant to be the biggest, fastest, most power-efficient CPU AMD could make right now. With Navi they clearly didn't have those goals in mind. They wanted an APU for consoles, and whatever design resulted from that, they decided to port to PCs in the form of dedicated graphics.

Best to wait and see for the next 2 years.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,263 (4.42/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
How much do games use these days at 4K ultra? Is 8GB really a big limitation?
I quickly checked the analyses TPU provides.
E.g. Metro Exodus, not even 6GB with RTX on:
https://www.techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
Generally speaking, most games tested use around 4GB in 4K.
The biggest usage I've found:
https://www.techpowerup.com/reviews/Performance_Analysis/Middle_Earth_Shadow_of_War/5.html
8.3GB, but the comment is crucial: the usual 4GB on "high" settings.

Also, I remember very well that 8GB was perfectly fine when Vega came out. AMD convinced us that HBM2 and HBCC mean it doesn't need more. That it performs like a card that has more memory.
What happened to all that?
I generally don't trust these memory requirement assessments for games because usage varies wildly. For example, there are reviews that say the GTX 1080 uses at most 4 GiB in Assassin's Creed: Origins at 4K ultra. I'm not playing it on ultra, only at 1920x1200, and I've seen my VRAM exceed 4 GiB. This may be because I was many hours into a play session before I checked and the amount of content cached in VRAM had accumulated over time. Or it could simply be where I was in the world.

Point is: memory is something you have enough of until you don't. The Xbox One X makes 9 GiB of VRAM/RAM available to games, and most of that is used by the GPU. This has led to more games pushing closer to 6 GiB of memory use. The PS5 and Microsoft's answer to it are likely to raise the memory threshold higher. 8 GiB will likely be okay in dGPUs, but there will be more games in the next 3-5 years that want more than 8 at high resolutions.

I know that if I'm blowing $500+ on a card now, I would want at least 10 GiB, because I do tend to use my cards for 3-5 years.
 
Last edited:
Joined
Apr 21, 2010
Messages
5,731 (1.06/day)
Location
West Midlands. UK.
System Name Ryzen Reynolds
Processor Ryzen 1600 - 4.0Ghz 1.415v - SMT disabled
Motherboard mATX Asrock AB350m AM4
Cooling Raijintek Leto Pro
Memory Vulcan T-Force 16GB DDR4 3000 16.18.18 @3200Mhz 14.17.17
Video Card(s) Sapphire Nitro+ 4GB RX 580 - 1450/2000 BIOS mod 8-)
Storage Seagate B'cuda 1TB/Sandisk 128GB SSD
Display(s) Acer ED242QR 75hz Freesync
Case Corsair Carbide Series SPEC-01
Audio Device(s) Onboard
Power Supply Corsair VS 550w
Mouse Zalman ZM-M401R
Keyboard Razor Lycosa
Software Windows 10 x64
Benchmark Scores https://www.3dmark.com/spy/6220813

FordGT90Concept

"I go fast!1!11!1!"
They should work better, I would argue, due to the fundamental way GPUs work (SIMT/SPMD). That makes it much easier to decentralize the chip into compute modules; also, with GPUs you didn't have to worry much about added latencies to begin with.
:roll: That last sentence has me in stitches. Latency is the reason why 100+ GB/s memory is common in cards. If you want 144 fps, you need to do everything required to draw a frame in 6.94 ms. 6.94 ms! GPUs have no time to wait for anything.
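For anyone who wants to sanity-check the arithmetic, a trivial snippet (plain C++, just the math):

```
#include <cstdio>

int main()
{
    const double targets[] = {60.0, 144.0, 240.0};
    for (double fps : targets)
        std::printf("%5.0f fps -> %.2f ms per frame\n", fps, 1000.0 / fps);
    // 144 fps -> 6.94 ms, the budget quoted above
    return 0;
}
```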

CPUs: threads are localized by nature. GPUs: work on warps/wavefronts where each warp/wavefront spawns thousands of parallel executions accessing the same data. Have some reading material.

Remember how SLI/Crossfire works: every GPU mirrors the memory of every other GPU. This is extremely wasteful, but it's the only way to make sure every GPU has access to the data it needs, when it needs it. When you have multiple GPU chiplets that need to access the same resource, each chiplet trying to access it is going to be penalized by the other chiplets trying to access it as well. The memory controller would have to be able to copy resources to all chiplets' local memory simultaneously and in real time. But that presents its own problem: latency between the memory controllers and the chiplets. It's one thing, after another, after another that leads to not being able to meet that 6.94 ms goal.
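A loose sketch of that mirroring (hypothetical CUDA host code; the function and names are made up for illustration): each GPU gets its own full copy of the same resource, so effective VRAM doesn't scale with GPU count.

```
#include <cuda_runtime.h>

// Upload one resource to every GPU, SLI/CrossFire style: N GPUs -> N copies.
void mirror_resource(const void *host_texture, size_t bytes,
                     void *dev_copies[], int gpu_count)
{
    for (int gpu = 0; gpu < gpu_count; ++gpu) {
        cudaSetDevice(gpu);                        // select one physical GPU
        cudaMalloc(&dev_copies[gpu], bytes);       // total cost: gpu_count * bytes
        cudaMemcpy(dev_copies[gpu], host_texture, bytes,
                   cudaMemcpyHostToDevice);        // each copy filled separately
    }
    // Two 8 GB cards still behave like one 8 GB pool: both hold the same data.
}
```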

The hope was that Infinity Fabric would be fast enough to make it possible, but... the talk of Navi and chiplets hasn't ever really materialized. I mean, the PS5 could have a chiplet design with separate Zen 2, I/O, and Navi packages on an MCM, but putting multiple Navis together on one package with the intent to masquerade as one GPU... there have been no hints of that since years ago (a vague reference to "scalability").

The problem is why would you want to make one right now?
Lower cost, better yields, and better performance.
 
Last edited:
Joined
Jan 8, 2017
Messages
9,624 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
GPUs have no time to wait for anything.

You can get plenty of work done in 6.94 ms if you have the bandwidth. GPUs are designed to work around latencies, not to improve upon them; it's the reason why, for example, GPUs typically have an order of magnitude smaller caches compared to similarly sized CPUs. The predictability of, and assumptions about, what sort of programs are going to run on a GPU make it so that scheduling can hide latencies very well.

Edit :

GPUs: work on WAVEs where each WAVE spawns thousands of threads accessing the same data.

Not the same data, typically. GPUs execute instructions in a SIMD fashion (Single Instruction, Multiple Data) and the work is generally data-independent; if GPUs worked as you described, manufacturers would have given up long ago. Thankfully they don't.

And wavefronts do not "spawn thousands of threads"; I think you are seriously confusing things here. A wavefront is a collection of 32 software threads (Nvidia, which calls it a warp) or 64 (AMD) that are subsequently scheduled for execution onto the actual hardware threads within a CU/SM. Wavefronts don't even consist of threads in absolute terms; they are more like instructions from a collection of threads. The same instruction gets executed for multiple threads, aka SIMT, another design philosophy for GPUs.
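A minimal CUDA sketch of that layout, assuming Nvidia's 32-wide warps (warpSize is a CUDA built-in): it only records which warp and which lane each software thread lands in.

```
// Threads 0-31 of a block form warp 0, threads 32-63 form warp 1, and so on.
// All lanes of a warp are issued the same instruction together (SIMT).
__global__ void show_warp_layout(int *warp_id, int *lane_id, int n)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n)
        return;
    warp_id[t] = threadIdx.x / warpSize;  // which warp within the block
    lane_id[t] = threadIdx.x % warpSize;  // position within that warp
}
```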
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
I did major edits to the post above clarifying all those things and giving an example of why it hasn't been done.

Also: look at page 22 of the referenced document: Radeon HD 7970 = 2560 active threads per L1 data cache. So yes, "thousands of threads."
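The arithmetic behind that figure, assuming GCN's usual 4 SIMDs per CU with up to 10 resident wavefronts each (plain C++):

```
#include <cstdio>

int main()
{
    const int simds_per_cu     = 4;   // SIMD units in one GCN compute unit
    const int waves_per_simd   = 10;  // max resident wavefronts per SIMD
    const int threads_per_wave = 64;  // AMD wavefront width
    std::printf("%d active threads per CU (one L1 data cache)\n",
                simds_per_cu * waves_per_simd * threads_per_wave);  // prints 2560
    return 0;
}
```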
 
Last edited:
Joined
Jun 28, 2016
Messages
3,595 (1.15/day)
Point is: memory is something you have enough of until you don't.
But why would it run out? 8GB is a lot.
A few games checked by TPU pull everything they can ever use as soon as possible. And it's still under 10GB. So what's the point of more memory?

And just as you said: even at 1080p a GPU can gather 4GB of data over a long session. But that simply means holding a lot of data that it'll never use, like textures of locations you can't get back to.
When a GPU reaches its memory limit, it simply removes the stuff that's least likely to be useful.
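That eviction idea as a loose sketch (plain C++; this is not how any actual driver is written): when the budget would be exceeded, the least-recently-touched resource goes first.

```
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

// Toy LRU model of VRAM residency: touch() marks a resource as used and
// evicts the coldest entries once capacity would be exceeded.
struct VramCache {
    std::size_t capacity, used = 0;
    std::list<int> lru;  // front = most recently used resource id
    std::unordered_map<int, std::pair<std::list<int>::iterator, std::size_t>> entries;

    void touch(int id, std::size_t bytes) {
        auto it = entries.find(id);
        if (it != entries.end()) {
            lru.erase(it->second.first);       // already resident: just re-rank it
        } else {
            while (used + bytes > capacity && !lru.empty()) {
                int victim = lru.back();       // least recently used
                used -= entries[victim].second;
                entries.erase(victim);
                lru.pop_back();                // "removes the stuff least likely to be useful"
            }
            used += bytes;
        }
        lru.push_front(id);
        entries[id] = {lru.begin(), bytes};
    }
};
```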

So if we assume 4K games need around 10GB, what's the point of 16GB vs 8GB? You'll use 2 more for stuff that doesn't have to be there. And you won't use the other 6GB. Ever.
That unused HBM2 costs you $100 - money that could have given you a much faster GPU instead.

And once again: Vega was designed to work well with less RAM, right? All these presentations and videos weren't a dream? :)
The Xbox One X makes 9 GiB of VRAM/RAM available to games, and most of that is used by the GPU.
Of course. Consoles are reading as much as they can the moment you start a game. That's why it takes so long to start a game (compared to PCs).
This has led to more games pushing closer to 6 GiB of memory use. The PS5 and Microsoft's answer to it are likely to raise the memory threshold higher. 8 GiB will likely be okay in dGPUs, but there will be more games in the next 3-5 years that want more than 8 at high resolutions.
But why? I mean: where do these extra GB come from?
The only thing that could significantly increase the RAM requirement is higher resolution. Radeon VII is a 4K card tops (at max settings).

Check the tests TPU made. They started 3 years ago.
https://www.techpowerup.com/reviews/?category=PC+Port+Testing
Most games tested at 4K used around 4-5GB.
There are a few outliers, like CoD or RoTR. In their case performance doesn't drop on cards with smaller VRAM, so the GPU holds more data than it needs. W1zzard mentions that in the comments as well.

I see no reason why 4K games would suddenly require 10GB - let alone 16GB. If you know one, please share. :)
I know that I'm blowing $500+ on a card now, I would want at least 10 GiB because I do tend to use my cards for 3-5 years.
But wouldn't you prefer the card to have more performance, not more RAM?
Games still run on 8GB. And they still will 5 years from now. It's just that they'll need to read data from disk a bit more often. It's not a big problem - I doubt you'd notice. AMD could have spent that $100 on extra CUs instead. Or a better cooler. Or just made a profit for a change and saved money for R&D.

Maybe 5 years from now you'll have a 5 or 6K monitor. Maybe 6K games will utilize 12GB of RAM. But so what? Radeon VII is too slow for that anyway.
 
Joined
Jan 8, 2017
Messages
9,624 (3.28/day)
None of what's in there is at odds with what I have been saying; as a matter of fact, it matches it entirely. A wavefront doesn't consist of thousands of threads, it's either 32 or 64; hopefully we got that out of the way. Only when we get to a collection of wavefronts can we talk about thousands of threads.

Radeon HD 7970 = 2560 active threads per L1 data cache.

Those 2560 threads may or may not all go to L1; they may go to L2 or global memory. The fact that there can be mere bytes of instruction/data available per thread should make it obvious to you that caches do not play much of a role here, and neither does the latency benefit associated with them.

That's how a GPU handles the horrendous memory access times: you make it so that work is already scheduled while you fetch data:
[diagram: wavefronts being scheduled so that execution overlaps with memory fetches]


You can expand this vertically and all you would need is more bandwidth; latency could remain untouched or even degrade slightly. That's why this isn't critical even when you need a frame done in 6.94 ms and why chiplets would not be hard to implement. You can only do this because you know ahead of time that you are going to execute the same sequences of instructions over multiple pieces of data.

You are talking about memory controllers and how there would be contention between multiple chiplets accessing the same data. First of all, they rarely need to access the same data, as I pointed out; and secondly, this isn't anything new - you already have this problem with multiple CUs/SMs querying the same data from the same memory controller. Nothing about this makes it so that it couldn't be dealt with easily.
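A small CUDA sketch of the scheduling idea behind that diagram, assuming a plain memory-bound kernel: the cure for slow memory is more resident warps, not lower latency.

```
// Each load below stalls only the warp that issued it; the SM/CU scheduler
// simply issues instructions from other resident warps in the meantime.
__global__ void saxpy(const float *x, float *y, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // hundreds of cycles of memory latency, hidden
}

// Oversubscribed launch: far more warps than ALUs, so the pipeline stays fed
// even though every individual memory access is slow.
// saxpy<<<(n + 255) / 256, 256>>>(d_x, d_y, 2.0f, n);
```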
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
But why would it run out? 8GB is a lot.
A few games checked by TPU pull everything they can ever use as soon as possible. And it's still under 10GB. So what's the point of more memory?

And just as you said: even at 1080p a GPU can gather 4GB of data over a long session. But that simply means holding a lot of data that it'll never use, like textures of locations you can't get back to.
When a GPU reaches its memory limit, it simply removes the stuff that's least likely to be useful.

So if we assume 4K games need around 10GB, what's the point of 16GB vs 8GB? You'll use 2 more for stuff that doesn't have to be there. And you won't use the other 6GB. Ever.
That unused HBM2 costs you $100 - money that could have given you a much faster GPU instead.
I have an original PCI Radeon with 32 MB. That's enough VRAM, right? Right!?![/sarcasm]

But why? I mean: where do these extra GB come from?
Larger textures, more triangles, async compute workload, etc.

Most games tested at 4K used around 4-5GB.
3 years ago. At that time, the GTX 970 was made functionally obsolete because it doesn't have enough VRAM. Now the GTX 1060 6 GiB is dangerously close to being overloaded. Meanwhile, the RX 470 8 GiB is fine, and so is the RTX 2080... but not for long.

But wouldn't you prefer the card to have more performance, not more RAM?
They're one and the same for the Radeon VII: 1 TB/s versus 483.8 GB/s, and 16 GiB versus 8 GiB. You get double the bandwidth along with double the VRAM. It's win-win, for the reasonable price of $100.

That's why this isn't critical even when you need a frame done in 6.94 ms and why chiplets would not be hard to implement.
You sound so sure of yourself, yet it hasn't been done, even in prototyping. Either the engineers who have every incentive to go chiplet are wrong, or you're wrong. Guess who my money is on.
 
Last edited:

eidairaman1

The Exiled Airman
Joined
Mar 10, 2010
Messages
11,880 (2.19/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB
Video Card(s) Asus tuf RX7900XT /Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores laptop Timespy 6506
How much do games use these days at 4K ultra? Is 8GB really a big limitation?
I quickly checked the analyses TPU provides.
E.g. Metro Exodus, not even 6GB with RTX on:
https://www.techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
Generally speaking, most games tested use around 4GB in 4K.
The biggest usage I've found:
https://www.techpowerup.com/reviews/Performance_Analysis/Middle_Earth_Shadow_of_War/5.html
8.3GB, but the comment is crucial: the usual 4GB on "high" settings.

Also, I remember very well that 8GB was perfectly fine when Vega came out. AMD convinced us that HBM2 and HBCC mean it doesn't need more. That it performs like a card that has more memory.
What happened to all that?


I believe it's a bit too early to say chiplets work great in CPUs and that Zen 2 is a success. Don't you think? ;-)
I generally don't trust these memory requirement assessments for games because usage varies wildly. For example, there are reviews that say the GTX 1080 uses at most 4 GiB in Assassin's Creed: Origins at 4K ultra. I'm not playing it on ultra, only at 1920x1200, and I've seen my VRAM exceed 4 GiB. This may be because I was many hours into a play session before I checked and the amount of content cached in VRAM had accumulated over time. Or it could simply be where I was in the world.

Point is: memory is something you have enough of until you don't. The Xbox One X makes 9 GiB of VRAM/RAM available to games, and most of that is used by the GPU. This has led to more games pushing closer to 6 GiB of memory use. The PS5 and Microsoft's answer to it are likely to raise the memory threshold higher. 8 GiB will likely be okay in dGPUs, but there will be more games in the next 3-5 years that want more than 8 at high resolutions.

I know that if I'm blowing $500+ on a card now, I would want at least 10 GiB, because I do tend to use my cards for 3-5 years.
Apex Legends can use more than 8GB, and that's DX11. As a 4K ultra-IQ gamer, that 8GB limit gets tested today; imagine what spec GTA VI will need.
 
Joined
Jun 28, 2016
Messages
3,595 (1.15/day)
Apex Legends can use more than 8GB, and that's DX11. As a 4K ultra-IQ gamer, that 8GB limit gets tested today; imagine what spec GTA VI will need.
"Can use" and "needs" are slightly different things. We've already said in this thread that GPUs gather a lot of data, so likely many games will go over the 8GB mark at some point. It doesn't mean 8GB are needed.
Apex Legends' official recommended specs state 8GB of RAM. And the game runs perfectly fine with that budget - even at 4K 60fps max settings.
Can it gather more data? Sure. Does it improve performance? I bet it doesn't. If you have tests that show it does, post a link.

GTA VI will work well on 6GB and won't benefit from more than 8GB. You know how I know this? Because that's the amount of RAM mainstream GPUs have. They want to sell tens of millions of copies of this game. They'll design it to look good on tens of millions of PCs.
 
Last edited:

eidairaman1

The Exiled Airman
"Can use" and "needs" are slightly different things. We've already said in this thread that GPUs gather a lot of data, so likely many games will go over the 8GB mark at some point. It doesn't mean 8GB are needed.
Apex Legends' official recommended specs state 8GB of RAM. And the game runs perfectly fine with that budget - even at 4K 60fps max settings.
Can it gather more data? Sure. Does it improve performance? I bet it doesn't. If you have tests that show it does, post a link.

GTA VI will work well on 4GB and won't benefit from more than 8GB. You know how I know this? Because that's the amount of RAM mainstream GPUs have. They want to sell tens of millions of copies of this game. They'll design it to look good on tens of millions of PCs.
I don't think Grand Theft Auto 6 is out yet, so don't assume.
 
Joined
Jan 8, 2017
Messages
9,624 (3.28/day)
You sound so sure of yourself, yet it hasn't been done, even in prototyping.

The good thing is you don't even need to believe me; you only need to look at how GPU performance has evolved in the last decade or so. GPU manufacturers have been able to keep up a fairly linear performance increase over the years because their major concerns were how to fit in more execution resources and how to get more memory bandwidth. Latency, while it can't be ignored, wasn't the main focus.

And on the other side, that's one of the reasons CPU performance hasn't gone up much as of late: you can make a CPU core with a million execution ports and TB/s of bandwidth available, but if it can't get that one particular instruction and its data in time, it will all be for nothing. Latency reigns king here.
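A toy illustration of that difference (plain C++; the names are mine): a dependent pointer chase is pure latency, and no amount of bandwidth can speed it up, which is the CPU's problem in miniature.

```
#include <cstdint>

struct Node { Node *next; };

// Each hop needs the previous load's result before it can even be issued,
// so the loop runs at one full memory latency per step regardless of bandwidth.
std::intptr_t chase(Node *p, long steps)
{
    while (steps--)
        p = p->next;
    return reinterpret_cast<std::intptr_t>(p);  // keep the result live
}
// A GPU workload is the opposite: millions of independent accesses that can
// all be in flight at once, so extra bandwidth keeps paying off.
```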

Chiplet CPUs were a much more difficult problem to crack, which is probably why AMD decided to focus on them first.

Either the engineers who have every incentive to go chiplet are wrong

So basically your argument boils down to: it hasn't been done yet, so it can't be done or it's very difficult. Kind of a weak point. Regardless, I feel like I've explained my reasons for believing the opposite fairly well, so I'll end it here.
 
Joined
Jun 28, 2016
Messages
3,595 (1.15/day)
I don't think Grand Theft Auto 6 is out yet, so don't assume.
I'm not assuming. It's an educated guess, sometimes called "a forecast".

Nvidia dominates the market, and their mainstream cards of the latest generation are 6-8 GB (not 4-8 as I said earlier). For the next 2 years, no game developer will make a high-volume game that requires more. It wouldn't make any sense.
 
Joined
May 5, 2019
Messages
6 (0.00/day)
System Name My Bitch
Processor Intel i7 3770K@4.6ghz
Motherboard ASUS P8Z77-V Pro
Cooling Cooler Master Hyper 212 EVO
Memory Sniper 16GB DDR3 2400 (PC3 19200)
Video Card(s) Gigabyte RX Vega 56
Storage Samsung SSD & WD Sata 7200 RPM Drive
Display(s) LG OLED 65"
Case Corsair Crystal Series 570X RGB ATX Mid-Tower
Audio Device(s) Sound Blaster Audigy SE 7.1 + Yahama 7.1 Receiver
Power Supply Seasonic 550 Watt 80+ Gold Certified
Mouse Logitech M705
Keyboard Logitech K520
Software Win 10
How odd. Suddenly those who trashed AdoredTV as an AMD fanboy are the biggest advocates of his latest rumors and speculations...

More interesting is that they want to come off as someone who doesn't want anything to do with his fanboy trash, yet they are the first ones to post his videos.


How odd indeed.

I know right? They jump at any chance to toss dirt over AMD's way.

Long-term lurker here; finally decided to register to interact a bit, because there seem to be very few folks around these days that truly have an honest, genuine love for this hobby. Sad state of affairs; it's just as bad as all the political drivel we see here in the U.S. these days.


Lisa Su did that. She was hyped about Navi and now she isn't. Something clearly happened on Navi that was unexpected.

That's the most ridiculous assumption ever. Seriously? Dude? Come on, lol.


Some of you bros are reaching very hard to make a freaking brand/company look as bad as possible at any chance, and that's utterly childish. Can't you see that? Can't you see how stupid and petty that is?

Whatever happened to the genuine appreciation and enthusiasm for this hobby, instead of all the snarky anti-AMD comments that are made to look like honest discussion, even though the obvious seeps right through the cracks thanks to these constant attackers who show up at every opportunity?

I mean, really, AMD's GPUs are a good thing that all of us should truly want to see succeed, and frankly, they will; they have for over 20 years, and AMD has brought a lot of great technology to the GPU side of things. One of the reasons I really appreciate AMD GPUs is the innovation they aggressively go after. Some really great things have come from this approach, and other times not so much, but what AMD has done on the GPU side has really steered the GPU industry forward in a lot of ways. Let's just take AMD's Mantle API: honestly, that is what we have today with DirectX 12 and Vulkan, and it's just one example of how AMD's GPU division brings innovation to the table that often ends up setting the trend for all GPU technologies. So why on earth we have people with pent-up hatred towards AMD GPUs is utterly ridiculous.

Whenever I buy an AMD GPU, I feel like I am buying a GPU prototype, because honestly, a lot of the time that's what we are getting with an AMD GPU: prototype hardware feature-sets. Look at the performance increase we get when a game is truly coded to take advantage of AMD's superior DirectX 12 feature-set hardware in its current modern GPUs: it literally takes a HUGE leap forward in perf, matching Nvidia's next-higher-tier GPUs. Just look at the recent Forza games for one example of this, where we see the Vega 64 and 56 leap up in performance; it's quite amazing and very interesting. We will see a lot of this coming soon with the last round of games for the PS4 Pro and Xbox One X, as developers are now finally starting to develop games from the ground up to take advantage of the AMD DX12 hardware features in the current consoles and, more importantly, in the upcoming next generation of consoles; both the next Xbox and the PS5 will have semi-custom Navi GPU hardware inside. PC gamers who happen to have a Vega or Navi GPU will reap major benefits from this, starting about now and for the foreseeable future, especially with next-gen games on the upcoming new consoles from Sony and MS.

Personally, if someone wanted the best bang-for-buck PC GPU hardware that will give them long-lasting performance, AMD GPUs are the way to go. I'm not talking about the PC enthusiast that must have the best of the best top-dollar hardware every 12 months; no, I'm talking about the PC gamer that wants to see their hardware last for 3-4 years. Vega/Navi is the way to go, no two ways about that; you'd be wasting your hard-earned cash to go with the Nvidia equivalent, imho.

Anyways, I really do appreciate some of you guys in here that have a genuine appreciation for both Nvidia and AMD hardware, especially AMD GPU hardware, because I find AMD does not get enough credit these days for its GPU hardware. I find AMD GPUs very innovative, always pushing for the next new GPU technology feature to lead the way forward, and AMD does it time and time again. You gotta keep in mind AMD has a fraction of the R&D budget that Nvidia has for its GPU development, yet look over the years at what AMD has done for the GPU industry. It's done a lot, and to see people here purposely trash AMD is just downright disrespectful, not just towards AMD as a company but to those of us that have a true, genuine appreciation for this PC hardware/gaming hobby some of us love.

If any of you guys are in the market for a GPU and want the best bang for buck, go with an AMD GPU. Lots of games on the current PS4 Pro and Xbox One X are now coming out taking full advantage of the AMD GPUs inside those consoles, and that rolls over to the PC side, where we see performance gains as time goes on and more mature AMD drivers come out. The most important thing to remember is that these upcoming next-gen consoles are using AMD GPUs again, with a hybrid of Navi hardware features and some custom features, so PC users with AMD GPUs will reap benefits big time in the future. And don't get me wrong, I appreciate Nvidia just as much, but that's for another discussion, since this thread is AMD-focused.


 
Joined
Jun 28, 2016
Messages
3,595 (1.15/day)
3 years ago. At that time, the GTX 970 was made functionally obsolete because it doesn't have enough VRAM. Now the GTX 1060 6 GiB is dangerously close to being overloaded. Meanwhile, the RX 470 8 GiB is fine, and so is the RTX 2080... but not for long.
This is not what I was asking about.
You have data points from 3 years. 4K resolution. Pretty much the same RAM usage.

Why do you expect a sudden change in the trend now? Why would textures start to grow in the next 3 years?
They're one and the same for the Radeon VII: 1 TB/s versus 483.8 GB/s, and 16 GiB versus 8 GiB. You get double the bandwidth along with double the VRAM. It's win-win, for the reasonable price of $100.
So you'd rather pay $100 for meaningless numbers in a datasheet than for something empirical?
What's the point of larger transfers or more VRAM? Nvidia cards still consume fewer watts and give more fps.
I'm sure there's an inflection point. What if AMD made a 32GB version for +$200? Would you go for that? I mean, it's so much RAM! Like on pro cards!

I don't understand how you buy PC parts. What's the goal?
 
Joined
Mar 10, 2010
Messages
11,880 (2.19/day)
Location
Manchester uk
I'm not assuming. It's an educated guess, sometimes called "a forecast".

Nvidia dominates the market, and their mainstream cards of the latest generation are 6-8 GB (not 4-8 as I said earlier). For the next 2 years, no game developer will make a high-volume game that requires more. It wouldn't make any sense.
Your belief that increasing settings does not increase graphical image quality (using more VRAM) is nonsense; I'm not searching for anything to prove that.

But I will state, because I've experienced it (not assumed or believed it), that Apex Legends will indeed scale its VRAM usage very well. 4K using 4GB of VRAM might run OK, but it's dropping the resolution scale to between a quarter and a third; I've tried it, and it doesn't look the same on a 4K monitor.

How about you prove that settings do little, since their inclusion in almost every game seems to suggest that my impression of higher image quality from them is right.

Nvidia means dick to console land, and consoles are expected to grow their user base (20%, per Jon Peddie); the mainstream will be represented well by Navi derivatives. The PC master race can keep buying Nvidia if it wants; AMD will be fine, I'm sure.

You won't; the future's bleak for you. You don't want AMD GPUs, you don't want multi-cores, you don't want anything better than the lowest grade of GPU in your system; you just want Nvidia to rule the world, yet you're blind to the many millions who couldn't give a shit whose chip is inside their gaming beast (consoles, we're talking kids especially, with their Pros and Ones).

And you love hanging your tongue out in a tech and computer enthusiasts' forum. o_O:kookoo::D:toast::clap::lovetpu:
 

eidairaman1

The Exiled Airman
I'm not assuming. It's an educated guess, sometimes called "a forecast".

Nvidia dominates the market, and their mainstream cards of the latest generation are 6-8 GB (not 4-8 as I said earlier). For the next 2 years, no game developer will make a high-volume game that requires more. It wouldn't make any sense.

Educated guess is an oxymoron
 
Joined
Jun 28, 2016
Messages
3,595 (1.15/day)
your belief that increasing settings does not increase graphical image quality (using more Vram)is nonesens ,im not searching nothing to prove that.
That's the question I've asked @FordGT90Concept.
For 3 years we haven't seen a significant increase in VRAM needs. 4K games utilize roughly the same amount, and that's on the highest settings games offer.
So why would this trend change now? Why would games launching in the next 3 years utilize more?
It'll still be 4K.
Maybe you know?

Looking at RTX cards, clearly RTRT and DLSS need some RAM (1-2GB above what the game normally needs). But that's hardware that won't magically appear on the Radeon VII. And mainstream Nvidia RTX cards are <=8GB nevertheless.

Educated guess is an oxymoron
No, it isn't.
 

eidairaman1

The Exiled Airman
That's the question I've asked @FordGT90Concept.
For 3 years we haven't seen a significant increase in VRAM needs. 4K games utilize roughly the same amount, and that's on the highest settings games offer.
So why would this trend change now? Why would games launching in the next 3 years utilize more?
It'll still be 4K.
Maybe you know?

Looking at RTX cards, clearly RTRT and DLSS need some RAM (1-2GB above what the game normally needs). But that's hardware that won't magically appear on the Radeon VII. And mainstream Nvidia RTX cards are <=8GB nevertheless.


No, it isn't.

It's like a guesstimate... smh
 
Joined
Mar 18, 2008
Messages
5,717 (0.93/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Intel will need at least 5 years (quite possibly 10) to be able to compete with Nvidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They have not been in form lately on many fronts. I would also like to have three or more competitors in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established players for the first few years.

Who knows. Intel is not starting from zero, as they have been making iGPUs forever. Realistically, with their R&D force as well as pure cash flow, Intel has a way better chance of battling Nvidia in the dGPU market. For AMD, with its limited resources, fighting on two fronts will always be hard.

There will always be competition, don't worry too much.
 
Joined
Jun 28, 2016
Messages
3,595 (1.15/day)
It's like a guesstimate... smh
You don't know what "educated guess" means and you can't even use a dictionary...

Focus on giving +1 to people that praise AMD. Why leave the comfort zone?

Who knows. Intel is not starting from 0 as they have been making iGPU forever. Realistically with the R&D force as well as pure cash flow, Intel has way better chance of battling Nvidia at dGPU market. AMD with its limited amount of resources fighting on 2 fronts will always be hard.
Well... not so long ago some people on this forum were sure that Intel designed 6-core CPUs in a few months - because AMD surprised them with Zen. Otherwise they'd be making 4 cores forever.
Yet, when it comes to GPUs, the same people are very worried about Intel's R&D potential... ;-)
 

eidairaman1

The Exiled Airman
You don't know what "educated guess" means and you can't even use a dictionary...

Focus on giving +1 to people that praise AMD. Why leave the comfort zone?


Well... not so long ago some people on this forum were sure that Intel designed 6-core CPUs in a few months - because AMD surprised them with Zen. Otherwise they'd be making 4 cores forever.
Yet, when it comes to GPUs, the same people are very worried about Intel's R&D potential... ;-)

Wrong there, hypocrite.
 
Joined
Mar 10, 2015
Messages
3,984 (1.10/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Yup, the 970 comes to mind now - not enough RAM after 3.5GB.

Honestly, I think the 970 got a pretty decent service life for what it was.

imagine what spec GTA VI will need.

By the time GTA VI makes it to PC, all of these GPUs will be obsolete anyway. What did it take for GTA V? Over a year? RDR2 still isn't here 6 months later. If anyone knows how to milk people, it's Rockstar.
 