
AMD Radeon R9 Fury X 4 GB

Joined
Jun 26, 2015
Messages
4 (0.00/day)
Location
India
System Name NEMESIS
Processor Core i5 2500k
Motherboard Asus P8Z68 Deluxe GEN3
Cooling Thermalright Ultra 120 Extreme
Memory 4 x 4 GB GSkill DDR3
Video Card(s) Gigabyte G1 GTX980 OC
Storage Sandisk Extreme Pro 480GB + 10 TB HDD
Case Xigmatek Elysium
Audio Device(s) Onboard
Power Supply Corsair HX1000
Mouse Logitech G600
Keyboard Corsair Vengeance K90
Software Windows 7 SP1
Personally, the main problem I see with this card is the use of HBM memory. HBM, with its current technical limitation of 4 GB, is not ready for prime time on a flagship card. AMD pitched this as the pinnacle of cards for UHD, but did nothing to support that claim. The lack of an HDMI 2.0 port is one shortcoming, but the more important one is the 4 GB of HBM they put on this card.

What is the actual driving force behind wanting UHD gaming? It is not just the screen resolution. When we have four times the pixel count of FHD, we want to use those extra pixels to present more detail, and that comes from higher-resolution, more detailed textures. What we are seeing as UHD gaming today is just games being run at UHD resolution with the same textures that were designed for FHD. In place of one pixel at FHD, we get four pixels of the same shade at UHD, which is quite pointless. The goal of UHD will be realized when we start getting more detailed textures made for it, and such textures are going to occupy a lot of VRAM.
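To put rough numbers on that (a back-of-envelope sketch; the RGBA8 format, the ~33% mipmap overhead and the texture sizes are my illustrative assumptions, not figures from any real game):

```python
# Illustrative arithmetic: pixel count from FHD to UHD, and how an
# uncompressed texture's footprint grows with its resolution.

def pixels(width, height):
    return width * height

fhd = pixels(1920, 1080)
uhd = pixels(3840, 2160)
print(uhd / fhd)  # 4.0 -- four times the pixels to fill at UHD

# Assume 4 bytes per texel (RGBA8) plus ~33% overhead for the mipmap chain.
def texture_mib(width, height, bytes_per_texel=4, mip_overhead=1.33):
    return width * height * bytes_per_texel * mip_overhead / 2**20

print(round(texture_mib(2048, 2048)))  # ~21 MiB for a single 2K texture
print(round(texture_mib(4096, 4096)))  # ~85 MiB at 4K -- 4x per resolution step
```

Under those assumptions, a few dozen unique 4K-class textures alone can approach the 4 GB mark before geometry, render targets and everything else are counted.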

No amount of PR about driver optimizations, texture compression and the high bandwidth offered by HBM can sidestep the fact that 4 GB is not going to be enough in the long run once higher-resolution textures come into the picture. It may be true that VRAM is not being used efficiently today and that there are ways to optimize the drivers to make allocations better, but that applies only to the present situation, while there is still headroom for such optimizations. Once truly UHD-optimized games with higher-resolution textures start coming out, they simply will not fit in 4 GB of VRAM, and once swapping from main memory starts, we all know that bottleneck is going to kill performance.
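The swapping bottleneck is visible from published peak rates alone (a rough comparison; these are theoretical peaks, and sustained throughput is lower on both sides):

```python
# Why spilling out of VRAM hurts: any data that misses the 4 GB of HBM
# must come over the PCIe bus at a small fraction of local-memory speed.

hbm_bw_gbs = 512.0     # Fury X HBM peak bandwidth, GB/s (published spec)
pcie3_x16_gbs = 15.75  # PCIe 3.0 x16 peak one-way rate, GB/s

ratio = hbm_bw_gbs / pcie3_x16_gbs
print(f"bus transfers are ~{ratio:.0f}x slower than VRAM reads")  # ~33x
```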

Either AMD is banking on most multi-platform games not yet offering super-high-resolution textures for UHD in their PC versions, or, for the games that do, on being able to compromise texture quality at the driver level or ask the developer to fall back to lower-resolution textures for these cards.

Even if the card has the raw compute power for UHD, they have crippled it by pairing it with 4 GB of HBM. I would much rather have had them pair it with 8 GB of GDDR5 instead, and save HBM until they could ship 8 GB configurations. Pair it with 8 GB of GDDR5, replace the water cooling with a regular air-cooled design, market it at $500 or even $550, and this card would have killed the 980 and 980 Ti on value for money. This was never a UHD-ready card to begin with; I consider the 390X more of a UHD-ready card than the Fury X.

There was never any need for them to be king of the hill in terms of performance; they could have claimed the crown in value for money, as they have done in the past. My last three GPU purchases, and most of my GPU purchases overall, were AMD for this reason, but this time I went for a GTX 980, after 10 years of not using an nVidia card, because it offered more value for my money when I bought it. Personally, I never cared about PhysX or any of the nVidia-specific stuff, but the overall value justified the purchase.

Can the Fury X compete with the 980 Ti at the same price point? Not unless it beats the 980 Ti in 95% of games by at least a 7.5~10% performance margin, which is currently not the case. nVidia currently owns 74% of the gaming GPU market, and a vast majority of AAA titles use GameWorks features that add value for nVidia GPU users; it doesn't help that AMD fails to optimize their drivers and the games for their own GPUs, resulting in inferior performance. Personally, I think all this talk about GameWorks being some sort of cheating or blocking out AMD is nothing more than nonsense, or a case of sour grapes. AMD should be proactive and responsible for working with game developers to tweak their games for their GPUs. Further, with 6 GB of VRAM and an HDMI 2.0 port, the 980 Ti is somewhat more UHD-ready than the Fury X.

AMD should drop the price by $100 to compete, and maybe even make a Fury X version without the water cooling (even more preferable would be to drop the 4 GB of HBM in favour of 8 GB of GDDR5).
 
Last edited by a moderator:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Personally, the main problem I see with this card is the use of HBM memory. HBM, with its current technical limitation of 4 GB, is not ready for prime time on a flagship card. AMD pitched this as the pinnacle of cards for UHD, but did nothing to support that claim. The lack of an HDMI 2.0 port is one shortcoming, but the more important one is the 4 GB of HBM they put on this card.

What is the actual driving force behind wanting UHD gaming? It is not just the screen resolution. When we have four times the pixel count of FHD, we want to use those extra pixels to present more detail, and that comes from higher-resolution, more detailed textures. What we are seeing as UHD gaming today is just games being run at UHD resolution with the same textures that were designed for FHD. In place of one pixel at FHD, we get four pixels of the same shade at UHD, which is quite pointless. The goal of UHD will be realized when we start getting more detailed textures made for it, and such textures are going to occupy a lot of VRAM.

No amount of PR about driver optimizations, texture compression and the high bandwidth offered by HBM can sidestep the fact that 4 GB is not going to be enough in the long run once higher-resolution textures come into the picture. It may be true that VRAM is not being used efficiently today and that there are ways to optimize the drivers to make allocations better, but that applies only to the present situation, while there is still headroom for such optimizations. Once truly UHD-optimized games with higher-resolution textures start coming out, they simply will not fit in 4 GB of VRAM, and once swapping from main memory starts, we all know that bottleneck is going to kill performance.

Either AMD is banking on most multi-platform games not yet offering super-high-resolution textures for UHD in their PC versions, or, for the games that do, on being able to compromise texture quality at the driver level or ask the developer to fall back to lower-resolution textures for these cards.

Even if the card has the raw compute power for UHD, they have crippled it by pairing it with 4 GB of HBM. I would much rather have had them pair it with 8 GB of GDDR5 instead, and save HBM until they could ship 8 GB configurations. Pair it with 8 GB of GDDR5, replace the water cooling with a regular air-cooled design, market it at $500 or even $550, and this card would have killed the 980 and 980 Ti on value for money. This was never a UHD-ready card to begin with; I consider the 390X more of a UHD-ready card than the Fury X.

There was never any need for them to be king of the hill in terms of performance; they could have claimed the crown in value for money, as they have done in the past. My last three GPU purchases, and most of my GPU purchases overall, were AMD for this reason, but this time I went for a GTX 980, after 10 years of not using an nVidia card, because it offered more value for my money when I bought it. Personally, I never cared about PhysX or any of the nVidia-specific stuff, but the overall value justified the purchase.

Can the Fury X compete with the 980 Ti at the same price point? Not unless it beats the 980 Ti in 95% of games by at least a 7.5~10% performance margin, which is currently not the case. nVidia currently owns 74% of the gaming GPU market, and a vast majority of AAA titles use GameWorks features that add value for nVidia GPU users; it doesn't help that AMD fails to optimize their drivers and the games for their own GPUs, resulting in inferior performance. Personally, I think all this talk about GameWorks being some sort of cheating or blocking out AMD is nothing more than nonsense, or a case of sour grapes. AMD should be proactive and responsible for working with game developers to tweak their games for their GPUs. Further, with 6 GB of VRAM and an HDMI 2.0 port, the 980 Ti is somewhat more UHD-ready than the Fury X.

AMD should drop the price by $100 to compete, and maybe even make a Fury X version without the water cooling (even more preferable would be to drop the 4 GB of HBM in favour of 8 GB of GDDR5).
I suspect memory bandwidth has very little to do with why it turned out the way it did. Given the performance numbers, I think it's not unreasonable to say that the R9 Fury X is starved for ROPs.
 
Joined
Jun 26, 2015
Messages
4 (0.00/day)
Location
India
System Name NEMESIS
Processor Core i5 2500k
Motherboard Asus P8Z68 Deluxe GEN3
Cooling Thermalright Ultra 120 Extreme
Memory 4 x 4 GB GSkill DDR3
Video Card(s) Gigabyte G1 GTX980 OC
Storage Sandisk Extreme Pro 480GB + 10 TB HDD
Case Xigmatek Elysium
Audio Device(s) Onboard
Power Supply Corsair HX1000
Mouse Logitech G600
Keyboard Corsair Vengeance K90
Software Windows 7 SP1
^^ Yes, raw performance would definitely benefit from more ROPs. But my argument was specifically about the quantity of memory they put in because they forced themselves to use HBM. They have essentially crippled the card by choosing HBM, with its current technical limitation of 4 GB, over a larger amount of GDDR5.

HBM will actually make sense when games have vast amounts of textures and other data to deal with, where higher bandwidth helps move or manipulate it faster. Having to reduce the quantity of memory because of opting for HBM kind of defeats the purpose of having all that bandwidth in the first place. They should have reserved its use for the next refresh, when they would have had larger HBM configurations like 8 GB or 12 GB.

I highly doubt that the current performance numbers at sub-FHD or QHD resolutions would have been any different if GDDR5 had been used in place of HBM.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
^^ Yes, raw performance would definitely benefit from more ROPs. But my argument was specifically about the quantity of memory they put in because they forced themselves to use HBM. They have essentially crippled the card by choosing HBM, with its current technical limitation of 4 GB, over a larger amount of GDDR5.
The only problem with that scenario is that Fiji would be a totally different GPU. The only way AMD could get 4096 ALUs into the chip was that a large amount of the die space usually reserved for GDDR5 memory controllers and I/O could instead be devoted to the core(s). If Fiji had used GDDR5, it would just have made sacrifices elsewhere: the increased power demand of GDDR5 would have meant clocking the GPU lower to compensate. A single Tonga chip has a 384-bit bus feeding 2048 cores/128 TAU/32 ROP. Doubling the core components but only increasing the bus width by a third (to 512-bit) starves the GPU of bandwidth, while going any larger puts the die well outside manufacturability due to the size limits of the lithography.
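The bandwidth half of that trade-off is simple arithmetic (the per-pin rates below are representative of 2015-era parts; the 512-bit GDDR5 Fiji is a hypothetical for illustration, not a real card):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.

def bandwidth_gbs(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

# 290X-style 512-bit GDDR5 at 5 Gbps per pin:
print(bandwidth_gbs(5.0, 512))   # 320.0 GB/s
# Fury X: 4096-bit HBM1 at 1 Gbps per pin:
print(bandwidth_gbs(1.0, 4096))  # 512.0 GB/s
```

In other words, at 2015-era GDDR5 clocks even a hypothetical 512-bit Fiji would have shipped with noticeably less bandwidth than the HBM configuration, on top of the extra board power and die area for the wider PHY.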
HBM will actually make sense when games have vast amounts of textures and other data to deal with, where higher bandwidth helps move or manipulate it faster.
Well, that's just AMD being ahead of the curve. There's always a price to pay when a new technology arrives and its first iteration is not appreciably better than the incumbent. Put it down to the price of progress. AMD needed high bandwidth and low latency for HSA and to compete with Intel's own eDRAM and HMC roadmap. There is way more at stake here than just a consumer graphics card and AMD had to commit to HBM to accelerate its development.
 
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
They have essentially crippled the card by choosing to go with HBM with its current technical limitation of 4GB instead of higher amount of GDDR.
No, they did not. On paper it looks like it, but, like NVIDIA with their "ZOMG 256-bit bus, what are you doing" moment, they have better compression algos. Not to mention they looked very deeply at vRAM allocation and found, according to AMD, that 70% of vRAM is not used efficiently... so, it's fine. You can also see that it tends to pull away from the 980 Ti at higher resolutions compared to low ones. If vRAM were a limit, that wouldn't be happening. Look at the 780 Ti vs 290X for example. :)
 
Joined
Jun 26, 2015
Messages
4 (0.00/day)
Location
India
System Name NEMESIS
Processor Core i5 2500k
Motherboard Asus P8Z68 Deluxe GEN3
Cooling Thermalright Ultra 120 Extreme
Memory 4 x 4 GB GSkill DDR3
Video Card(s) Gigabyte G1 GTX980 OC
Storage Sandisk Extreme Pro 480GB + 10 TB HDD
Case Xigmatek Elysium
Audio Device(s) Onboard
Power Supply Corsair HX1000
Mouse Logitech G600
Keyboard Corsair Vengeance K90
Software Windows 7 SP1
^^ That is because none of these games has properly hit the VRAM limit so far. We currently don't have many games that max out 4 GB of VRAM, but 6 months down the line, there will be new games that do just that.

Further, this issue of inefficient VRAM usage applies only to an extent. Are they seriously trying to tell us that each and every game wastes 70% of the memory it allocates? That is more than likely the worst-case scenario. Also, is AMD really going to hand-tweak VRAM allocations for each individual game in their drivers? We all know how well AMD is handling game-specific driver tweaking currently. If AMD is so confident that their optimizations will let the Fury X run with just 4 GB where another card would require 5 or 6 GB, why didn't they release the 390X with 4 GB and use those optimizations, instead of putting 8 GB of VRAM on those cards? Nothing in their claims of inefficiency, or in their strategy for optimization, suggests that it has to be specific to a memory technology. If inefficient memory usage is an issue and they have a driver-level solution for it, they should be able to apply it to GDDR-based cards as well.

Lastly, what about the case where a game's compressed textures and other data go above the 4 GB limit, beyond the reach of any optimization? That would not be difficult to hit for a game with very high-resolution textures used at UHD resolutions.

One reviewer tested GTA V at UHD after tweaking the settings so that VRAM usage went over 4 GB, and the Fury X dropped in performance severely compared to the 980 Ti at the same settings. Once he tweaked the settings to push VRAM above 6 GB, the 980 Ti also dropped in performance severely.
 
Joined
Sep 22, 2012
Messages
1,010 (0.23/day)
Location
Belgrade, Serbia
System Name Intel® X99 Wellsburg
Processor Intel® Core™ i7-5820K - 4.5GHz
Motherboard ASUS Rampage V E10 (1801)
Cooling EK RGB Monoblock + EK XRES D5 Revo Glass PWM
Memory CMD16GX4M4A2666C15
Video Card(s) ASUS GTX1080Ti Poseidon
Storage Samsung 970 EVO PLUS 1TB /850 EVO 1TB / WD Black 2TB
Display(s) Samsung P2450H
Case Lian Li PC-O11 WXC
Audio Device(s) CREATIVE Sound Blaster ZxR
Power Supply EVGA 1200 P2 Platinum
Mouse Logitech G900 / SS QCK
Keyboard Deck 87 Francium Pro
Software Windows 10 Pro x64
Fury X is competitive against the 980 Ti at very high resolutions. Remember, AMD hasn't enabled DX11 MT drivers on Windows 8.1.

I'd rather see benchmarks done on Windows 10; e.g. Project Cars gets a frame-rate uplift on the R9 280 under Windows 10. Windows 10 forces DX11 MT.


I can't agree with that, and most customers won't believe it either. They simply want more video memory on paper... I'm afraid they won't take AMD's word that 4 GB is enough for the future.
Imagine someone planning to buy a 1440p monitor, for example the ASUS 3800R... 4 GB is suicide. Better then to wait one more year, when AMD offers a Fury X with 8 GB. They always do rebrands.
AMD's chance is the R9 390X 8 GB: not as a single card, but for multi-GPU, where that configuration becomes better than GTX 980 SLI. Maybe the GTX 980 is better if someone has a 600 W PSU and 1080p and no plans for more.
But for anything beyond that, Hawaii CF is better; the only problem is the price. If 4 GB is not enough for the Fury X, then the situation is even worse for the GTX 980, and that segment holds the biggest part of the gaming community: people who pay under $500, or rather $300-400, for a graphics card, and who will want more than 4 GB. That's AMD's chance, because nVidia can only offer them something at $600-650 if they want more than 4 GB. But believing that the Fury X will hold up at higher resolutions without FPS drops for the next two years is a very bad bet, especially since AMD owners usually keep their graphics cards longer than nVidia customers do. The situation is even worse because nVidia dictates to developers what to do: the 980 Ti has 6 GB, they plan 8 GB for Pascal, and they will push games that use more video memory to drive people to upgrade, and towards the Titan X. nVidia surely plans to offer an efficient Pascal with HBM and 8 GB of video memory, only a little stronger than the Titan X but very expensive, with the 8 GB as the reason to upgrade from the GTX 980 Ti; what are people to do with 4 GB? New cards from AMD won't come for 18 months.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,241 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
This GPU with 8 GB 512-bit GDDR5 would've been 375-400W typical board power (educated guess). GCN 1.1 to GCN 1.2 wasn't as big a perf/Watt leap as Kepler to Maxwell. That's probably why this whole HBM adventure was unavoidable.

Edit. Now I'm really curious to know what this GPU would have been like with 8 GB 512-bit GDDR5. :(
 
Last edited:
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
^^ That is because none of these games has properly hit the VRAM limit so far. We currently don't have many games that max out 4 GB of VRAM, but 6 months down the line, there will be new games that do just that.

Further, this issue of inefficient VRAM usage applies only to an extent. Are they seriously trying to tell us that each and every game wastes 70% of the memory it allocates? That is more than likely the worst-case scenario. Also, is AMD really going to hand-tweak VRAM allocations for each individual game in their drivers? We all know how well AMD is handling game-specific driver tweaking currently. If AMD is so confident that their optimizations will let the Fury X run with just 4 GB where another card would require 5 or 6 GB, why didn't they release the 390X with 4 GB and use those optimizations, instead of putting 8 GB of VRAM on those cards? Nothing in their claims of inefficiency, or in their strategy for optimization, suggests that it has to be specific to a memory technology. If inefficient memory usage is an issue and they have a driver-level solution for it, they should be able to apply it to GDDR-based cards as well.

Lastly, what about the case where a game's compressed textures and other data go above the 4 GB limit, beyond the reach of any optimization? That would not be difficult to hit for a game with very high-resolution textures used at UHD resolutions.

One reviewer tested GTA V at UHD after tweaking the settings so that VRAM usage went over 4 GB, and the Fury X dropped in performance severely compared to the 980 Ti at the same settings. Once he tweaked the settings to push VRAM above 6 GB, the 980 Ti also dropped in performance severely.
There are plenty of games that will smash 4 GB at 4K resolution. ;)

The issue isn't at the game level, it's at the API level, so it's game-agnostic from what they said. As for the second part of that paragraph, the 390X is a 290X with higher clocks and 8 GB; it doesn't have the new compression algos, AFAIK. I was at the release in L.A. and talked with them about this. :)

As for compressed textures... it would do the same thing any other card does, I would imagine, in that it 'pages out' to system RAM. And yes, that performance drop after running out of vRAM is normal... I believe I am missing your point there.

Edit. Now I'm really curious to know what this GPU would have been like with 8 GB 512-bit GDDR5.
Remarkably similar. Memory bandwidth isn't really a limiting factor until 4K and above. Power draw would have been a bit higher, though not as much as the estimates above guessed.
 
Joined
Oct 19, 2007
Messages
8,258 (1.32/day)
Processor Intel i9 9900K @5GHz w/ Corsair H150i Pro CPU AiO w/Corsair HD120 RBG fan
Motherboard Asus Z390 Maximus XI Code
Cooling 6x120mm Corsair HD120 RBG fans
Memory Corsair Vengeance RBG 2x8GB 3600MHz
Video Card(s) Asus RTX 3080Ti STRIX OC
Storage Samsung 970 EVO Plus 500GB , 970 EVO 1TB, Samsung 850 EVO 1TB SSD, 10TB Synology DS1621+ RAID5
Display(s) Corsair Xeneon 32" 32UHD144 4K
Case Corsair 570x RBG Tempered Glass
Audio Device(s) Onboard / Corsair Virtuoso XT Wireless RGB
Power Supply Corsair HX850w Platinum Series
Mouse Logitech G604s
Keyboard Corsair K70 Rapidfire
Software Windows 11 x64 Professional
Benchmark Scores Firestrike - 23520 Heaven - 3670
I'm not about to click through 13 pages, but has @xfia tried to come in and defend AMD yet? :laugh:

In all seriousness though, as much as I like nVIDIA for my video cards (not a fanboy; I just like their features like ShadowPlay, their drivers are just all-around better and more plentiful, and I will go to either side of the fence if the performance is there), when will people realize that AMD just isn't what they were 10-12 years ago? They just cannot spank Intel/nVIDIA like they used to.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I'm not about to click through 13 pages, but has @xfia tried to come in and defend AMD yet? :laugh:

In all seriousness though, as much as I like nVIDIA for my video cards (not a fanboy; I just like their features like ShadowPlay, their drivers are just all-around better and more plentiful, and I will go to either side of the fence if the performance is there), when will people realize that AMD just isn't what they were 10-12 years ago? They just cannot spank Intel/nVIDIA like they used to.
Stop trying to start an argument, you're baiting him and you know it. :mad:
 
Joined
Jun 26, 2015
Messages
4 (0.00/day)
Location
India
System Name NEMESIS
Processor Core i5 2500k
Motherboard Asus P8Z68 Deluxe GEN3
Cooling Thermalright Ultra 120 Extreme
Memory 4 x 4 GB GSkill DDR3
Video Card(s) Gigabyte G1 GTX980 OC
Storage Sandisk Extreme Pro 480GB + 10 TB HDD
Case Xigmatek Elysium
Audio Device(s) Onboard
Power Supply Corsair HX1000
Mouse Logitech G600
Keyboard Corsair Vengeance K90
Software Windows 7 SP1
My point is that paging from system RAM is more likely to happen sooner on a card with 4 GB of VRAM than on one with 6, 8 or 12 GB, regardless of compression techniques and optimizations. Optimizing resource utilization is definitely important, and I don't think nVidia ignores it either, but there is a limit to how much can be achieved through sheer optimization. Optimizations are not always a replacement for having a sufficient amount of a resource.

Console games are often heavily optimized for the hardware they run on. But the smaller quantity of resources like VRAM means that, after a point, they also end up having to make compromises that impact the final quality.
 
Joined
Oct 19, 2007
Messages
8,258 (1.32/day)
Processor Intel i9 9900K @5GHz w/ Corsair H150i Pro CPU AiO w/Corsair HD120 RBG fan
Motherboard Asus Z390 Maximus XI Code
Cooling 6x120mm Corsair HD120 RBG fans
Memory Corsair Vengeance RBG 2x8GB 3600MHz
Video Card(s) Asus RTX 3080Ti STRIX OC
Storage Samsung 970 EVO Plus 500GB , 970 EVO 1TB, Samsung 850 EVO 1TB SSD, 10TB Synology DS1621+ RAID5
Display(s) Corsair Xeneon 32" 32UHD144 4K
Case Corsair 570x RBG Tempered Glass
Audio Device(s) Onboard / Corsair Virtuoso XT Wireless RGB
Power Supply Corsair HX850w Platinum Series
Mouse Logitech G604s
Keyboard Corsair K70 Rapidfire
Software Windows 11 x64 Professional
Benchmark Scores Firestrike - 23520 Heaven - 3670
Stop trying to start an argument, you're baiting him and you know it. :mad:
I'm actually not trying to bait anyone. It is my legitimate viewpoint.
 
Joined
Feb 18, 2011
Messages
1,259 (0.25/day)
3DMark's API overhead benchmark also tests the GPU's command I/O ports and pathways
Yes, but how is that relevant to, or an argument against, what I said? Command ports and pathways are needed to test the overhead of the API.

As I said, AMD cards will see a bigger speed increase from DX12 (obviously, since Microsoft makes it and has not one but two consoles on the market with AMD GPUs), but we just don't know yet (or those who do are still under NDA) how much that speed increase will transfer to real-life performance in games (so not talking about benchmarks here, but actual game performance)... and, I believe, those who think DX12 will magically make their GPU twice as fast are going to get a rough wake-up call.
 
Joined
Oct 22, 2014
Messages
14,099 (3.82/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
My point is that paging from system RAM is more likely to happen sooner on a card with 4 GB of VRAM than on one with 6, 8 or 12 GB, regardless of compression techniques and optimizations. Optimizing resource utilization is definitely important, and I don't think nVidia ignores it either, but there is a limit to how much can be achieved through sheer optimization. Optimizations are not always a replacement for having a sufficient amount of a resource.

Console games are often heavily optimized for the hardware they run on. But the smaller quantity of resources like VRAM means that, after a point, they also end up having to make compromises that impact the final quality.
And my point is that, with their optimizations, that point doesn't arrive as fast as you seem to expect. After a point you are correct, but because of their optimizations, that point is further down the line than in their previous generations. ;)
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.81/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
My point is that paging from system RAM is more likely to happen sooner on a card with 4 GB of VRAM than on one with 6, 8 or 12 GB, regardless of compression techniques and optimizations. Optimizing resource utilization is definitely important, and I don't think nVidia ignores it either, but there is a limit to how much can be achieved through sheer optimization. Optimizations are not always a replacement for having a sufficient amount of a resource.

Console games are often heavily optimized for the hardware they run on. But the smaller quantity of resources like VRAM means that, after a point, they also end up having to make compromises that impact the final quality.
That's why I only recently started having issues with only 1 GB on my 6870s at 1080p, right? While I think you're right, I also think you're wrong. More VRAM is most definitely getting used, and going to system memory for textures is a little different: if the textures in memory aren't accessed often, you may be running at 60 FPS (like I am in Elite Dangerous), but occasionally get a blip because something was either used or got paged in while something else paged out. It's still playable, but it's sometimes annoying if it happens at just the wrong time. It also depends on the game.

Either way, when push comes to shove, 4 GB is plenty now and will probably stay that way for at least a couple of years, unless you're one of those people who shove AA as high as it can go on a shiny new 4K display, in which case I can't say you're the typical gamer. :)
 
Joined
Jul 5, 2008
Messages
337 (0.06/day)
System Name Roxy
Processor i7 5930K @ 4.5GHz (167x27 1.35V)
Motherboard X99-A/USB3.1
Cooling Barrow Infinity Mirror, EK 45x420mm, EK X-Res w 10W DDC
Memory 2x16GB Patriot Viper 3600 @3333 16-20-20-38
Video Card(s) XFX 5700 XT Thicc III Ultra
Storage Sabrent Rocket 2TB, 4TB WD Mechanical
Display(s) Acer XZ321Q (144Mhz Freesync Curved 32" 1080p)
Case Modded Cosmos-S Red, Tempered Glass Window, Full Frontal Mesh, Black interior
Audio Device(s) Soundblaster Z
Power Supply Corsair RM 850x White
Mouse Logitech G403
Keyboard CM Storm QuickFire TK
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/e5uz5f
I wanted to want this card, but I think AMD should have had two SKUs: one with the CLLC and one with a full-cover, single-slot waterblock.

How well the card competes with the 12 GB Titan X at 4K shows that (for now at least) the extra 8 GB are just for show.
 
Joined
Apr 30, 2012
Messages
3,881 (0.84/day)
 
Joined
Apr 19, 2013
Messages
296 (0.07/day)
System Name Darkside
Processor R7 3700X
Motherboard Aorus Elite X570
Cooling Deepcool Gammaxx l240
Memory Thermaltake Toughram DDR4 3600MHz CL18
Video Card(s) Gigabyte RX Vega 64 Gaming OC
Storage ADATA & WD 500GB NVME PCIe 3.0, many WD Black 1-3TB HD
Display(s) Samsung C27JG5x
Case Thermaltake Level 20 XL
Audio Device(s) iFi xDSD / micro iTube2 / micro iCAN SE
Power Supply EVGA 750W G2
Mouse Corsair M65
Keyboard Corsair K70 LUX RGB
Benchmark Scores Not sure, don't care
And still my HD 7950s in CFX perform better than the Fury X......
 
Joined
May 14, 2012
Messages
891 (0.19/day)
Location
US
Processor AMD Ryzen 5 1600X
Motherboard AsRock X370 Taichi
Cooling Corsair H60 Liquid Cooling
Memory 16 GB CORSAIR Vengeance LPX 3000 Mhz (Running at 2933)
Video Card(s) EVGA FTW2 GTX 1070Ti
Storage 740GB of SSDs, 7 TB's of HDDs
Display(s) LG 27UD58P-B 27” IPS 4K
Case Phanteks Enthos Pro M
Audio Device(s) Integrated
Power Supply EVGA 750 P2
Mouse Mionix Naos 8200
Keyboard G Skill Ripjaws RGB Mechanical Keyboard
Software Windows 10 Pro
I know the Fury X is slower than the GTX 980 Ti, but for the price you do get a water cooler. I'd say that isn't that bad, taking that into account.
 
Joined
Apr 16, 2010
Messages
2,070 (0.39/day)
System Name iJayo
Processor i7 14700k
Motherboard Asus ROG STRIX z790-E wifi
Cooling Pearless Assasi
Memory 32 gigs Corsair Vengence
Video Card(s) Nvidia RTX 2070 Super
Storage 1tb 840 evo, Itb samsung M.2 ssd 1 & 3 tb seagate hdd, 120 gig Hyper X ssd
Display(s) 42" Nec retail display monitor/ 34" Dell curved 165hz monitor
Case O11 mini
Audio Device(s) M-Audio monitors
Power Supply LIan li 750 mini
Mouse corsair Dark Saber
Keyboard Roccat Vulcan 121
Software Window 11 pro
Benchmark Scores meh... feel me on the battle field!



Not talking about any electric bill either..... that thing is borderline evil..... gotta know what kinda numbers it puts out........:rockout:
 
Last edited:
Joined
Apr 26, 2009
Messages
517 (0.09/day)
Location
You are here.
System Name Prometheus
Processor Intel i7 14700K
Motherboard ASUS ROG STRIX B760-I
Cooling Noctua NH-D12L
Memory Corsair 32GB DDR5-7200
Video Card(s) MSI RTX 4070Ti Ventus 3X OC 12GB
Storage WD Black SN850 1TB
Display(s) DELL U4320Q 4K
Case SSUPD Meshroom D Fossil Gray
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Corsair SF750 Platinum SFX
Mouse Razer Orochi V2
Keyboard Nuphy Air75 V2 White
Software Windows 11 Pro x64
These days, 3DMark doesn't say anything anymore. In actual games, I think a fourth card usually means worse performance than three, and sometimes even than two cards!
 