System Name | Good enough |
---|---|
Processor | AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge |
Motherboard | ASRock B650 Pro RS |
Cooling | 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30 |
Memory | 32GB - FURY Beast RGB 5600 MHz |
Video Card(s) | Sapphire RX 7900 XT - Alphacool Eisblock Aurora |
Storage | 1x Kingston KC3000 1TB, 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB, 1x Samsung 860 EVO 500GB |
Display(s) | LG UltraGear 32GN650-B + 4K Samsung TV |
Case | Phanteks NV7 |
Power Supply | GPS-750C |
I'm speaking from the overall benchmarks I've seen so far. Let's treat the 3080 and 6800 XT as being in the same performance ballpark, and 1440p as our control environment.
1) In games where the 6800 XT leads the 3080 at 1440p, at 4K the 3080 either takes the lead or the difference is minuscule.
2) In games where the 6800 XT is a hair slower than the 3080 at 1440p, at 4K the 3080's lead grows.
The 6800, on the other hand, is faster than the 3070 across the board, but at 4K the gap narrows.
That's not the proper way to figure out whether a GPU runs short of memory bandwidth as resolution increases.
The 6800 and 6800 XT have exactly the same memory bandwidth, yet at 4K the XT is 15% faster while at 1440p it's only 12% faster. The XT scaled better, not worse, which is the opposite of what you'd expect if it were actually bandwidth-starved. That's partly because games are more CPU-bound at 1440p, but it's fairly obvious there is nothing out of the ordinary here that would suggest otherwise.
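A quick sketch of the comparison being made above. The FPS numbers here are hypothetical placeholders; only the 12% and 15% gaps match the figures in the post. The point is the direction of the change: with identical memory bandwidth on both cards, a bandwidth-starved XT would see its lead shrink at 4K, not grow.

```python
def gap_pct(fast_fps: float, slow_fps: float) -> float:
    """Percentage lead of the faster card over the slower one."""
    return (fast_fps / slow_fps - 1) * 100

# Hypothetical average FPS; both cards share the same memory bandwidth.
fps_1440p = {"6800 XT": 168.0, "6800": 150.0}  # ~12% gap
fps_4k    = {"6800 XT": 103.5, "6800": 90.0}   # ~15% gap

gap_1440p = gap_pct(fps_1440p["6800 XT"], fps_1440p["6800"])
gap_4k    = gap_pct(fps_4k["6800 XT"], fps_4k["6800"])

# If memory bandwidth were the bottleneck at 4K, the XT's lead
# should shrink there; instead it widens, as the post argues.
print(f"1440p gap: {gap_1440p:.0f}%")  # 12%
print(f"4K gap:    {gap_4k:.0f}%")     # 15%
```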