
AMD Radeon RX 6800

Joined
Jun 19, 2019
Messages
220 (0.11/day)
Ohh, the performance/watt numbers are different now. I have to re-read the test and rethink everything.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
28,345 (3.73/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
Ohh, the performance/watt numbers are different now. I have to re-read the test and rethink everything.
Indeed :) What's the outcome of your reconsideration? Personally I don't think it changes much
 
Joined
Nov 18, 2020
Messages
39 (0.02/day)
Location
Arad, Romania
Processor i9-10850K @ 125W Power Limit
Motherboard ASUS TUF Gaming Z590-PLUS
Cooling Noctua NH-D15S
Memory Kingston KF432C16RBK2/64
Video Card(s) ASUS RTX 3070 TUF GAMING O8G @ 950mV / 2010MHz
Storage Samsung 970 EVO Plus 2TB + Kingston KC3000 2TB + Samsung 860 EVO 2TB + Samsung 870 EVO 4TB
Display(s) ASUS PB287Q + DELL S2719DGF
Case FRACTAL Define 7 Dark TG
Audio Device(s) integrated + Microlab FC330 / Audio-Technica ATH-M50s/LE
Power Supply Seasonic PRIME TX-650, 80+ Titanium, 650W
Mouse SteelSeries Rival 600
Keyboard Corsair K70 RGB TKL – CHERRY MX SPEED
This review has been updated with new gaming power consumption numbers for RTX 2080 Ti, RTX 3070, RTX 3080, RTX 3090, RX 6800 and RX 6800 XT. For these cards I'm now running Metro at 1440p and not 1080p, to ensure proper loading.

The perf/W charts have been updated too, as have the relevant texts.
Now, looking at the new numbers, it's even clearer to me what my problem with Ampere is.
At 1440p, the 3080 is 1% more efficient than the 2080 Ti, and at 4K it is 7-8% more efficient (using the 1440p gaming power numbers; 4K power numbers wouldn't have changed the picture much, I suppose).
Jumping from 1080p to 1440p for the gaming power measurements increased the average consumption of the 2080 Ti by only 2 Watts (273 W -> 275 W), but the 3080 jumped 36 Watts (303 W -> 339 W). This is more realistic in my opinion, because at 1080p the 3080 was CPU-limited to a greater extent than the 2080 Ti.
This speaks volumes about how efficient Turing was (and Pascal indirectly), even if overpriced and not that popular (in my opinion).
Don't get me wrong, the 3080 delivers great performance (at 4K at least) and is a real upgrade, but at what cost?! At 1440p, performance scales directly with power, which is not a great feat for a new(ish) node pushed to the limit.
AMD did a much better job with RDNA2 this time. It reminds me of the efficiency jump from Kepler to Maxwell, and neither transition involved a node shrink (Kepler -> Maxwell / RDNA -> RDNA2)!
We need 5 nm to hopefully see high-performance sub-250 W cards again, if sub-250 W will even be a thing in the future, given these increasing power consumption numbers.
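To put the perf/W arithmetic in one place, here is a minimal Python sketch. The wattages are the updated figures above; the FPS values are just placeholders I picked to illustrate the math, not measured results.

Code:
# Perf/W comparison sketch; wattages from the updated review,
# FPS values are illustrative placeholders, not measured data.
cards = {
    # name: (average_fps_1440p, average_gaming_watts)
    "RTX 2080 Ti": (100.0, 275.0),
    "RTX 3080":    (125.0, 339.0),
}

def perf_per_watt(fps, watts):
    """Frames per second delivered per watt consumed."""
    return fps / watts

baseline = perf_per_watt(*cards["RTX 2080 Ti"])
for name, (fps, watts) in cards.items():
    eff = perf_per_watt(fps, watts)
    print(f"{name}: {eff:.4f} FPS/W ({(eff / baseline - 1) * 100:+.1f}% vs 2080 Ti)")

With these placeholder numbers the 3080 comes out about 1% ahead, which is exactly the kind of margin the updated 1440p charts show.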
 
Joined
Jun 29, 2009
Messages
2,012 (0.35/day)
Location
Heart of Eutopia!
System Name ibuytheusedstuff
Processor 5960x
Motherboard x99 sabertooth
Cooling old socket775 cooler
Memory 32 Viper
Video Card(s) 1080ti on morpheus 1
Storage raptors+ssd
Display(s) acer 120hz
Case open bench
Audio Device(s) onb
Power Supply antec 1200 moar power
Mouse mx 518
Keyboard roccat arvo
I would not have thought that the 6800 is so close to the 6800 XT. Now I'm waiting on a BIOS to flash it to the XT version, or is this laser-cut or whatever, so not flashable?
 
Joined
Jun 19, 2019
Messages
220 (0.11/day)
Indeed :) What's the outcome of your reconsideration? Personally I don't think it changes much

I prefer to buy the most efficient GPU. Going by your tests, I previously had a GeForce 1660 Ti and now a GeForce 3070, but I almost went and bought an RX 6800 after seeing the much better performance/watt number :D. With the newest figures showing only a small difference, I think I'm going to keep my 3070. With DLSS it has even better efficiency, and it will also receive Smart Access Memory.
 
Joined
Oct 10, 2018
Messages
943 (0.40/day)
Indeed :) What's the outcome of your reconsideration? Personally I don't think it changes much

I think your initial test results put the average gaming power consumption at around 160 W; with the update it is now just slightly above the RTX 3070's. It is really about choices in the end, but we do know the Ampere cards are great undervolters if someone wants to go down that road.
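True undervolting is done through the voltage/frequency curve editor in a tool like MSI Afterburner; a cruder command-line lever that also reins in consumption is capping the board power limit with nvidia-smi. A minimal sketch, where the 200 W target is just an example value:

Code:
# Sketch: cap board power as a crude stand-in for undervolting.
# Needs the NVIDIA driver's nvidia-smi tool and admin rights.
import subprocess

# Show current, default, and min/max enforceable power limits
subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)

# Set the software power limit to 200 W (example value,
# not persistent across reboots)
subprocess.run(["nvidia-smi", "-pl", "200"], check=True)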
 
Joined
Jan 16, 2020
Messages
29 (0.02/day)
Processor Intel Core Ultra 9 285K
Motherboard MSI MAG Z890 TOMAHAWK WIFI | BIOS 1.A71
Cooling Noctua NH-D15 G2 | ARCTIC MX-6
Memory G.Skill Trident Z5 RGB 96GB (2x48GB) DDR5 6400MHz | CL32-39-39-102-141-701-2T | 1,35V | Gear 2
Video Card(s) MSI GeForce RTX 5080 VANGUARD SOC 16GB
Storage Intel Optane 900P 280GB | WD Black 10TB WD101FZBX
Display(s) AOC AG274QZM 27“ 2560 x 1440 10bit 240Hz
Case Lian Li O11 Air Mini Black
Audio Device(s) Creative Sound Blaster AE-7 | Audio-Technica ATH-A990Z
Power Supply be quiet! Dark Power 13 750W
Mouse Logitech G MX518
Keyboard Logitech G213 Prodigy
Software Windows 11 Pro x64 24H2
I don't know if this is a driver issue or something completely different, but in this review, the image quality with ray tracing in Watch Dogs Legion on AMD cards is clearly different from that of NVIDIA cards. I am really curious whether some owner of an RX 6800 or RX 6800 XT could test if this is true and, if so, whether it applies to more titles. Thanks for any information.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
28,345 (3.73/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
I don't know if this is a driver issue or something completely different, but in this review, the image quality with ray tracing in Watch Dogs Legion on AMD cards is clearly different from that of NVIDIA cards. I am really curious whether some owner of an RX 6800 or RX 6800 XT could test if this is true and, if so, whether it applies to more titles. Thanks for any information.
AMD lists this in their "known issues" document; some reviewers chose to repro the issue and report on it. I'm sure there are many driver issues in RT, because all RT so far has been developed for and tested on NVIDIA only. I'm actually impressed existing games run at all.
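If anyone wants to quantify the difference rather than eyeball screenshots, here is a minimal sketch, assuming two same-resolution captures of the same scene and the Pillow + NumPy packages (the file names are placeholders):

Code:
# Sketch: quantify how different two same-scene screenshots are.
# File names are placeholders; both captures must share a resolution.
import numpy as np
from PIL import Image

a = np.asarray(Image.open("amd_rt.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("nvidia_rt.png").convert("RGB"), dtype=np.int16)

diff = np.abs(a - b)  # per-channel absolute difference
print("mean abs difference:", diff.mean())
print("pixels differing:", (diff.max(axis=2) > 8).mean() * 100, "%")  # 8 = noise threshold

# Save an amplified difference map for visual inspection
Image.fromarray(np.clip(diff * 8, 0, 255).astype(np.uint8)).save("diff.png")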
 
Joined
Jan 27, 2015
Messages
1,794 (0.49/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
Is there any reason that every single one of these cards seems to be a full-length, three-fan design?
 
Joined
Nov 22, 2020
Messages
111 (0.07/day)
Processor Ryzen 5 3600
Motherboard ASRock X470 Taichi
Cooling Scythe Kotetsu Mark II
Memory G.SKILL 32GB DDR4 3200 CL16
Video Card(s) EVGA GeForce RTX 3070 FTW3 Ultra (1980 MHz / 0.968 V)
Display(s) Dell P2715Q; BenQ EX3501R; Panasonic TC-P55S60
Case Fractal Design Define R5
Audio Device(s) Sennheiser HD580; 64 Audio 1964-Q
Power Supply Seasonic SSR-650TR
Mouse Logitech G700s; Logitech G903
Keyboard Cooler Master QuickFire TK; Kinesis Advantage
VR HMD Quest 2
i guess this thing is evolving to "what do we want to test for power?". i would definitely throw out "cold card". but the other 3 results are all valid in their own way. opinions?

This is super interesting, and reveals something that no other reviewer has noticed.

Most reviews only measure power consumption during an intense, fully GPU constrained task, using FurMark or whatever game seems to draw the most power when the framerate is uncapped. But that's not how most people use the card, and it may not represent the relative efficiency of these cards in normal use.

If the RX 6000 series can throttle power this well in moments of light load, then it's likely to benefit more than the RTX 3000 series when running vsynced or framecapped.

I would love to know: if you run, for example, the Doom Eternal 4K benchmark with a 120Hz or 100Hz frame cap or dynamic vsync (which roughly correspond to the RX 6800's 50%/95% uncapped framerates), does that measurably lower the RX 6800's average power consumption? Does it do the same for the RTX 3070?

The answer would be the deciding factor for which card I buy, since I want the smoothest performance I can get while staying under about 200W average thermal load.
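On the NVIDIA side this is already easy to measure yourself. Here is a minimal sketch that samples board power once per second through nvidia-smi while the capped benchmark loops; on AMD, a sensor logger such as GPU-Z or HWiNFO would serve the same purpose. The two-minute window is an arbitrary choice.

Code:
# Sketch: log average board power during a frame-capped benchmark run.
# Start the benchmark first, then run this alongside it.
import subprocess, time

samples = []
for _ in range(120):  # one sample per second for ~2 minutes
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"]
    )
    samples.append(float(out.decode().splitlines()[0]))  # first GPU only
    time.sleep(1)

print(f"average power: {sum(samples) / len(samples):.1f} W over {len(samples)} samples")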
 
Joined
Mar 18, 2015
Messages
2,970 (0.82/day)
Location
Long Island
Interesting product placement, though I don't know that it will find a home priced as it is .... If I was to look at the 3070 ... I don't see anything that would make it worthwhile to move up to the $580 price point ... and if I did, I'd be more inclined to go up to $650 or $700.


As to the extra VRAM, it's the proverbial "teats on a bull". In every case where multiple VRAM versions of a card have been issued .... yes, every single instance of comparison testing on a reliable web site, from the 6xx, 7xx, 9xx, and 10xx series where this has occurred, it has never been shown that the extra VRAM brought anything to the table. By the time you jack up the settings to a point where the VRAM difference is significant, you have outstripped the capabilities of the GPU .... a 33% increase in fps is meaningless when that increase is from 15 to 20 fps. While a small number of games, primarily poor console ports, have been exceptions, the exception, as the saying goes, proves the rule.


"There isn’t a lot of difference between the cards at 1920×1080 or at 2560×1600. We only start to see minimal differences at 5760×1080, and even so, there is rarely a frame or two difference. If we start to add even more AA, in most cases, the frame rates will drop to unplayable on both cards. This leaves five games out of 30 where a 4GB GTX 770 gives more than a 1 frame per second difference over the 2GB-equipped GTX 770. And one of them, Metro: Last Light still isn’t even quite a single frame difference.

Here is one last thing to note with Max Payne 3: It would not normally allow one to set 4xAA at 5760×1080 with any 2GB card as it claims to require 2750MB. However, when we replaced the 4GB GTX 770 with the 2GB version, the game allowed the setting. And there were no slowdowns, stuttering, nor any performance differences that we could find between the two GTX 770s."



"Some games won’t use much VRAM, no matter how much you offer them, while others are more opportunistic. This is critically important for our purposes, because there’s not an automatic link between the amount of VRAM a game is using and the amount of VRAM it actually requires to run. Our first article on the Fury X showed how Shadow of Mordor actually used dramatically more VRAM on the GTX Titan X as compared with the GTX 980 Ti, without offering a higher frame rate.

While we do see some evidence of a 4GB barrier on AMD cards that the NV hardware does not experience, provoking this problem in current-generation titles required us to use settings that rendered the games unplayable on any current GPU."


W1zzard obviously agrees here as he again echoes what others have said before when he writes:

"Much has been talked about VRAM size during NVIDIA's Ampere launch. The GeForce RTX 3080 has 10 GB VRAM, which I still think is plenty for today and the near future. Planning many years ahead with hardware purchases makes no sense, especially when it comes to VRAM—you'll run out of shading power long before memory becomes a problem. "

This kinda reminds me of the 960 / 380 era ... the 380X was $230-240ish ... the $60-70 jump to the 970 was just too small to ignore. There's nothing here to complain about; it's a solid card .... however, priced between the 3070 and 6800 XT / 3080, AMD's primary competition here is itself ... At $450, it would make a lot more sense.
 
Joined
Mar 10, 2010
Messages
11,880 (2.17/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB
Video Card(s) Asus tuf RX7900XT /Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores laptop Timespy 6506
Interesting product placement, though I don't know that it will find a home priced as it is .... If I was to look at the 3070 ... I don't see anything that would make it worthwhile to move up to the $580 price point ... and if I did, I'd be more inclined to go up to $650 or $700.

This kinda reminds me of the 960 / 380 era ... however, priced between the 3070 and 6800 XT / 3080, AMD's primary competition here is itself ... At $450, it would make a lot more sense.
That's your opinion, and I disagree with parts of it, but technically ANY card is more attractive at £130 cheaper, is it not?
 
Joined
Apr 30, 2011
Messages
2,754 (0.54/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux