
AMD RDNA3 Offers Over 50% Perf/Watt Uplift Akin to RDNA2 vs. RDNA; RDNA4 Announced

Joined
Oct 27, 2020
Messages
798 (0.53/day)
Mess, usually when I uninstall the Radeon Software, it deletes the Dolby Audio software panel from my tray as well. I needed to reinstall the entire Realtek High Definition Audio Driver package again.



No, it doesn't.
It happens when I try to connect the notebook to the UHD 4K TV using the HDMI cable. There is never any sound via that cable.



Yes, for a cut-down Navi 31 that would be terrible performance; it would make no sense at all to launch such a version.

Fixed it for you:

[attachment: the table with MS Paint annotations]
lol, my bad. I edited Navi 32 but left the 4090 Ti/4090 typo in.
Below the table without the MS Paint job (thanks btw!)
[image: projection table]
 
Joined
Jul 21, 2016
Messages
144 (0.05/day)
Processor AMD Ryzen 5 5600
Motherboard MSI B450 Tomahawk
Cooling Alpenföhn Brocken 3 140mm
Memory Patriot Viper 4 - DDR4 3400 MHz 2x8 GB
Video Card(s) Radeon RX460 2 GB
Storage Samsung 970 EVO PLUS 500, Samsung 860 500 GB, 2x Western Digital RED 4 TB
Display(s) Dell UltraSharp U2312HM
Case be quiet! Pure Base 500 + Noiseblocker NB-eLoop B12 + 2x ARCTIC P14
Audio Device(s) Creative Sound Blaster ZxR,
Power Supply Seasonic Focus GX-650
Mouse Logitech G305
Keyboard Lenovo USB
I feel AMD managed to meet the perf/watt target with RDNA and RDNA2, but this does not apply across the board; the RX 6700 XT is an example where I think it failed. The problem with the way AMD segments its cards is how heavy-handed it was in slashing specs across the RX 6000 series. The second most jarring cut was on the RX 6700 XT, with the RX 6500/6400 taking the top spot. Taking Navi 22 (RX 6700 XT) again as an example, the number of CUs got a nice 50% cut from Navi 21. The nearest RX 6800 is quite a lot faster than the RX 6700 XT, so the latter has to make up the difference in specs by pushing clock speed hard. As a result, I feel it is inefficient for its target performance relative to the RX 6800.
I agree. The RX 6800 and the RX 6600 are the most efficient cards today according to TPU's tests. Even though the RX 6800 has on average 53% more performance at 1080p and 107% more at 4K, it is way more expensive. Currently they're asking 320€ for an RX 6600 card, and there isn't a card that performs better in that price category, not to mention its low power consumption / heat output / noise. Sadly, though, it is barely any faster than the RX 5600 XT was, and it's 40€ more expensive than that card was.

Now if we could truly get a 50% performance increase for the RX 7600 card, we would get RX 6800 performance at 1080p. But sadly the price remains the question.
 
Joined
Jan 14, 2019
Messages
12,577 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Mess, usually when I uninstall the Radeon Software, it deletes the Dolby Audio software panel from my tray as well. I needed to reinstall the entire Realtek High Definition Audio Driver package again.
That's weird. I've never had any issue like that (or any issue at all).
 
Joined
Oct 27, 2020
Messages
798 (0.53/day)
I agree. The RX 6800 and the RX 6600 are the most efficient cards today according to TPU's tests. Even though the RX 6800 has on average 53% more performance at 1080p and 107% more at 4K, it is way more expensive. Currently they're asking 320€ for an RX 6600 card, and there isn't a card that performs better in that price category, not to mention its low power consumption / heat output / noise. Sadly, though, it is barely any faster than the RX 5600 XT was, and it's 40€ more expensive than that card was.

Now if we could truly get a 50% performance increase for the RX 7600 card, we would get RX 6800 performance at 1080p. But sadly the price remains the question.
The rumors are for 6900XT QHD performance, but with only 8GB it seems wasted imo.
We already know that Nvidia has better resiliency under memory-related performance pressure, and we know how many negative comments the (slower than the 6900XT) RTX 3080 got, and that was Q4 2020 with 10GB; 8GB in 2022 doesn't seem wise for that performance level imo.
Also, regarding price, it will be between $399-479 imo, so 450€-540€, not on the same level as the RX 6600 at 310€.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No, it doesn't.
It happens when I try to connect the notebook to the UHD 4K TV using the HDMI cable. There is never any sound via that cable.
Then there's something wrong with either your notebook or your TV, as that should "just work". Whether it's a driver or a hardware issue, that is not how this should behave. Do you get a new audio device popping up when you connect the display?
Now if we could truly get a 50% performance increase for the RX 7600 card, we would get RX 6800 performance at 1080p. But sadly the price remains the question.
+>50% perf/W, not +50% performance. Those two are quite different things. It could result in lower TDPs per tier, higher performance per tier, or some mix of the two.
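For anyone who wants to play with the distinction, here's a minimal back-of-the-envelope sketch (the 132 W baseline is my assumption for an RX 6600-class card, not an AMD figure):

```python
# What a 1.5x perf/W uplift can buy, for one arbitrary tier.
base_perf = 100.0   # performance index (arbitrary units)
base_power = 132.0  # watts, assumed RX 6600-class baseline

new_ppw = (base_perf / base_power) * 1.5  # the claimed >50% perf/W uplift

# Scenario A: same power budget -> +50% performance
print(round(new_ppw * base_power, 1))        # 150.0

# Scenario B: same performance -> two-thirds the power
print(round(base_perf / new_ppw, 1))         # 88.0 W

# Scenario C: a mix, e.g. +35% performance at reduced power
print(round(base_perf * 1.35 / new_ppw, 1))  # 118.8 W
```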
 
Joined
Apr 30, 2011
Messages
2,716 (0.54/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
The rumors are for 6900XT QHD performance, but with only 8GB it seems wasted imo.
We already know that Nvidia has better resiliency under memory-related performance pressure, and we know how many negative comments the (slower than the 6900XT) RTX 3080 got, and that was Q4 2020 with 10GB; 8GB in 2022 doesn't seem wise for that performance level imo.
Also, regarding price, it will be between $399-479 imo, so 450€-540€, not on the same level as the RX 6600 at 310€.
Methinks the 7700XT will equal or even surpass the 6900XT for half the price. Not the 7600XT, which will arrive much later, if ever.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
Methinks the 7700XT will equal or even surpass the 6900XT for half the price. Not the 7600XT, which will arrive much later, if ever.

Well, this release schedule can be one of the weird ones.

You can expect something like Navi 33, i.e. the Radeon RX 7700 XT, launching in October.
And much later, the Radeon RX 8800 XT and Radeon RX 8900 XT as Navi 31.

Navi 32 could be named Radeon RX 8700 XT.

It all depends... we don't have concrete confirmation that Navi 31 will launch soon/this year. Look at my posts on the first page.
 
Joined
Oct 27, 2020
Messages
798 (0.53/day)
Methinks the 7700XT will equal or even surpass the 6900XT for half the price. Not the 7600XT, which will arrive much later, if ever.
About naming: at first I thought 7900 series for Navi 31 based cards, 7800 series for Navi 32 based cards and 7700 series for Navi 33 based cards, but I really don't know.
Maybe they will have a cut-down Navi 32 with less Infinity Cache and memory vs the full Navi 32 and name it 7700 something, leaving the 7600 series for Navi 33 based cards; maybe not.
The leakers who put Navi 33 at 6900XT QHD performance level also said they haven't heard anything about a Navi 34 (more than a year after the first RDNA3 rumours, we still haven't heard anything about it).
I was replying based on Navi 33 as the Navi 23 replacement, not model numbers, which remain unknown for the time being.
 
Joined
Jul 21, 2016
Messages
144 (0.05/day)
Processor AMD Ryzen 5 5600
Motherboard MSI B450 Tomahawk
Cooling Alpenföhn Brocken 3 140mm
Memory Patriot Viper 4 - DDR4 3400 MHz 2x8 GB
Video Card(s) Radeon RX460 2 GB
Storage Samsung 970 EVO PLUS 500, Samsung 860 500 GB, 2x Western Digital RED 4 TB
Display(s) Dell UltraSharp U2312HM
Case be quiet! Pure Base 500 + Noiseblocker NB-eLoop B12 + 2x ARCTIC P14
Audio Device(s) Creative Sound Blaster ZxR,
Power Supply Seasonic Focus GX-650
Mouse Logitech G305
Keyboard Lenovo USB
+>50% perf/W, not +50% performance. Those two are quite different things. It could result in lower TDPs per tier, higher performance per tier, or some mix of the two.
At 1080p resolution: RX 6600 + 35% performance = RX 6700 XT, and RX 6600 - 15% power consumption = 100-105 W in total while gaming. That result would also be acceptable for me. :D

Though RX 6600 level performance for 80-85 W also would not be a bad thing if the +50% perf/W were applied that way.
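A quick sanity check of those two combos against a flat 1.5x perf/W budget (taking the RX 6600's 132 W TBP as the baseline - my assumption, not an official figure):

```python
# Does "+35% performance at -15% power" fit inside a 1.5x perf/W uplift?
print(round(1.35 / 0.85, 2))  # 1.59 -> needs ~59% better perf/W

# RX 6600-level performance at 80-85 W, from an assumed 132 W baseline:
for watts in (80, 85):
    print(watts, "W ->", round(132 / watts, 2))  # 1.65x and 1.55x
```

So both scenarios actually need slightly more than a flat +50%, which is why the "over 50%" wording matters.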

About naming: at first I thought 7900 series for Navi 31 based cards, 7800 series for Navi 32 based cards and 7700 series for Navi 33 based cards, but I really don't know.
While I'm not against a performance increase, they went from the RX 5500 XT (128-bit, 8GB) to the RX 6600 XT (128-bit, 8GB) with a minimum 100$/€ price increase, so I can easily picture them releasing an RX 7700 XT as a 128-bit, 8GB card.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
While I'm not against a performance increase, they went from the RX 5500 XT (128-bit, 8GB) to the RX 6600 XT (128-bit, 8GB) with a minimum 100$/€ price increase, so I can easily picture them releasing an RX 7700 XT as a 128-bit, 8GB card.

No way, 8 GB of VRAM is too low; it won't allow the shaders to show their full potential.
Have you seen how the 10 GB RTX 3080 struggles, and how in-game menus explicitly ask for more VRAM?
It may be 16 GB, while Navi 31 cards could come with as much as 24 or 32 GB.
 
Joined
Oct 27, 2020
Messages
798 (0.53/day)
While I'm not against a performance increase, they went from the RX 5500 XT (128-bit, 8GB) to the RX 6600 XT (128-bit, 8GB) with a minimum 100$/€ price increase, so I can easily picture them releasing an RX 7700 XT as a 128-bit, 8GB card.
That's what the rumors are about: a 128-bit, 8GB Navi 33. (That doesn't exclude a 16GB version using a clamshell configuration.)
The thing is, 8GB of memory is too low for the performance target.
When we had 8GB in 2016 at $239 (RX 480), something doesn't feel right about having the same amount in 2022 (at nearly double the price, no less...)
 
Joined
Jul 21, 2016
Messages
144 (0.05/day)
Processor AMD Ryzen 5 5600
Motherboard MSI B450 Tomahawk
Cooling Alpenföhn Brocken 3 140mm
Memory Patriot Viper 4 - DDR4 3400 MHz 2x8 GB
Video Card(s) Radeon RX460 2 GB
Storage Samsung 970 EVO PLUS 500, Samsung 860 500 GB, 2x Western Digital RED 4 TB
Display(s) Dell UltraSharp U2312HM
Case be quiet! Pure Base 500 + Noiseblocker NB-eLoop B12 + 2x ARCTIC P14
Audio Device(s) Creative Sound Blaster ZxR,
Power Supply Seasonic Focus GX-650
Mouse Logitech G305
Keyboard Lenovo USB
When we had 8GB in 2016 at $239 (RX 480), something doesn't feel right about having the same amount in 2022 (at nearly double the price, no less...)
Sadly, I understand where you're coming from, but if you look at current pricing, they are still selling 8GB cards for 400+€: the cheapest RX 6600 XT starts at 410€ and easily goes to 500€, and RX 6650 XT cards start at 430€ and go to 500€. Meanwhile the RX 570 8GB was a sub-200€ card, and AMD announced the RX 6650 XT at $400. So they're quite brazen with their pricing.

Truth be told, I'm not expecting great pricing from them anymore; I've become disillusioned with them. Still, the RX 6600 for 310-320€ isn't that bad, though it could be better. We'll see how this new gen and its pricing develop.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
Sadly, I understand where you're coming from, but if you look at current pricing, they are still selling 8GB cards for 400+€: the cheapest RX 6600 XT starts at 410€ and easily goes to 500€, and RX 6650 XT cards start at 430€ and go to 500€. Meanwhile the RX 570 8GB was a sub-200€ card, and AMD announced the RX 6650 XT at $400. So they're quite brazen with their pricing.

Truth be told, I'm not expecting great pricing from them anymore; I've become disillusioned with them. Still, the RX 6600 for 310-320€ isn't that bad, though it could be better. We'll see how this new gen and its pricing develop.

They are selling the overall graphics performance, not the memory amount alone. The memory amount by itself has nothing to do with the pricing.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Methinks the 7700XT will equal or even surpass the 6900XT for half the price. Not the 7600XT, which will arrive much later, if ever.
That sounds ... optimistic. Considering that chip prices have only been increasing lately, that per-transistor costs on new nodes aren't dropping like they used to, and that the 6700 XT is already a very highly clocked card, the ~60% performance increase you're positing would require a notable increase in both IPC and CU count - at half the price, that sounds like a stretch. A real stretch. The new node and arch update might get them ... let's say 80CU@2.25GHz performance with, say (and I'm just making up numbers here, clearly), 60 CUs at 2.5GHz (nominal game clock) plus a notable "IPC" increase, but those 20 extra CUs won't come for free, nor will the larger IC.
About naming: at first I thought 7900 series for Navi 31 based cards, 7800 series for Navi 32 based cards and 7700 series for Navi 33 based cards, but I really don't know.
That's what the rumors have been saying all along, but to me that just doesn't make sense. The GPU industry has never worked that way (1 die per product tier), and this makes even less sense with the broadening range of GPUs with acceptable performance across the ever-wider range of resolutions and detail levels. It wouldn't leave them any room at all for binning and cut down dice - unless they're bringing out additional suffixes (anyone want a 7900 LE?).

I would expect something similar to the current range, with the larger chip at 79 and 78 tiers, a smaller for 77 and possibly 76 XT, a third tier for 76, and so on. Unless they are actually using multiple compute dice, of course, in which case they could do this but with 1x/2x configurations (79=2x large die, 78=2x medium die, 77=1x large die, 76=1x medium die, 75=1x small die - or something like that). Otherwise this would necessitate ... what, 5-6 GPU dice to be taped out and put into mass production to fill out the range? Has AMD ever had that many designs at one time, on a reasonably modern node? That sounds extremely expensive.
 
Joined
Oct 27, 2020
Messages
798 (0.53/day)
That's what the rumors have been saying all along, but to me that just doesn't make sense. The GPU industry has never worked that way (1 die per product tier), and this makes even less sense with the broadening range of GPUs with acceptable performance across the ever-wider range of resolutions and detail levels. It wouldn't leave them any room at all for binning and cut down dice - unless they're bringing out additional suffixes (anyone want a 7900 LE?).

I would expect something similar to the current range, with the larger chip at 79 and 78 tiers, a smaller for 77 and possibly 76 XT, a third tier for 76, and so on. Unless they are actually using multiple compute dice, of course, in which case they could do this but with 1x/2x configurations (79=2x large die, 78=2x medium die, 77=1x large die, 76=1x medium die, 75=1x small die - or something like that). Otherwise this would necessitate ... what, 5-6 GPU dice to be taped out and put into mass production to fill out the range? Has AMD ever had that many designs at one time, on a reasonably modern node? That sounds extremely expensive.
If the rumours regarding Infinity Cache sizes are true (128/256/384MB), then we are talking about big dies; the Navi 32 compute die would be much bigger than Navi 22's 335mm² (at least +20%).
I mention this regarding yields: depending also on the maturity of N5, it is possible in a not-so-ideal scenario to have 2-3 models based on Navi 31, 2-3 for Navi 32 and 2 for Navi 33.
The thing is, Navi 31 vs Navi 32 is a 1.5X jump, unlike the 2X we had in the Navi 21 vs Navi 22 case, so I don't know how easy it is to fit 3 models into a 1.5X gap (1.5X in theory; it will be less than the on-paper difference). Also, the Navi 32 compute die won't be anywhere near as big as Navi 21's 520mm², and I think N5 is nearly as mature as N7 was in 2020, so we may have only 2 Navi 32 based models.
To tell you the truth, I have no idea; only AMD knows the yields and how aggressively they want to position RDNA3 vs Nvidia's offerings (another unknown factor).

Sadly, I understand where you're coming from, but if you look at current pricing, they are still selling 8GB cards for 400+€: the cheapest RX 6600 XT starts at 410€ and easily goes to 500€, and RX 6650 XT cards start at 430€ and go to 500€. Meanwhile the RX 570 8GB was a sub-200€ card, and AMD announced the RX 6650 XT at $400. So they're quite brazen with their pricing.
Truth be told, I'm not expecting great pricing from them anymore; I've become disillusioned with them. Still, the RX 6600 for 310-320€ isn't that bad, though it could be better. We'll see how this new gen and its pricing develop.


I agree that it's possible based on AMD's recent track record; what I'm debating is whether they should do it or not.
Since we haven't heard anything about Navi 34 so far, I would assume AMD will compete for most of next year in the sub-$400 segment with Navi 2X parts (Navi 21 will be retired, of course). I'm really curious to see how much RX 6000 series prices will drop. (Maybe AMD will attempt to influence the market towards inflated prices again.)
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz 0.750v (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
No, it doesn't.
It happens when I try to connect the notebook to the UHD 4K TV using the HDMI cable. There is never any sound via that cable.
Feel free to start a help thread on it; it absolutely should 'just work'.

The only reason it wouldn't work is if your audio path is messed up - for example, if you use an HDMI receiver and disable your TV's speakers, but connect the laptop to the TV and not the receiver.
Even my monitors have the option to set the audio source to be different to the video source, as well as settings to use their internal speakers, external speakers, or both
 
Joined
Jan 24, 2011
Messages
180 (0.04/day)
lol, my bad. I edited Navi 32 but left the 4090 Ti/4090 typo in.
Below the table without the MS Paint job (thanks btw!)
[image: projection table]
Do you seriously believe Lovelace will magically gain ~90% performance at the same TBP? That would mean the chip alone has gained >100% performance/W compared to its predecessor, Ampere.
The likelihood of this is pretty low.
 
Joined
Oct 27, 2020
Messages
798 (0.53/day)
Do you seriously believe Lovelace will magically gain ~90% performance at the same TBP? That would mean the chip alone has gained >100% performance/W compared to its predecessor, Ampere.
The likelihood of this is pretty low.
You are comparing the wrong pair; it's more indicative to compare the slightly cut-down GA102 (3090) vs the cut-down AD102 (4090), and even that won't be exactly fair, since the 4090 is a much more heavily cut-down part: the 3090 is 1 TPC (or 2 SMs) short of the full silicon, so the full GA102 has only +2.4% more shaders than the 3090, while the full AD102 will have nearly 14.3% more than the 4090, and AD102 is a smaller chip than GA102.
But it's more about frequency and how hard the silicon is pushed. (I think people will be surprised by the frequencies Ada parts can achieve.)
And the 3090 Ti is much more stressed out, achieving only +8-9% more performance than the 3090 while the TBP difference is just shy of +30%.
If you compare the 4090 vs the 3090 you are getting 1.6X, so I propose 1.6X-1.45X (AD103-336 is only +48% vs 3090-328 at 350W, in my assumption).
So Nvidia going from 8nm Samsung (slightly better than 10nm TSMC, really) to N5 or N4 isn't going to bring 1.6X-1.45X, but AMD going from N7 (their second try at 7nm, so a completely mature 7nm process for RDNA2) to N5 compute + N6 for Infinity Cache etc. is going to bring 1.5X just fine?
Which of the two is more likely?
Edit: corrected numbers
Edit 2: Since the leap is going to be so big, maybe there is no reason for Nvidia to push the frequency very hard, in order to leave room for a refresh (Super). The same is true for AMD: Navi 31 performance will greatly improve imo if a refresh uses 24Gbps Samsung GDDR6 memory, which won't be available at launch. So in general the frequency may not be as high as I had in mind; below is a slightly revised table (-3-4%):

[image: revised projection table]
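As a rough sanity check of the shader-count headroom quoted above - assuming 82/84 SMs for the 3090/full GA102 and 126/144 SMs for the 4090/full AD102 (the Ada counts were still rumors at this point):

```python
# Full-die shader headroom implied by the quoted SM counts (Ada counts rumored).
ga102_full, rtx3090 = 84, 82
ad102_full, rtx4090 = 144, 126

print(f"full GA102 vs 3090: +{(ga102_full / rtx3090 - 1) * 100:.1f}%")  # +2.4%
print(f"full AD102 vs 4090: +{(ad102_full / rtx4090 - 1) * 100:.1f}%")  # +14.3%
```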
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz 0.750v (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Do you seriously believe Lovelace will magically gain ~90% performance at the same TBP? That would mean the chip alone has gained >100% performance/W compared to its predecessor, Ampere.
The likelihood of this is pretty low.
The only time claims like that are true is when new tech is made specifically to make old hardware look slow - like raytracing.

Oh yeah, 2x faster - now we get SIX FPS at 8K ultra with DLSS ultra-extreme-low quality (exclusive to RTX 4000 GPUs)
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
AMD said a 50% increase per watt. So to get twice the performance, they'd have to increase the number of stream processors as well as clock speed.
I seriously doubt the top RDNA3 card will be twice as fast as the 6900XT, and that is what I think you are implying.
 
Joined
Aug 9, 2019
Messages
1,717 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
Looking at Nvidia's near-zero performance/Watt uplift from Turing to Ampere, and then the new 1.21 Jiggawatt Lovelace architecture, I'm happy with any sort of efficiency increase at this point.
Some Ampere cards are truly impressive on efficiency, most notably the 3060 Ti and 3070, which are far superior to the 2060S and 2070 (25-30% more perf per watt). The rest, however, were only marginally more efficient (5-10%) than the previous gen. GDDR6X is the reason for this on the 3070 Ti and up, while the 3050 and 3060 seem to scale poorly efficiency-wise when bandwidth/cores are reduced.



The node itself cannot be blamed, since it is almost twice as dense as the previous node Turing used (TSMC 12nm vs Samsung 8nm):

Process node - density (MTr/mm²)

TSMC 5nm EUV 171.3
TSMC 7nm+ EUV 115.8
Intel 10nm 100.8
TSMC 7nm Mobile 96.5
Samsung 7nm EUV 95.3
TSMC 7nm HPC 66.7
Samsung 8nm 61.2
TSMC 10nm 60.3
Samsung 10nm 51.8
Intel 14nm 43.5
GF 12nm 36.7
TSMC 12nm 33.8
Samsung/GF 14nm 32.5
TSMC 16nm 28.2

Going to Lovelace, the node shrink can be even greater than the Turing-to-Ampere one, depending on what Nvidia goes for (TSMC 6 or 5nm?). Meanwhile, AMD has had a huge node advantage over Turing (TSMC 7nm HPC) with RDNA1 and a slight advantage over Ampere (still TSMC 7nm HPC).
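To put a number on the "almost twice as dense" point, here's the arithmetic using the table's own MTr/mm² values (the 5nm entry is just one possible Lovelace target, not a confirmed node):

```python
# Density ratios from the node table above (MTr/mm^2).
density = {
    "TSMC 12nm": 33.8,      # Turing
    "Samsung 8nm": 61.2,    # Ampere
    "TSMC 5nm EUV": 171.3,  # a possible Lovelace target
}

# Turing -> Ampere: ~1.8x, i.e. "almost twice as dense"
print(round(density["Samsung 8nm"] / density["TSMC 12nm"], 2))    # 1.81

# Ampere -> a hypothetical TSMC 5nm Lovelace: a much larger ~2.8x jump
print(round(density["TSMC 5nm EUV"] / density["Samsung 8nm"], 2))  # 2.8
```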
 
Joined
Oct 27, 2020
Messages
798 (0.53/day)
Some Ampere cards are truly impressive on efficiency, most notably the 3060 Ti and 3070, which are far superior to the 2060S and 2070 (25-30% more perf per watt). The rest, however, were only marginally more efficient (5-10%) than the previous gen. GDDR6X is the reason for this on the 3070 Ti and up, while the 3050 and 3060 seem to scale poorly efficiency-wise when bandwidth/cores are reduced.



The node itself cannot be blamed, since it is almost twice as dense as the previous node Turing used (TSMC 12nm vs Samsung 8nm):

Process node - density (MTr/mm²)

TSMC 5nm EUV 171.3
TSMC 7nm+ EUV 115.8
Intel 10nm 100.8
TSMC 7nm Mobile 96.5
Samsung 7nm EUV 95.3
TSMC 7nm HPC 66.7
Samsung 8nm 61.2
TSMC 10nm 60.3
Samsung 10nm 51.8
Intel 14nm 43.5
GF 12nm 36.7
TSMC 12nm 33.8
Samsung/GF 14nm 32.5
TSMC 16nm 28.2

Going to Lovelace, the node shrink can be even greater than the Turing-to-Ampere one, depending on what Nvidia goes for (TSMC 6 or 5nm?). Meanwhile, AMD has had a huge node advantage over Turing (TSMC 7nm HPC) with RDNA1 and a slight advantage over Ampere (still TSMC 7nm HPC).
I wouldn't rely too much on the TPU chart that you quoted; it either mixes resolutions in a single chart, which is just plain wrong, or more likely it is just a 1080p comparison, where Ampere scales a little worse vs Turing but AMD, on the contrary, doesn't suffer the huge loss we see at 4K due to Infinity Cache/bandwidth limitations. So this is probably based on 1080p Cyberpunk measurements; with 4K measurements the results would be different - just check the fps the RTX 3060 and 6600XT are getting at 4K:
[image: 4K Cyberpunk fps comparison]

I'm not saying the comparison must be 4K Cyberpunk; I just want to point out that neither the game nor the resolution is a good choice for indicating the efficiency curve between the cards.

Also, while 8nm Samsung is close to N7 HPC in density, I think the 61.2 MTr/mm² figure for Samsung is a revised estimate based on what Nvidia achieved with Ampere (the earlier estimate was around 51.8 (10nm Samsung) / 0.9 = 57.6 MTr/mm²), and I don't think the 66.7 MTr/mm² for TSMC 7nm HPC exactly represents the second 7nm iteration AMD used for RDNA - it's a tweaked 7nm HPC variant - but I'm nitpicking, really, because the differences are very small anyway.
But despite 8nm Samsung and N7 HPC being relatively close in density, TSMC enjoys at least +30% more performance regarding frequency, which is also a huge deal, so the process advantage TSMC enjoys isn't slight at all...
 
Joined
Aug 9, 2019
Messages
1,717 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
I wouldn't rely too much on the TPU chart that you quoted; it either mixes resolutions in a single chart, which is just plain wrong, or more likely it is just a 1080p comparison, where Ampere scales a little worse vs Turing but AMD, on the contrary, doesn't suffer the huge loss we see at 4K due to Infinity Cache/bandwidth limitations. So this is probably based on 1080p Cyberpunk measurements; with 4K measurements the results would be different - just check the fps the RTX 3060 and 6600XT are getting at 4K:
[image: 4K Cyberpunk fps comparison]
I'm not saying the comparison must be 4K Cyberpunk; I just want to point out that neither the game nor the resolution is a good choice for indicating the efficiency curve between the cards.

Also, while 8nm Samsung is close to N7 HPC in density, I think the 61.2 MTr/mm² figure for Samsung is a revised estimate based on what Nvidia achieved with Ampere (the earlier estimate was around 51.8 (10nm Samsung) / 0.9 = 57.6 MTr/mm²), and I don't think the 66.7 MTr/mm² for TSMC 7nm HPC exactly represents the second 7nm iteration AMD used for RDNA - it's a tweaked 7nm HPC variant - but I'm nitpicking, really, because the differences are very small anyway.
But despite 8nm Samsung and N7 HPC being relatively close in density, TSMC enjoys at least +30% more performance regarding frequency, which is also a huge deal, so the process advantage TSMC enjoys isn't slight at all...
Yes, of course efficiency will differ depending on resolution etc. (RDNA2 is very efficient at 1080p, not at 4K), but the 3060 Ti and 3070 especially are very efficient and superior to anything Turing has to offer. The 3060, for instance, is not impressive, since it uses 7% more power than the 2060 but only performs 20% better on average.

A more exact power reading comes from comparing two equally fast cards like the 2080 Ti and 3070: the founders editions use 250W vs 220W, making the 2080 Ti use 14% more power for the same performance.
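Spelled out (250 W and 220 W are the founders-edition figures quoted above):

```python
# Iso-performance power comparison: 2080 Ti FE vs 3070 FE.
p_2080ti, p_3070 = 250.0, 220.0

print(f"2080 Ti draws +{(p_2080ti / p_3070 - 1) * 100:.0f}% power")  # +14%
print(f"3070 uses {(1 - p_3070 / p_2080ti) * 100:.0f}% less power")  # 12%
# At equal performance, the perf/W ratio is simply 250/220 ~= 1.14x.
```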

We don't know for sure what density the AMD variant of TSMC 7nm HPC has, but it is probably close to the regular HPC node.
 
Joined
Jan 14, 2019
Messages
12,577 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
A more exact power reading comes from comparing two equally fast cards like the 2080 Ti and 3070: the founders editions use 250W vs 220W, making the 2080 Ti use 14% more power for the same performance.
That's exactly what I'm doing when comparing the 1080, 2070 and 3060. They all have the same TDP (within 10 W of one another) and offer the same performance - only the RTX cards can also do RT and DLSS.

Your 2080 Ti vs 3070 isn't a bad comparison. That's probably the only performance level where Ampere has any kind of advantage.

The 3050 performs like a 1660 Ti with a higher TDP. Above the 3070, power goes through the roof, and there are no equivalents in previous generations. Efficiency in those classes is irrelevant, unfortunately.
 
Joined
Aug 9, 2019
Messages
1,717 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
That's exactly what I'm doing when comparing the 1080, 2070 and 3060. They all have the same TDP (within 10 W of one another) and offer the same performance - only the RTX cards can also do RT and DLSS.

Your 2080 Ti vs 3070 isn't a bad comparison. That's probably the only performance level where Ampere has any kind of advantage.

The 3050 performs like a 1660 Ti with a higher TDP. Above the 3070, power goes through the roof, and there are no equivalents in previous generations. Efficiency in those classes is irrelevant, unfortunately.
Yes, cards above the 3070 have the issue of very power-hungry GDDR6X; it's not the architecture itself, but the result is minor efficiency gains vs Turing. 2060 Super vs 3060 Ti is also a good comparison: the 2060S uses 185W vs 200W on most 3060 Tis (except 2x8-pin variants like the MSI Trio), while performance is 32-45% higher depending on resolution (TPU chart, 2060S FE vs 3060 Ti FE). Ampere scales very poorly power-wise on small dies like the 3050 and 3060. Turing scaled better the larger the die grew (the 2080 Ti was the most efficient); with Ampere we will never know, since GDDR6X ruins good power scaling.
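Running those numbers through the same arithmetic (185 W, 200 W and the +32-45% spread are the figures quoted above):

```python
# 2060 Super FE (185 W) vs 3060 Ti FE (200 W), +32% to +45% performance.
power_ratio = 200 / 185
for perf_gain in (1.32, 1.45):
    print(f"+{(perf_gain / power_ratio - 1) * 100:.0f}% perf/W")  # +22%, +34%
```

which lands neatly around the 25-30% perf-per-watt figure mentioned earlier.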
 