
AMD Radeon RX 7900 XTX to Lead the RDNA3 Pack?

Joined
Apr 2, 2008
Messages
391 (0.07/day)
System Name -
Processor Ryzen 9 5900X
Motherboard MSI MEG X570
Cooling Arctic Liquid Freezer II 280 (4x140 push-pull)
Memory 32GB Patriot Steel DDR4 3733 (8GBx4)
Video Card(s) MSI RTX 4080 X-trio.
Storage Sabrent Rocket-Plus-G 2TB, Crucial P1 1TB, WD 1TB sata.
Display(s) LG Ultragear 34G750 nano-IPS 34" ultrawide
Case Define R6
Audio Device(s) Xfi PCIe
Power Supply Fractal Design ION Gold 750W
Mouse Razer DeathAdder V2 Mini.
Keyboard Logitech K120
VR HMD Er no, pointless.
Software Windows 10 22H2
Benchmark Scores Timespy - 24522 | Crystalmark - 7100/6900 Seq. & 84/266 QD1 |
So what? :twitch: At least basic 8pin connectors do not catch fire like Nvidia's 12VHPWR connector.

It's NOT nVidia's connector; Intel along with PCI-SIG created it. Intel are, shock horror, to blame here, but just like 'USB Gen 3.2 2x2', the 12VHPWR connector is going to be an ill-fated standard.

Why was this undersized 12-pin connector even created with smaller pins, when existing 8-pin PCIe power connectors are capable of passing 300W/25A per connector? Corsair clearly demonstrated this with their 2x 8-pin to 12-pin cables - https://www.corsair.com/uk/en/Categ...0-12VHPWR-Type-4-PSU-Power-Cable/p/CP-8920284
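For a rough sanity check on those numbers, here is a back-of-the-envelope sketch (assuming the commonly cited figures: three +12V pins on an 8-pin PCIe plug passing ~300W, six smaller +12V pins on a 12VHPWR plug rated for 600W):

```python
# Rough per-pin current estimate for the two connector types.
# Assumes the commonly cited figures above; nothing here is measured.

def amps_per_pin(watts: float, volts: float, twelve_volt_pins: int) -> float:
    """Total current divided evenly across the +12 V pins."""
    total_amps = watts / volts
    return total_amps / twelve_volt_pins

print(f"8-pin PCIe @ 300 W: {amps_per_pin(300, 12, 3):.1f} A per pin")
print(f"12VHPWR    @ 600 W: {amps_per_pin(600, 12, 6):.1f} A per pin")
# Both work out to roughly 8.3 A per pin, but the 12VHPWR pins are
# physically smaller, which is the crux of the complaint above.
```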

Guy had literally just bought it and had it running his system for a few hours. Not sure how you'd expect one to run it out of spec.
As Jay pointed out, the end user bent the cable wrong, but clearly Jay's comment was in jest. This is just as bad as Steve Jobsworth telling a customer they were holding the iPhone 4 wrong...

I have always bent my cables for airflow and tidiness, so depending on which card I upgrade to, I think I will be buying a right-angle adapter from Der8auer/Thermal Grizzly.
 
Joined
Jul 9, 2015
Messages
3,413 (1.03/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Really curious about the performance, the power draw that comes with that performance, but mostly about the price, as that is what will tell whether AMD is following NV's greedy behaviour or is rather trying to reach for dominance, or at least a higher market share.
The main question is whether AMD has a superior product.
Superior in the sense that they can match or beat the competition while spending less.

If yes, I'd expect a CPU-like offensive.
 
Joined
Oct 12, 2005
Messages
695 (0.10/day)
With Infinity Cache, AMD was able to compete on par while on a much slower bus. I don't see why it would be different this time.
What is going to be different this time is that Nvidia has a much larger L2 cache than it had before. It should actually be pretty close to the SKU without 3D V-Cache.

There are still plenty of unknowns regarding how the decoupled MCDs with IF cache on them will perform versus a monolithic die. I doubt that the SKU will only compete with the 4080, but it is very hard to estimate how it will perform versus the 4090. Only benchmarks will be able to tell.

There is also the question of RT performance. I mean, it won't be too much of an issue on a low-end SKU, but on a flagship SKU it should be there.
 
Joined
Feb 27, 2013
Messages
445 (0.11/day)
Location
Lithuania
If AMD does this, I would be flabbergasted. Last gen, they priced their cards based on performance relative to the Nvidia cards. I hope like hell they hit that $1,000 MSRP, but I just don't see it.
Yeah, I expect a price increase too. Maybe $1,200 for the 7900 XT.
 
Joined
Jun 2, 2017
Messages
8,475 (3.22/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
What is going to be different this time is that Nvidia has a much larger L2 cache than it had before. It should actually be pretty close to the SKU without 3D V-Cache.

There are still plenty of unknowns regarding how the decoupled MCDs with IF cache on them will perform versus a monolithic die. I doubt that the SKU will only compete with the 4080, but it is very hard to estimate how it will perform versus the 4090. Only benchmarks will be able to tell.

There is also the question of RT performance. I mean, it won't be too much of an issue on a low-end SKU, but on a flagship SKU it should be there.
The thing for me, when I think about how they do this, is taking it from Polaris and baking Crossfire's evolution into the card instead of the motherboard. That could mean something like the card rendering 1:1 to the screen with the cache serving as the brains feeding information to the GPUs, or we could have AFR with, again, the cache serving as the brains feeding the GPUs. In terms of RT I am not worried, as between Sony and Microsoft they will probably do more to drive RT than AMD or Nvidia. Regardless, 50% more performance per watt (AMD's claim) is hard to rationalize when the 6500XT/6600/6700/6800XT/6900XT are so good already. If nothing else it should at least compete with the 4090 in regular 4K gaming.
 
Joined
Oct 12, 2005
Messages
695 (0.10/day)
The thing for me, when I think about how they do this, is taking it from Polaris and baking Crossfire's evolution into the card instead of the motherboard. That could mean something like the card rendering 1:1 to the screen with the cache serving as the brains feeding information to the GPUs, or we could have AFR with, again, the cache serving as the brains feeding the GPUs. In terms of RT I am not worried, as between Sony and Microsoft they will probably do more to drive RT than AMD or Nvidia. Regardless, 50% more performance per watt (AMD's claim) is hard to rationalize when the 6500XT/6600/6700/6800XT/6900XT are so good already. If nothing else it should at least compete with the 4090 in regular 4K gaming.
I do not think a multi-compute-die design implies using alternate frame rendering or scan-line interleaving. From what I understand, there will be one master tile with the main scheduler that dispatches compute tasks to the second die, and the OS will only see one GPU (otherwise you would need double the memory, e.g. 20 GB per GPU for a total of 40 GB).

The challenge is how you exchange data between the two GPUs (e.g. you run a shader that needs to read pixels that were previously rendered on the other GPU). This is the main challenge. The master also needs to be aware of the state of the second tile's compute units to dispatch jobs effectively. Also, say all your MCDs are connected to the main tile: that means the secondary tile has to perform all of its memory accesses over the link between the chips. If they split it 50/50, each tile will have to perform a portion of its memory accesses on the other die. You will also have to map your memory across two dies.

No matter what you do, the connection between the two compute tiles will need to be beefy.

This is easy when you have a single tile, but the challenge increases if you have to do it across chips. Note that AMD has had a hardware scheduler for quite some time, and they might have improved it to be tile-aware and schedule the load accordingly.

I suspect that it would be easier to load-balance two larger dies that can do a big portion of their work themselves than a lot of smaller dies that would need to exchange data frequently. But that may just be a theory with no value.
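To make that dispatch/memory point concrete, here is a toy sketch of a master tile load-balancing jobs across two compute tiles (purely illustrative; the 1.5x cross-die penalty and the job format are made-up assumptions, not anything from AMD's actual scheduler):

```python
# Toy model: a master tile greedily dispatches jobs to two compute tiles.
# A job that mostly touches memory "owned" by the other tile pays a
# penalty for crossing the inter-die link. All numbers are invented.
import random

CROSS_DIE_PENALTY = 1.5  # hypothetical cost multiplier for remote accesses

def dispatch(jobs, tiles=2):
    """Send each job to the tile that is free soonest; return frame time."""
    busy_until = [0.0] * tiles
    for base_cost, home_tile in jobs:
        best = min(range(tiles), key=lambda t: busy_until[t])
        cost = base_cost if best == home_tile else base_cost * CROSS_DIE_PENALTY
        busy_until[best] += cost
    return max(busy_until)  # frame time is set by the slowest tile

random.seed(0)
# Each job: (compute cost, which tile "owns" the pixels/memory it touches)
jobs = [(random.uniform(0.5, 2.0), random.randint(0, 1)) for _ in range(200)]
print(f"two tiles with cross-die penalty: {dispatch(jobs):.1f}")
print(f"ideal monolithic die (no penalty): {sum(c for c, _ in jobs) / 2:.1f}")
```

The gap between the two numbers is the cost of remote memory traffic, which is why the link between the tiles has to be beefy.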

Alternate frame rendering could maybe be possible on multi-die, but the main issue remains frame pacing. How do you know when it's the best time to start rendering the next frame? For that you need to know how fast you will finish the current frame, but you don't always know until it's done. And if you wait until it's done, it's already time to start a new frame on the main GPU. They tried many tricks to fix frame pacing on alternate frame rendering without much success, and it's probably not worth the effort.

For SLI (the original term, not Nvidia's rebranding of multi-GPU), the thing is that shaders can affect blocks of pixels; how do you handle that if you only render every other line? You can't, so it's a dead tech that died with the coming of shaders.
 
Joined
Jun 2, 2017
Messages
8,475 (3.22/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
I do not think a multi-compute-die design implies using alternate frame rendering or scan-line interleaving. From what I understand, there will be one master tile with the main scheduler that dispatches compute tasks to the second die, and the OS will only see one GPU (otherwise you would need double the memory, e.g. 20 GB per GPU for a total of 40 GB).

The challenge is how you exchange data between the two GPUs (e.g. you run a shader that needs to read pixels that were previously rendered on the other GPU). This is the main challenge. The master also needs to be aware of the state of the second tile's compute units to dispatch jobs effectively. Also, say all your MCDs are connected to the main tile: that means the secondary tile has to perform all of its memory accesses over the link between the chips. If they split it 50/50, each tile will have to perform a portion of its memory accesses on the other die. You will also have to map your memory across two dies.

No matter what you do, the connection between the two compute tiles will need to be beefy.

This is easy when you have a single tile, but the challenge increases if you have to do it across chips. Note that AMD has had a hardware scheduler for quite some time, and they might have improved it to be tile-aware and schedule the load accordingly.

I suspect that it would be easier to load-balance two larger dies that can do a big portion of their work themselves than a lot of smaller dies that would need to exchange data frequently. But that may just be a theory with no value.

Alternate frame rendering could maybe be possible on multi-die, but the main issue remains frame pacing. How do you know when it's the best time to start rendering the next frame? For that you need to know how fast you will finish the current frame, but you don't always know until it's done. And if you wait until it's done, it's already time to start a new frame on the main GPU. They tried many tricks to fix frame pacing on alternate frame rendering without much success, and it's probably not worth the effort.

For SLI (the original term, not Nvidia's rebranding of multi-GPU), the thing is that shaders can affect blocks of pixels; how do you handle that if you only render every other line? You can't, so it's a dead tech that died with the coming of shaders.
I do understand what you are saying and can see it too. The thing is, Polaris was different from any iteration of Crossfire (multi-GPU) and worked quite beautifully. We also did not have FreeSync in those days, so frame pacing might be a non-issue if done right. Like I said before, we can extrapolate all we want, but these GPUs will be different from anything seen before, so we may both be right and wrong, but it is fun discussing the possibilities.
 
Joined
Feb 8, 2022
Messages
268 (0.29/day)
Location
Georgia, United States
System Name LMDESKTOPv2
Processor Intel i9 10850K
Motherboard ASRock Z590 PG Velocita
Cooling Arctic Liquid Freezer II 240 w/ Maintenance Kit
Memory Corsair Vengeance DDR4 3600 CL18 2x16
Video Card(s) RTX 3080 Ti FE
Storage Intel Optane 900p 280GB, 1TB WD Blue SSD, 2TB Team Vulkan SSD, 2TB Seagate HDD, 4TB Team MP34 SSD
Display(s) HP Omen 27q, HP 25er
Case Fractal Design Meshify C Steel Panel
Audio Device(s) Sennheiser GSX 1000, Schiit Magni Heresy, Sennheiser HD560S
Power Supply Corsair HX850 V2
Mouse Logitech MX518 Legendary Edition
Keyboard Logitech G413 Carbon
VR HMD Oculus Quest 2 (w/ BOBO VR battery strap)
Software Win 10 Professional
Joined
Oct 12, 2005
Messages
695 (0.10/day)
I do understand what you are saying and can see it too. The thing is, Polaris was different from any iteration of Crossfire (multi-GPU) and worked quite beautifully. We also did not have FreeSync in those days, so frame pacing might be a non-issue if done right. Like I said before, we can extrapolate all we want, but these GPUs will be different from anything seen before, so we may both be right and wrong, but it is fun discussing the possibilities.
Another thing that complicates AFR is the use of temporal effects (like TAA, temporal upscaling tech, etc.).

You can't use the data of a previously rendered frame if that frame isn't rendered yet.

AFR is probably dead; the benefits of reusing temporal data really outweigh the benefits of AFR and multi-GPU. And AFR is just the brute-force way of doing things.
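A minimal sketch of that dependency, assuming a made-up taa() pass that consumes the previous frame's resolved output:

```python
# Why temporal effects break AFR: frame N cannot resolve its TAA/upscale
# pass until frame N-1's output exists. The names here are invented.

history = None  # resolved output of the previous frame

def render_frame(n):
    global history
    raw = f"raw_frame_{n}"
    # The temporal pass needs the *previous* frame's final image:
    resolved = f"taa({raw}, history={history})"
    history = resolved
    return resolved

for n in range(3):
    print(render_frame(n))
# In AFR, GPU0 would render even frames and GPU1 odd frames, but GPU1
# still has to wait for GPU0's resolved image before its temporal pass
# can run, so the two GPUs end up serialized instead of working in parallel.
```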
 
Joined
Jun 2, 2017
Messages
8,475 (3.22/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Another thing that complicates AFR is the use of temporal effects (like TAA, temporal upscaling tech, etc.).

You can't use the data of a previously rendered frame if that frame isn't rendered yet.

AFR is probably dead; the benefits of reusing temporal data really outweigh the benefits of AFR and multi-GPU. And AFR is just the brute-force way of doing things.
Isn't FSR at the end of the pipeline? I am not sure. The benefits of it could be baked into the card as well. We just don't know, but I know what you mean about the performance of upscaling tech.
 
Joined
Apr 8, 2012
Messages
270 (0.06/day)
Location
Canada
System Name custom
Processor intel i7 9700
Motherboard asrock taichi z370
Cooling EK-AIO 360 D-RGB
Memory 24G Kingston HyperX Fury 2666mhz
Video Card(s) GTX 2080 Ti FE
Storage SSD 960GB Crucial + 2 Crucial 500GB SSD + 2TB Crucial M.2
Display(s) BENQ XL2420T
Case Lian-li o11 dynamic der8auer Edition
Audio Device(s) Asus Xonar Essence STX
Power Supply corsair ax1200i
Mouse MX518 legendary edition
Keyboard gigabyte Aivia Osmium
VR HMD PSVR2
Software windows 11
MCM + driver + ATI + game = ouff

watch out for bugs
 
Joined
Oct 12, 2005
Messages
695 (0.10/day)
Isn't FSR at the end of the pipeline? I am not sure. The benefits of it could be baked into the card as well. We just don't know, but I know what you mean about the performance of upscaling tech.
It would work without any problem with FSR 1.0, which is a spatial upscaler and doesn't need previous-frame information.

For FSR 2.0 and other TAAU, yes, it's more at the end of the pipeline but generally before the post-processing effects. You could, in theory, start a second frame and it would have the frame-buffer image when it needs it. But that starts to become really complicated. And that is just for the upscaler.

Let's say you create and move particles using a shader; the next frame would need to get the previous data to continue. You would have to wait and sync every frame, making it very complicated. The same goes if you use shaders to do terrain or object deformation, and those things happen way earlier in the pipeline. In the end, you just add multiple syncs and waits to your image generation, and those will kill your efficiency.

That is not worth the effort. AFR is just a stupid way of using two GPUs that was fine when games were easier to run and simpler, but it is no longer a good solution now. What a multi-tile GPU needs is a way to send the work intelligently across multiple dies and a way to manage memory efficiently while doing it. Once you have that figured out, your solution becomes far more powerful and you don't need to deal with frame pacing issues, where things happen in the frame rendering process, etc.
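A rough Amdahl-style way to see how those syncs and waits erode AFR's benefit (the serial fractions below are arbitrary assumptions for illustration, not measurements):

```python
# Toy estimate: if some fraction of each frame must wait for data produced
# by the previous frame (temporal history, particle state, deformation
# results), that fraction cannot overlap across two GPUs in AFR.

def afr_speedup(serial_fraction: float, gpus: int = 2) -> float:
    parallel = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel / gpus)

for s in (0.0, 0.1, 0.2, 0.4):
    print(f"serial fraction {s:.0%}: AFR speedup ~{afr_speedup(s):.2f}x")
# Even a modest inter-frame dependency quickly erodes the ideal 2x,
# before frame-pacing problems are even considered.
```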
 
Joined
Jun 2, 2017
Messages
8,475 (3.22/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
It would work without any problem with FSR 1.0, which is a spatial upscaler and doesn't need previous-frame information.

For FSR 2.0 and other TAAU, yes, it's more at the end of the pipeline but generally before the post-processing effects. You could, in theory, start a second frame and it would have the frame-buffer image when it needs it. But that starts to become really complicated. And that is just for the upscaler.

Let's say you create and move particles using a shader; the next frame would need to get the previous data to continue. You would have to wait and sync every frame, making it very complicated. The same goes if you use shaders to do terrain or object deformation, and those things happen way earlier in the pipeline. In the end, you just add multiple syncs and waits to your image generation, and those will kill your efficiency.

That is not worth the effort. AFR is just a stupid way of using two GPUs that was fine when games were easier to run and simpler, but it is no longer a good solution now. What a multi-tile GPU needs is a way to send the work intelligently across multiple dies and a way to manage memory efficiently while doing it. Once you have that figured out, your solution becomes far more powerful and you don't need to deal with frame pacing issues, where things happen in the frame rendering process, etc.
Truly excellent explanation of why AFR is not a solution. I can't wait to see what these GPUs can do.
 
Joined
Mar 16, 2017
Messages
1,942 (0.72/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Ever since AMD made a Vega card with dual GPUs connected through Infinity Fabric for the 2019 Mac Pro, I've wondered if we'd see a return to MCM in the consumer space. Curious to see if they've overcome the past scaling issues, as I suspect this thing will present itself as a single block of resources to the OS.
 
Joined
Apr 21, 2005
Messages
174 (0.02/day)
Joined
Aug 20, 2007
Messages
21,078 (3.40/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
So what? :twitch: At least basic 8pin connectors do not catch fire like Nvidia's 12VHPWR connector.

Isn't AMD expected to use the same connectors?

And the hoopla around that is pretty questionable at this point.

Oooh, interesting, if a bit silly. But it will comfort some if nothing else.

Why was this undersized 12-pin connector even created with smaller pins, when existing 8-pin PCIe power connectors are capable of passing 300W/25A per connector?
Yeah, if you rewire them. And then people plug them into the old ports and get fun fire and component hazards.

Yes you can key them, but that's not always enough for some, as history has shown.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,044 (1.27/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
some companies, unlike nVidia, care about their reputation
This is ridiculous; of course they care about their reputation, how do you think they have so damn much mindshare? They have a reputation for producing the fastest gaming graphics cards (the fastest halo products sell a lot of lower SKUs), they have a rep for CUDA, datacentres, professional cards like Quadro, stable drivers, the list goes on. This is very much a reputation they work hard to maintain.

Are they all totally valid points for every possible buyer? Of course not. But what you're insinuating about their reputation for underhanded tactics, not caring about gamers, ripping people off, etc. isn't a consideration, or at least not a deal breaker, for the vast majority of their buyers; it's just how a vocal minority sees their reputation.
 
Joined
Sep 15, 2016
Messages
484 (0.17/day)
Of course AMD isn't using the 12VHPWR connector, who do you think is responsible for all of the negative PR surrounding it?
 

Iocedmyself

New Member
Joined
Oct 25, 2022
Messages
6 (0.01/day)
In what titles does a 6900XT have superior raster performance (by a non-insignificant margin, shall we say 10%+?) and do so consuming 200W while a 3090 consumes 350W? I don't think I've ever seen that.
Well, Metro Exodus comes to mind, Dirt 5, Forza 5, Resident Evil Village, Red Dead Redemption, to name a few that I know offer higher frame rates at 4K and 1440p. I'm not saying there aren't Nvidia-optimized titles (Horizon Zero Dawn, for instance) that offer better performance, but generally the 6900XT is within 10 fps of the 3090 in non-RT scenarios, using 150W less power, at a much lower price point.
More materially to my point, the 6900XT enjoyed advantages in the metrics I quoted to the tune of 88% and 61%. I don't think it enjoys a single win over the 3090 to the tune of even 61%, let alone 88%, but I'm sure if you dig hard enough you might find an unrealistic niche example or two where that might be the case.
At the end of the day, the only things that really matter are how much you have to pay for a given level of performance, and how much power is required to get there. The 3090 is 50% more expensive, with 50% higher power draw, to sometimes, in certain games, with certain settings, perform marginally better.

From what I know, the 6900XT enjoyed a minor lead at 1080p (less than 10% on average), roughly par at 1440p, and the 3090 enjoyed a minor lead at 4k (less than 10% on average)
It's very much title-dependent; some games are optimized in favor of one card or another, but overall the average findings favor the 6900XT.
That's the reality I remember, except most publications don't really test DLSS, at least not in like-for-like testing, because then it wouldn't be like for like... so the 3090 trounces a 6900XT for RT, and then you have DLSS to help even more.
That was true prior to FSR being a thing, but it's more commonplace now, particularly when FSR 2/2.1 can so easily be modded into any title with DLSS and used on pretty much any GPU made in the past 5 years. Yes, Nvidia has the lead in RT, and yes, I'm one of those idiots that spent stupid money buying a card during a pandemic in the EU specifically for that feature, simply because when I started playing around with CAD apps 25 years ago it took 36 hours to render a single RT light source on a blank background. While the nerd in me loves the fact that it's a thing, the reality is that it's RARELY noticeable in practice while gaming, beyond the fact that your performance has dropped by 2-3x. It's still in the realm of curiosity; it has been gaining traction over the past year, but I think it's going to be another year or two before we really see its full potential.

If I were you I'd brace for AMD being all too happy to follow this trend, hell, it's already started.
From what I know, the 79xx cards are going to have a power limit of 300-400W, with more than a 50% uplift in performance per watt and RT performance. In 2008 I had a pair of 4870 X2s, each of which consumed 285-350W for 2.4 teraflops of rendering performance, so AMD has managed to keep power consumption fairly consistent. Given that the 4090 costs $1,600, if you can find it in stock and don't care about maybe setting your computer on fire, then if AMD manages 75% of the performance for under $1,000 they'll be in a very good position.

Now, despite everything I've said, if money were no object and I didn't consider anything beyond the raw performance numbers, I'd buy Nvidia in a heartbeat. The end result is incredibly impressive; the means of achieving it is just disappointing.
 
Joined
May 31, 2016
Messages
4,412 (1.47/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
The main question is whether AMD has a superior product.
Superior in the sense that they can match or beat the competition while spending less.

If yes, I'd expect a CPU-like offensive.
The 4090 costs $1,600, which is more than $2k in Norway and probably across the EU. If the top SKU from AMD performs the same, or even slightly slower than the 4090, but is priced around $1k, it would be a hit. Something tells me it is a fool's errand to think that way. Whether AMD can produce the SKU at a lower cost remains to be seen, but even then it does not mean the end price will reflect it, because it easily may not.
 
Joined
Nov 20, 2015
Messages
18 (0.01/day)
Location
Italy
As always, waiting for the x8xx cards; they always have the better price-to-performance balance.
 

Rainy'sLearning

New Member
Joined
Oct 26, 2022
Messages
1 (0.00/day)
Guys, do you think I should buy the 6900 XT now at its all-time low of €750 on Amazon France? Or should I wait for the new releases first in order to get a better deal?
 
Joined
Jul 9, 2015
Messages
3,413 (1.03/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
who do you think is responsible for all of the negative PR surrounding it?
Whoever did a lousy job designing the connector, then TESTED it, figured out it was TERRIBLE, and still went on to launch 2000+ Euro cards with it?
Hm, who was that, let me think...

Guys, do you think I should buy the 6900 XT now at its all-time low of €750 on Amazon France? Or should I wait for the new releases first in order to get a better deal?
I'd wait, but there is a bit of gambling either way.

If this card is coming from AIBs, they DO KNOW what is coming.

I mean, have we ever had older AMD GPUs become more expensive after a new gen is released? (Planetary-level cryptobazinga doesn't count.)
 
Joined
Jun 8, 2022
Messages
368 (0.46/day)
Location
Ohio, USA
System Name Trackstar
Processor AMD Ryzen 7 5800X3D -30 All Core CO (on Corsair XC5 block)
Motherboard Gigabyte B550 AORUS Elite V2 Rev 1.0 (F17 BIOS)
Cooling Corsair XD5 pump / Corsair XR5 1x 360mm (front) + 1x 420mm (top) rads
Memory 32GB G.Skill DDR4-3600 CL14 1:1 (F4-3600C14Q-32GVKA kit)
Video Card(s) ASRock RX 6950XT OC Formula (on Bykski A-AR6900XTOCF-X block)
Storage WD_BLACK SN850X 2TB w/HS (FW ver. 620361WD)
Display(s) Dell S3222DGM 32" 1440p/165Hz FreeSync
Case Fractal Design Meshify S2
Audio Device(s) Realtek ALC1200 Integrated Audio
Power Supply Super Flower Leadex Platinum SE 1200W on Liebert GXT4-1500RT120 UPS
Mouse Corsair Nightsword RGB
Keyboard Corsair K60 RGB PRO
VR HMD N/A
Software Windows 11 Pro 23H2 (Build 22631.3958)
Benchmark Scores https://www.3dmark.com/sw/1131940 https://www.3dmark.com/fs/29315810
As always, waiting for the x8xx cards; they always have the better price-to-performance balance.
Oh yeah, got my Red Dragon 6800XT NIB for $549 about a month ago. With a mild overclock I'm getting numbers within 5% of a stock 6900XT. Absolutely killer 1440p performance and I was able to keep my 750W PSU :rockout:
 
Joined
Jan 24, 2011
Messages
274 (0.06/day)
Processor AMD Ryzen 5900X
Motherboard MSI MAG X570 Tomahawk
Cooling Dual custom loops
Memory 4x8GB G.SKILL Trident Z Neo 3200C14 B-Die
Video Card(s) AMD Radeon RX 6800XT Reference
Storage ADATA SX8200 480GB, Inland Premium 2TB, various HDDs
Display(s) MSI MAG341CQ
Case Meshify 2 XL
Audio Device(s) Schiit Fulla 3
Power Supply Super Flower Leadex Titanium SE 1000W
Mouse Glorious Model D
Keyboard Drop CTRL, lubed and filmed Halo Trues
Of course AMD isn't using the 12VHPWR connector, who do you think is responsible for all of the negative PR surrounding it?
It's definitely all of those AMD shills going out and buying $1600+ USD NVIDIA cards and intentionally destroying them.
 