
AMD Radeon "Navi 3x" Could See 50% Increase in Shaders, Double the Cache Memory

ARF

6900 XT is about 40% faster than 5700 XT

The Radeon RX 6900 XT is more than double the performance of the RX 5700 XT: 201% vs 100%.


AMD Radeon RX 5700 XT Specs | TechPowerUp GPU Database
 
If there were some simple way to do it, both Nvidia and AMD (ATI) would have done it a long time ago.
No, they wouldn't do it. They'd be cannibalizing their own roadmap and profit margins.

Rather, the strategy is to postpone as many changes as possible to later generations, if the economic reality allows for such a thing. Look at GCN's development: you can conclude there weren't funds for targeted development, or you could say the priority wasn't there because 'AMD still had revenue'... and they still pissed away money. Look at the features that got postponed from Maxwell to Pascal - Nvidia simply didn't have to make a better 970 or 980 Ti, and Maxwell was already a very strong generation in the market at the time - but they had the Pascal technology on the shelf already. Similarly, the move from Volta > Turing > Ampere is a string of massively delayed releases. It's no coincidence these 'delays' happened around the same years for both competitors. Another big factor to stall is the console release roadmap - Nvidia is learning the hard way right now, as they gambled on pre-empting the consoles with their own RTX. In the wild, we now see them use those tensor/RT cores primarily for non-RT workloads like DLSS, because devs are primarily console oriented, especially on big-budget/multiplatform titles. So we get lackluster RT implementations on PC.

So no... both companies are and will always be balancing on the edge of the bare minimum they must do to keep selling product. They want to leave as much in the tank for later, and would rather sell GPUs on 'new features' that are not hardware based: software, for example. Better drivers. Support for new APIs. Monitor technology. Shadowplay. Low-latency modes. New AA modes. Etc. None of this requires a new architecture, and there is nothing easier than refining what you have. It's what Nvidia has been doing for so long now, and what kept them on top: minor tweaks to the architecture to support new tech, at best, while pushing the efficiency button.
 
But that would mean the Infinity Fabric link between the I/O die and the chiplet is huge. Right now, on die, AMD states that it's 16 x 64b for Navi 21; it would probably mean at least 12 x 2 x 64b for Navi 31. Not undoable, but I wonder how expensive it will be to make with an interposer.
Not only expensive to make - if they go full interposer, it probably does not matter all that much how many traces it has - but wide IF is not exactly power-efficient.
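For a rough sense of the widths being discussed, here is a minimal sketch (the 16 x 64b Navi 21 figure and the 12 x 2 x 64b Navi 31 layout are the ones quoted above - thread speculation, not official specs):

```python
# Back-of-the-envelope Infinity Fabric width comparison, using the
# figures quoted in this thread (not official AMD specs).

def total_width_bits(links: int, bits_per_link: int) -> int:
    """Aggregate fabric width in bits."""
    return links * bits_per_link

navi21 = total_width_bits(16, 64)      # on-die Navi 21: 16 x 64b = 1024b
navi31 = total_width_bits(12 * 2, 64)  # speculated Navi 31: 12 x 2 x 64b = 1536b

print(f"Navi 21: {navi21} bits")
print(f"Navi 31: {navi31} bits ({navi31 / navi21:.1f}x wider)")
# 1.5x the width, off-die instead of on-die - hence the power-efficiency concern.
```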
 
No, they wouldn't do it. They'd be cannibalizing their own roadmap and profit margins. [...]

Companies don't sit there and try to trim the fat to decide the bare minimum they can get away with. They design products that are the best they can be within other constraints such as power, die size, and within the allowable time. The trimming and compromising comes with the products further down the stack, which are all derivatives of the top, "halo" product. The reason you end up with delays and evolutionary products instead of constant revolution is because, shockingly, this shit is hard! It takes hundreds of engineers and tens of thousands of engineer-hours to get these products out the door even when the designs are "just" derivatives.

The obvious counterpoint to this would be Intel and their stagnation for the near-decade between the release of Sandy Bridge and the competitive changes that arrived with Ryzen, but even that isn't an example of what you claim. Intel was working in an environment where their more advanced 10 and 7 nm process nodes were MASSIVELY delayed, throwing off their entire design cycle. The result was engineers laboring under an entirely different set of constraints, one of them being Intel's profit margin - but again, this isn't what you have been describing. It represents a ceiling for cost, but engineers do whatever they can within that constraint. The trimming and compromising comes as you move down the product stack, where that same sort of margin must be maintained and you have competitive concerns beyond "this is what we thought was possible given the constraints we are under."
Not only expensive to make - if they go full interposer, it probably does not matter all that much how many traces it has - but wide IF is not exactly power-efficient.

IF is only really expensive in terms of power when being pushed over the substrate. Utilizing an interposer or other technology like EFB (which is what will actually be used) reduces those power requirements tremendously.
 
Companies don't sit there and try to trim the fat to decide the bare minimum they can get away with. [...]

IF is only really expensive in terms of power when being pushed over the substrate. [...]

I agree with your points too, don't get me wrong, but our arguments are not mutually exclusive. AMD and Nvidia serve the same market and watch each other closely. They take an educated, informed risk with the products they launch - as much as engineering and yield matter, marketing, time to market, and the potential advance of the competition are factors just as well.

And the trimming certainly happens even at the top of the stack! Even now Nvidia is serving up incomplete GA102 dies in its halo product while enterprise gets the perfect ones. Maxwell, same story - and we know the 980 Ti was juuuuust a hair ahead of the Fury X. Coincidence? Of course not.

And in terms of delays... you say Intel. I say Nvidia (post-)Pascal. Maxwell had Pascal features cut and delayed... then Turing took its sweet time and barely moved price/perf forward... while AMD was still rebranding GCN and later formulating an answer to 1080 Ti performance. Coincidence?! ;)
 
The Radeon RX 6900 XT is more than double the performance of the RX 5700 XT: 201% vs 100%.

AMD Radeon RX 5700 XT Specs | TechPowerUp GPU Database

That's not relative to each other, and it's on two different scales. See the note at the bottom.

I still stand by what I said. If the improvement is more than 50% at 4K from a 6900 XT to a 7900 XT, I'll be shocked (and so will almost everyone else).

Like I said earlier, I don't know, and neither does anyone else. It would be highly unlikely.
 
That's not relative to each other, and it's on two different scales. [...]

I still stand by what I said. If the improvement is more than 50% at 4K from a 6900 XT to a 7900 XT, I'll be shocked (and so will almost everyone else). [...]

But why would you be comparing relative perf between a 5700 XT and a 6900 XT? They're positioned at opposite ends of the product stack. No shit, Sherlock - the difference is bigger between last year's midrange SKU and the top end of the first real, full RDNA stack :D It's also certainly 100% between the two; the scale there is the relative scale, simple.

This comparison makes absolutely no sense and has zero relation to the discussion of per-gen improvements. Rather, compare to the same-tier GPU, the 6700 XT... and there's your 27%.

As for your general statement, you're absolutely correct - if the 6900 XT > 7900 XT jump is more than 50%, I'll eat a virtual shoe.
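To make the same-tier point concrete, here is a minimal sketch using the relative-performance figures cited in this thread (5700 XT = 100%, 6700 XT = 127%, 6900 XT = 201%; thread numbers, not fresh benchmarks):

```python
# Generational uplift depends on which tiers you compare.
# Relative performance as cited in this thread (RX 5700 XT = 100).
relative_perf = {
    "RX 5700 XT": 100,  # RDNA1 midrange baseline
    "RX 6700 XT": 127,  # RDNA2, same tier
    "RX 6900 XT": 201,  # RDNA2 halo part
}

def uplift(new: str, old: str) -> float:
    """Percent speedup of `new` over `old`."""
    return (relative_perf[new] / relative_perf[old] - 1) * 100

print(f"Same tier:  6700 XT vs 5700 XT = +{uplift('RX 6700 XT', 'RX 5700 XT'):.0f}%")  # +27%
print(f"Cross tier: 6900 XT vs 5700 XT = +{uplift('RX 6900 XT', 'RX 5700 XT'):.0f}%")  # +101%
```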
 

ARF

But why would you be comparing relative perf between a 5700 XT and a 6900 XT? They're positioned at opposite ends of the product stack. [...]

The RX 6700 XT is on the same old TSMC N7 node as the RX 5700 XT; that is why you don't see the normal generational improvement from a die shrink. It is 251 sq. mm (Navi 10) vs 335 sq. mm (Navi 22).

You have to compare either the old ~250 sq. mm N7 Navi 10 to a new ~250 sq. mm N5 Navi 33, or the old 335 sq. mm N7 Navi 22 to a new ~335 sq. mm N5 Navi 32.
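As a rough sanity check on that, here is a perf-per-area sketch using the die areas above and the relative-performance figures cited earlier in the thread (it deliberately ignores clocks, memory, and the Infinity Cache that Navi 22 carries):

```python
# Performance per sq. mm on the same N7 node, using figures from this thread.
cards = {
    # name: (die area in sq. mm, relative perf with RX 5700 XT = 100)
    "RX 5700 XT (Navi 10, N7)": (251, 100),
    "RX 6700 XT (Navi 22, N7)": (335, 127),
}

for name, (area, perf) in cards.items():
    print(f"{name}: {perf / area:.2f} perf per sq. mm")
# ~0.40 vs ~0.38 - essentially flat. On the same node, the extra
# performance is mostly bought with extra area, which is the point above.
```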
 
But why would you be comparing relative perf between a 5700 XT and a 6900 XT? They're positioned at opposite ends of the product stack. [...]

I was using it to compare generational improvements, as the 6900 XT is a doubling of the 5700 XT (except for the bus) when looking at the cores, TMUs, and ROPs. Outside of the clock speed difference, you're looking at the changes in architecture. I understand your point about the 6700 XT, though - I didn't realize it was out yet; with GPU prices still at absurd levels, I just haven't been paying attention any more.

Oddly enough, I was looking through what the price rumors were for RDNA2, to see how close they came to reality and what the odds are of the RDNA3 price rumors being correct.

What are the odds that AMD will pull an Nvidia and showcase the cards at 4K and say "see, 2x improvement!"? Probably pretty good.
 
Except money printing presses are much more innocent; at least they don't burn electricity 24/7.
Yes, because the mice turning the wheel that runs the machine eat cheese, not electricity?! We are not in the 1800s - I'm sure printing presses DO use electricity now.

I mean, when you aren't printing money the machine is off and not using power, but start printing money and it will use power, no?!


Where can I get a money printer, anyway?!

I need one to buy my next GPU anyway, clearly.
 
The top Navi 31 part allegedly features 60 workgroup processors (WGPs), or 120 compute units. Assuming an RDNA3 CU still holds 64 stream processors, you're looking at 7,680 stream processors, a 50% increase over Navi 21.
The majority of leakers have said that the number of shaders per WGP is being doubled. It's 30 WGPs, 120 CUs, or 7,680 SPs for a single Navi 31 die, and 60 WGPs, 240 CUs, or 15,360 SPs for the dual-die module.
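The two readings of the leak reconcile like this (a minimal sketch: 2 CUs per WGP and 64 SPs per CU are the standard RDNA baseline, the per-WGP doubling is the leakers' claim, and the leaked "120 CUs" are then CU-equivalents):

```python
# Standard RDNA baseline: 1 WGP = 2 CUs, 1 CU = 64 stream processors (SPs).
SP_PER_CU = 64
CU_PER_WGP = 2

def sp_count(wgps: int, per_wgp_multiplier: int = 1) -> int:
    """Total SPs for a WGP count, with an optional per-WGP shader doubling."""
    return wgps * CU_PER_WGP * SP_PER_CU * per_wgp_multiplier

print(sp_count(60))     # article's reading: 60 classic WGPs -> 7680 SPs
print(sp_count(30, 2))  # leakers' reading: 30 doubled WGPs  -> 7680 SPs
print(sp_count(60, 2))  # dual-die module:  60 doubled WGPs  -> 15360 SPs
```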

Even when you had the 5870 (which was a doubling of the 4870), you didn't see a 100% increase over the previous generation. You only saw (at best) a 50% increase in performance, and that was at 2560x1600, which was on a $1,000 USD monitor that very few had. A 40% increase at 1080p was the reality of that card.
6900 XT is about 40% faster than 5700 XT.
That's not how percentages work.

100% is not 40% more than 60%, it's 66% more. The HD 4870 is 40% slower than the HD 5870, which means the HD 5870 is 66% faster than the HD 4870. Scaling isn't perfect of course, but much of the reason is that it's limited by memory bandwidth. If the HD 5870 had twice the bandwidth of the HD 4870 rather than only 30% more, it would be closer to 80-90% faster. Navi 31 might have a similar issue to an extent, but the much larger Infinity Cache can make up for at least part of the bandwidth deficit, and Samsung's got new 24 Gbps GDDR6 chips (50% faster than those on the 6900 XT).
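Since the slower/faster asymmetry trips people up, here is the conversion spelled out (pure arithmetic, no benchmark data):

```python
def faster_from_slower(slower_pct: float) -> float:
    """If B is slower_pct% slower than A, how much faster is A than B?"""
    return (1 / (1 - slower_pct / 100) - 1) * 100

print(f"{faster_from_slower(40):.1f}% faster")  # 40% slower -> ~66.7% faster
print(f"{faster_from_slower(50):.1f}% faster")  # 50% slower -> 100% faster (2x)
```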

But why would you be comparing relative perf between a 5700 XT and a 6900 XT? They're positioned at opposite ends of the product stack.
Because AMD is doing the exact same thing again. The top dual-die Navi 31 card will be several tiers higher than the RX 6900 XT, just as the RX 6900 XT was several tiers higher than the RX 5700 XT.

As for your general statement, you're absolutely correct - if the 6900 XT > 7900 XT jump is more than 50%, I'll eat a virtual shoe.
I'll eat a virtual shoe if it isn't. AMD would have to be very stingy with their product segmentation for that to happen, for example using a single Navi 32 die or a heavily cut-down Navi 31.

And if the top RDNA 3 card (which might be called "RX 7950 XT", "RX 7900 X2", or possibly a completely new name similar to Nvidia's Titan series) isn't >50% faster than the 6900 XT, I'll eat a literal shoe. I expect it to be well over 100% faster, though 150% faster is debatable.

Honestly, I don't understand you guys. AMD is going to approximately double the die area (by using two dies of approximately the same size as Navi 21) while shrinking to N5. How is it not going to double performance? Why is this even a question? The top RDNA3 card is likely to have an MSRP of $2500, possibly even higher, but even if you're talking about performance at the same price point, 50% better is not unrealistic at all.
 
The majority of leakers have said that the number of shaders per WGP is being doubled. [...]

Honestly, I don't understand you guys. AMD is going to approximately double the die area (by using two dies of approximately the same size as Navi 21) while shrinking to N5. How is it not going to double performance? [...]

Okay. Let's see if 20 years' worth of GPU history gets changed with AMD's next magical gen :D

It's not like we haven't been at that notion before lmao

Be careful with the mix-up of 'same price point' versus 'same tier' - an important difference. The last time we saw +50% is when Nvidia introduced a price hike in the Pascal lineup. As for unobtainium $2,500 GPUs, those are of zero relevance to a 'normal' consumer gaming stack. As are those at $1K.
 
I think the key there is that before, AMD and Nvidia thought the ceiling for GPUs was around $500-700. They now see they can sell a lot of GPUs at $2,500.

I suspect they will both just design GPUs made to be sold at those prices without the current markup. This way they can continue their performance wars. We will see if it leads to increased performance at the lower end of the price range...
 
I think the key there is that before, AMD and Nvidia thought the ceiling for GPUs was around $500-700. They now see they can sell a lot of GPUs at $2,500. [...]

Crypto and a pandemic are selling higher-priced GPUs now. Gamers are generally left out in the cold here.
 
No, they wouldn't do it. They'd be cannibalizing their own roadmap and profit margins. [...]
I respectfully disagree. AMD is coming from a different place. Their market share in the GPU space is small compared to the behemoth that is Nvidia. I suspect they know that to shake things up they need to produce a clearly superior product. They will be pulling out all the stops, in my view.
 
Crypto and a pandemic are selling higher-priced GPUs now. Gamers are generally left out in the cold here.
I would say I could always buy a GPU, but only bottom rung or top, with anything in the middle priced like the top in shops. But I could always get one. My thinking being: right, I'll pay more, but I want more than what's offered in performance - like next generation, hopefully.
 
I respectfully disagree. AMD is coming from a different place. Their market share in the GPU space is small compared to the behemoth that is Nvidia. I suspect they know that to shake things up they need to produce a clearly superior product. They will be pulling out all the stops, in my view.

Maybe they will now? Wait-and-see mode. We have already seen AMD doesn't care enough about market share to get serious about the iGPU in Ryzen mobile or its APUs... we have also seen them rebrand GCN a lot, and RDNA2 isn't exactly attempting to win market share either, with competitive pricing OR a lot of cards in stock. Nvidia still sells more GPUs simply because it moves more units into the market. AMD apparently still has a less effective sales pitch, or makes fewer GPUs altogether.

So... it would be a first to see AMD compete aggressively to steal market share from Nvidia. So far I haven't seen the slightest move in that direction post-Hawaii.
 
Crypto and a pandemic are selling higher-priced GPUs now. Gamers are generally left out in the cold here.
There are huge amounts of GPUs in the hands of non-gamers, but there are many gamers who got their hands on a 3090 or, to a lesser extent, a 6900 XT, even at overinflated prices.

Most gamers will probably not spend $2K+ on a graphics card, but I am sure there are many who would pay that, if not more, just to get the best of the best. I don't know exactly how many of those people there are, but I suspect there are now enough to justify making cards for that market specifically.

That does not change the fact that cards right now are overinflated, but that won't last forever, and when this is over, there will still be plenty of people buying $2K+ cards.

And in the end, it wouldn't really matter if the top card now sells at those prices, as long as we got real performance gains at a reasonable price point like $250.

I recall hearing that Nvidia was surprised by how many Titan RTX cards were in the hands of gamers. They made the 3090 and gamers still ask for more. So why not please the people with way too much money (or way too few hobbies other than PC gaming...)?
 