6900 XT is about 40% faster than the 5700 XT
The Radeon RX 6900 XT delivers more than double the performance of the RX 5700 XT: 201% vs 100%.
AMD Radeon RX 5700 XT Specs | TechPowerUp GPU Database
System Name | Tiny the White Yeti |
---|---|
Processor | 7800X3D |
Motherboard | MSI MAG Mortar b650m wifi |
Cooling | CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3 |
Memory | 32GB Corsair Vengeance 30CL6000 |
Video Card(s) | ASRock RX7900XT Phantom Gaming |
Storage | Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB |
Display(s) | Gigabyte G34QWC (3440x1440) |
Case | Lian Li A3 mATX White |
Audio Device(s) | Harman Kardon AVR137 + 2.1 |
Power Supply | EVGA Supernova G2 750W |
Mouse | Steelseries Aerox 5 |
Keyboard | Lenovo Thinkpad Trackpoint II |
VR HMD | HD 420 - Green Edition ;) |
Software | W11 IoT Enterprise LTSC |
Benchmark Scores | Over 9000 |
No they wouldn't do it. They'd be cannibalizing their own roadmap and profit margins.
If there was some simple way to do it, both Nvidia and AMD (ATI) would have done it a long time ago.
Processor | Ryzen 7800X3D |
---|---|
Motherboard | ROG STRIX B650E-F GAMING WIFI |
Memory | 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5) |
Video Card(s) | INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2 |
Storage | 2TB Samsung 980 PRO, 4TB WD Black SN850X |
Display(s) | 42" LG C2 OLED, 27" ASUS PG279Q |
Case | Thermaltake Core P5 |
Power Supply | Fractal Design Ion+ Platinum 760W |
Mouse | Corsair Dark Core RGB Pro SE |
Keyboard | Corsair K100 RGB |
VR HMD | HTC Vive Cosmos |
Not only expensive to make - if they go full interposer, it probably doesn't matter all that much how many traces it has - but wide IF is not exactly power efficient.
But that would mean the Infinity Fabric link between the I/O die and the chiplet is huge. Right now, on die, AMD states that it's 16 x 64b for Navi 21. It would probably mean at least 12 x 2 x 64b for Navi 31. Not undoable, but I wonder how expensive it will be to make with an interposer.
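As a back-of-the-envelope check on the link widths quoted above (all rumored figures, not confirmed specs), here is a minimal sketch; the function name is mine, and the 64-bit lane width comes straight from the numbers in the post:

```python
# Rough fabric-width arithmetic from the figures quoted above
# (leaked/estimated, not confirmed specs).
def fabric_width_bits(links: int, lanes_per_link: int, bits_per_lane: int = 64) -> int:
    """Total Infinity Fabric width in bits for the given link layout."""
    return links * lanes_per_link * bits_per_lane

navi21 = fabric_width_bits(16, 1)   # on-die, 16 x 64b  -> 1024 bits
navi31 = fabric_width_bits(12, 2)   # hypothetical 12 x 2 x 64b -> 1536 bits
print(navi21, navi31, navi31 / navi21)  # 1024 1536 1.5
```

Pushing roughly 1.5x the width off-die instead of on-die is exactly why the power concern above matters.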
Processor | AMD Ryzen 5900X |
---|---|
Motherboard | MSI MAG X570 Tomahawk |
Cooling | Dual custom loops |
Memory | 4x8GB G.SKILL Trident Z Neo 3200C14 B-Die |
Video Card(s) | AMD Radeon RX 6800XT Reference |
Storage | ADATA SX8200 480GB, Inland Premium 2TB, various HDDs |
Display(s) | MSI MAG341CQ |
Case | Meshify 2 XL |
Audio Device(s) | Schiit Fulla 3 |
Power Supply | Super Flower Leadex Titanium SE 1000W |
Mouse | Glorious Model D |
Keyboard | Drop CTRL, lubed and filmed Halo Trues |
No they wouldn't do it. They'd be cannibalizing their own roadmap and profit margins.
Rather, the strategy is to postpone as many changes as possible to later generations, if the economic reality allows for such a thing. Look at GCN's development - you can conclude there weren't funds for targeted development, or you could say the priority wasn't there because 'AMD still had revenue'... and they still pissed away money. Look at the features that got postponed from Maxwell to Pascal - Nvidia simply didn't have to make a better 970 or 980 Ti, and Maxwell was already a very strong gen 'in the market at the time' - but they had the Pascal technology on the shelf already. Similarly, the move from Volta > Turing > Ampere is a string of massively delayed releases. It's no coincidence these 'delays' happened around the same years for both competitors. Another big factor to stall is the console release roadmap - Nvidia is learning the hard way right now, as they gambled on pre-empting the consoles with their own RTX. In the wild, we now see them use those tensor/RT cores primarily for non-RT workloads like DLSS because devs are primarily console-oriented, especially on big-budget/multiplatform titles. So we get lackluster RT implementations on PC.
So no... both companies are and will always be balancing on the edge of the bare minimum they must do to keep selling product. They want to leave as much in the tank for later, and would rather sell GPUs on 'new features' that are not hardware-based. Software, for example. Better drivers. Support for new APIs. Monitor technology. Shadowplay. Low-latency modes. New AA modes. Etc. None of this requires a new architecture, and there is nothing easier than refining what you have. It's what Nvidia has been doing for so long now, and what kept them on top. Minor tweaks to the architecture to support new tech, at best, and keep pushing the efficiency button.
Not only expensive to make - if they go full interposer it probably does not matter all too much how many traces it has - but wide IF is not exactly power efficient.
Companies don't sit there and try to trim the fat to decide the bare minimum they can get away with. They design products that are the best they can be within constraints such as power, die size, and the allowable time. The trimming and compromising comes with the products further down the stack, which are all derivatives of the top, "halo" product. The reason you end up with delays and evolutionary products instead of constant revolution is that, shockingly, this shit is hard! It takes hundreds of engineers and tens of thousands of engineer-hours to get these products out the door even when the designs are "just" derivatives.
The obvious counterpoint to this would be Intel and their stagnation for the near-decade between the release of Sandy Bridge and the competitive changes that arrived with Ryzen, but even that isn't an example of what you claim. Intel was working in an environment where their more advanced 10 and 7 nm process nodes were MASSIVELY delayed, throwing off their entire design cycle. The result was engineers laboring under an entirely different set of constraints, one of those being Intel's profit margin, but again, this isn't what you have been describing. It represents a ceiling for cost, but engineers do whatever they can within that constraint. The trimming and compromising comes as you move down the product stack, where that same sort of margin must be maintained and you have competitive concerns other than "this is what we thought was possible given the constraints we are under."
IF is only really expensive in terms of power when being pushed over the substrate. Utilizing an interposer or other technology like EFB (which is what will actually be used) reduces those power requirements tremendously.
System Name | Money Hole |
---|---|
Processor | Core i7 970 |
Motherboard | Asus P6T6 WS Revolution |
Cooling | Noctua UH-D14 |
Memory | 2133Mhz 12GB (3x4GB) Mushkin 998991 |
Video Card(s) | Sapphire Tri-X OC R9 290X |
Storage | Samsung 1TB 850 Evo |
Display(s) | 3x Acer KG240A 144hz |
Case | CM HAF 932 |
Audio Device(s) | ADI (onboard) |
Power Supply | Enermax Revolution 85+ 1050w |
Mouse | Logitech G602 |
Keyboard | Logitech G710+ |
Software | Windows 10 Professional x64 |
The Radeon RX 6900 XT delivers more than double the performance of the RX 5700 XT: 201% vs 100%.
AMD Radeon RX 5700 XT Specs | TechPowerUp GPU Database
Those figures aren't relative to each other, and they're on two different scales.
That's not relative to each other, and it's on two different scales. See the note at the bottom.
I still stand by what I said. If the improvement from a 6900 XT to a 7900 XT is more than 50% at 4K, I'll be shocked (and so will almost everyone else).
Like I said earlier, I don't know and neither does anyone else. It would be highly unlikely.
But why would you be comparing relative performance between a 5700 XT and a 6900 XT? They're positioned at opposite ends of the product stack. No shit, Sherlock - the difference is bigger between a midrange SKU from last year and the top end of the first real, full RDNA stack. It's also certainly 100% between the two. The scale there is the relative scale, simple.
This comparison makes absolutely no sense and has zero relation to the discussion of per-gen improvements. Rather, compare to the same tier GPU like a 6700 XT... and there's your 27%.
As for your general statement you're absolutely correct, and if 6900XT > 7900XT is >50% I'll eat a virtual shoe.
except money printing presses are much more innocent, at least they don't burn electricity 24/7
And optimized for mining
aka the legal way to get your own money printing press
System Name | RyzenGtEvo/ Asus strix scar II |
---|---|
Processor | Amd R5 5900X/ Intel 8750H |
Motherboard | Crosshair hero8 impact/Asus |
Cooling | 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK |
Memory | Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB |
Video Card(s) | Asus tuf RX7900XT /Rtx 2060 |
Storage | Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme |
Display(s) | Samsung UAE28"850R 4k freesync.dell shiter |
Case | Lianli 011 dynamic/strix scar2 |
Audio Device(s) | Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset |
Power Supply | corsair 1200Hxi/Asus stock |
Mouse | Roccat Kova/ Logitech G wireless |
Keyboard | Roccat Aimo 120 |
VR HMD | Oculus rift |
Software | Win 10 Pro |
Benchmark Scores | laptop Timespy 6506 |
Yes, because the mice turning the wheel running the machine eat cheese, not elecy?! We're not in the 1800s; I'm sure printers do use elecy now.
except money printing presses are much more innocent, at least they don't burn electricity 24/7
System Name | Upgraded CyberpowerPC Ultra 5 Elite Gaming PC |
---|---|
Processor | AMD Ryzen 7 5800X3D |
Motherboard | MSI B450M Pro-VDH Plus |
Cooling | Thermalright Peerless Assassin 120 SE |
Memory | CM4X8GD3000C16K4D (OC to CL14) |
Video Card(s) | XFX Speedster MERC RX 7800 XT |
Storage | TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB |
Display(s) | AOC Agon AG241QX 1440p 144Hz |
Case | Cooler Master MasterBox MB520 (CyberpowerPC variant) |
Power Supply | 600W Cooler Master |
The majority of leakers have said that the number of shaders per WGP is being doubled. It's 30 WGPs, 120 CUs, or 7,680 SPs for a single Navi 31 die, and 60 WGPs, 240 CUs, or 15,360 SPs for the dual-die module.
The top Navi 31 part allegedly features 60 workgroup processors (WGPs), or 120 compute units. Assuming an RDNA 3 CU still holds 64 stream processors, you're looking at 7,680 stream processors, a 50% increase over Navi 21.
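The shader math in the leak above can be sanity-checked in a few lines. The CU counts are the rumored figures from the post (only Navi 21's 80 CUs are a shipping spec), and the helper name is mine:

```python
# Sanity check on the leaked Navi 31 shader counts quoted above
# (rumored figures, not official specs).
SP_PER_CU = 64  # stream processors per compute unit, per the quote

def stream_processors(cus: int) -> int:
    """Total stream processors for a given compute-unit count."""
    return cus * SP_PER_CU

single_die = stream_processors(120)  # rumored single Navi 31 die -> 7680
dual_die = stream_processors(240)    # rumored dual-die module    -> 15360
navi21 = stream_processors(80)       # shipping Navi 21 (6900 XT) -> 5120
print(single_die / navi21 - 1)       # 0.5 -> the quoted 50% increase
```

So the quoted 50% figure follows directly from 120 CUs at 64 SPs each versus Navi 21's 80 CUs.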
Even when you had the 5870 (which was a doubling of the 4870), you didn't see a 100% increase over the previous generation. You saw (at best) a 50% increase in performance, and that was at 2560x1600 on a $1,000 USD monitor that very few had. A 40% increase at 1080p was the reality of that card.
That's not how percentages work.
6900 XT is about 40% faster than the 5700 XT.
Because AMD is doing the exact same thing again. The top dual-die Navi 31 card will be several tiers higher than the RX 6900 XT, just as the RX 6900 XT was several tiers higher than the RX 5700 XT.
But why would you be comparing relative performance between a 5700 XT and a 6900 XT? They're positioned at opposite ends of the product stack.
I'll eat a virtual shoe if it isn't. AMD would have to be very stingy with their product segmentation for that to happen, for example using a single Navi 32 die or a heavily cut-down Navi 31.
As for your general statement, you're absolutely correct, and if the 6900 XT > 7900 XT jump is >50%, I'll eat a virtual shoe.
The majority of leakers have said that the number of shaders per WGP is being doubled. It's 30WGPs, 120 CUs, or 7680SPs for a single Navi 31 die, and 60 WGPs, 240 CUs, or 15360SPs for the dual-die module.
That's not how percentages work.
100% is not 40% more than 60%; it's 66% more. The HD 4870 is 40% slower than the HD 5870, which means the HD 5870 is 66% faster than the HD 4870. Scaling isn't perfect of course, but much of that is because it's limited by memory bandwidth. If the HD 5870 had twice the bandwidth of the HD 4870 rather than only 30% more, it would be closer to 80-90% faster. Navi 31 might have a similar issue to an extent, but the much larger Infinity Cache can make up for at least part of the bandwidth deficit, and Samsung's got new 24 Gbps GDDR6 chips (50% faster than on the 6900 XT).
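The percentage asymmetry described above is easy to get wrong, so here is a minimal sketch of the two directions; the helper names are mine:

```python
# "X% slower" and "X% faster" are not symmetric, as explained above.
def pct_slower(a: float, b: float) -> float:
    """How much slower a is than b, as a fraction of b."""
    return 1 - a / b

def pct_faster(b: float, a: float) -> float:
    """How much faster b is than a, as a fraction of a."""
    return b / a - 1

# HD 4870 at 60% of an HD 5870's performance:
print(pct_slower(60, 100))  # 0.4   -> the 4870 is 40% slower
print(pct_faster(100, 60))  # 0.666... -> the 5870 is ~66% faster
```

The denominators differ (the faster card's score versus the slower card's), which is exactly why "40% slower" flips to "66% faster".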
Because AMD is doing the exact same thing again. The top dual-die Navi 31 card will be several tiers higher than the RX 6900 XT, just as the RX 6900 XT was several tiers higher than the RX 5700 XT.
I'll eat a virtual shoe if it isn't. AMD would have to be very stingy with their product segmentation for that to happen, for example using a single Navi 32 die or a heavily cut-down Navi 31.
And if the top RDNA 3 card (which might be called "RX 7950 XT", "RX 7900 X2", or possibly a completely new name similar to Nvidia's Titan series) isn't >50% faster than the 6900 XT, I'll eat a literal shoe. I expect it to be well over 100% faster, though 150% faster is debatable.
Honestly, I don't understand you guys. AMD is going to approximately double the die area (by using two dies of approximately the same size as Navi 21) while shrinking to N5. How is it not going to double performance? Why is this even a question? The top RDNA3 card is likely to have an MSRP of $2500, possibly even higher, but even if you're talking about performance at the same price point, 50% better is not unrealistic at all.
I think the key there is that, before, AMD and Nvidia thought the ceiling for a GPU was around $500-700. They now see they can sell a lot of GPUs at $2,500.
I suspect they will both just design GPUs meant to be sold at those prices without the current markup. This way they can continue their performance war. We will see if it leads to increased performance at the lower end of the price range...
System Name | 1. Glasshouse 2. Odin OneEye |
---|---|
Processor | 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X |
Motherboard | 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E |
Cooling | 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b |
Memory | 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36 |
Video Card(s) | 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX |
Storage | 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB) |
Display(s) | 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz |
Case | 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO |
Audio Device(s) | 1. Creative Speakers 2. Built in LG monitor speakers |
Power Supply | 1. Corsair RM850x 2. Superflower Titanium 1600W |
Mouse | 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black) |
Keyboard | Leopold High End Mechanical |
Software | Windows 11 |
I respectfully disagree. AMD is coming from a different place. Their market share in the GPU space is small compared to the behemoth that is Nvidia. I suspect they know that to shake things up they need to produce a clearly superior product. They will be pulling out all the stops, in my view.
No they wouldn't do it. They'd be cannibalizing their own roadmap and profit margins.
I would say I could always buy a GPU, but only bottom rung or top, with anything in the middle priced like the top end in shops. I could always get one. My thinking being: right, I'll pay more, but then I want more than what's offered in performance - like next generation, hopefully.
Crypto and a pandemic are selling higher-priced GPUs now. Gamers are generally left out in the cold here.
There are huge amounts of GPUs in the hands of non-gamers, but many gamers did get their hands on a 3090, or to a lesser extent a 6900 XT, even at overinflated prices.
Crypto and a pandemic are selling higher-priced GPUs now. Gamers are generally left out in the cold here.