
AMD RDNA3 Navi 31 GPU Block Diagram Leaked, Confirmed to be PCIe Gen 4

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,718 (2.33/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
This modern trend needs to die or I'm not updating my pc, simple.

Though to be fair, x8 doesn't lose that much performance.

Also, that's not right; you get 1x16, 1x NVMe and the southbridge, unless you want two NVMe drives attached directly to the CPU, no?
Intel only does x16 PCIe 5.0 lanes for the "GPU", but board makers are splitting it into x8 for the "GPU" and x4 for one M.2 slot on a lot of boards, with four lanes being lost. Some higher-end boards get two PCIe 5.0 M.2 slots.

As AMD has two times x4 "spare" PCIe lanes (plus four to the chipset), it's not an issue on most AM5 boards, but there are some odd boards that still split the PCIe 5.0 lanes to the "GPU" and use them for M.2 slots.

You could have 2-3 NVMe drives in the same system. That said, you'd also need a motherboard that only wires 4-8 PCIe5 lanes to the GPU slot, freeing the rest for your SSDs.

And really, we've had this discussion since PCIe2: every time a new PCIe revision comes out, it offers more bandwidth than the GPUs need. It's ok to not use the latest and greatest.
Yes, x8 PCIe 5.0 lanes are in theory equivalent to x16 PCIe 4.0 lanes; the issue is that an x16 PCIe 4.0 card ends up as a PCIe 4.0 x8 card in a PCIe 5.0 x8 slot.
The PCIe spec doesn't allow eight lanes to magically end up being 16 or 32 lanes of a lower speed grade; that requires an additional, costly switch chip.

As for the slot mix, with bifurcation it's not a problem to split the lanes on the fly, so as long as you don't use the M.2 slots, your x16 slot remains x16.

Completely separate 1x 5.0 x16 for the GPU and 2x 5.0 x4 for NVMe would be the best solution, whether it's AMD's or Intel's boards.
AMD allows that, Intel does not as yet.
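To make the lane-negotiation point concrete, here's a minimal sketch (an illustration, not vendor code) of how a PCIe link settles on the lowest common generation and the narrowest common width; the per-lane throughput figures are approximate GB/s per direction after encoding overhead.

```python
# Illustrative sketch: a PCIe link trains to the lowest common generation
# and the narrowest common width supported by both ends, so a Gen4 x16
# card in a Gen5 x8 slot runs as Gen4 x8 - half the card's design bandwidth.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # approx. GB/s per lane, one direction

def negotiated_link(card_gen, card_width, slot_gen, slot_width):
    gen = min(card_gen, slot_gen)        # fall back to the slower generation
    width = min(card_width, slot_width)  # and to the narrower link
    return gen, width, GBPS_PER_LANE[gen] * width

print(negotiated_link(4, 16, 4, 16))  # Gen4 x16 card, full x16 slot: ~31.5 GB/s
print(negotiated_link(4, 16, 5, 8))   # same card in a Gen5 x8 slot: Gen4 x8, ~15.8 GB/s
print(negotiated_link(5, 8, 5, 8))    # a native Gen5 x8 device keeps ~31.5 GB/s
```

Nothing in the protocol lets those eight Gen5 lanes present themselves to the card as sixteen Gen4 lanes; only a separate PCIe switch chip can do that fan-out, which is the extra cost mentioned above.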
 
Joined
Dec 30, 2010
Messages
2,147 (0.43/day)
Yes, but PCIe 5.0 would come in handy in cases where you can't have all 16 lanes for the GPU, and those cases are not so rare on the latest motherboards.

A good high-end AM4 or AM5 board is capable of providing enough bandwidth.

If you really need more, Threadripper is your friend, or even a decent EPYC platform. Both excel at providing more lanes.
 
Joined
Mar 10, 2010
Messages
11,878 (2.27/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Intel only does x16 PCIe 5.0 lanes for the "GPU", but board makers are splitting it into x8 for the "GPU" and x4 for one M.2 slot on a lot of boards, with four lanes being lost. Some higher-end boards get two PCIe 5.0 M.2 slots.

As AMD has two times x4 "spare" PCIe lanes (plus four to the chipset), it's not an issue on most AM5 boards, but there are some odd boards that still split the PCIe 5.0 lanes to the "GPU" and use them for M.2 slots.


Yes, x8 PCIe 5.0 lanes are in theory equivalent to x16 PCIe 4.0 lanes; the issue is that an x16 PCIe 4.0 card ends up as a PCIe 4.0 x8 card in a PCIe 5.0 x8 slot.
The PCIe spec doesn't allow eight lanes to magically end up being 16 or 32 lanes of a lower speed grade; that requires an additional, costly switch chip.

As for the slot mix, with bifurcation it's not a problem to split the lanes on the fly, so as long as you don't use the M.2 slots, your x16 slot remains x16.


AMD allows that, Intel does not as yet.
Fair enough, another reason Intel's parts are not good enough for me then.
 

Hxx

Joined
Dec 5, 2013
Messages
285 (0.07/day)
Since the PCIe 3 vs. 4 difference is minimal for a 4090, I'm assuming the same goes for AMD's offerings. Am I missing something?
 
  • Like
Reactions: bug

ARF

Joined
Jan 28, 2020
Messages
4,343 (2.66/day)
Location
Ex-usa | slava the trolls
8900 will be much better maybe with double cache (196 :eek:).

:kookoo:
If you wait for it, you might end up with grey hairs; don't expect any 8900 before 2025 at the earliest :D
You know Moore's law is dead, and it's extremely expensive to use state-of-the-art manufacturing processes IF they exist at all :D


But good design by AMD - don't waste your time on ridiculous and needless inflated PCIe version digits :D
 

bug

Joined
May 22, 2015
Messages
13,457 (4.02/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
But good design by AMD - don't waste your time on ridiculous and needless inflated PCIe version digits :D
On the other hand, what kind of message do you send when you support PCIe5 in your CPUs, but not GPUs? Is it needed? Is it a gimmick? Is it cost-effective? Is it too expensive?
 

ARF

Joined
Jan 28, 2020
Messages
4,343 (2.66/day)
Location
Ex-usa | slava the trolls
On the other hand, what kind of message do you send when you support PCIe5 in your CPUs, but not GPUs? Is it needed? Is it a gimmick? Is it cost-effective? Is it too expensive?

That maybe one should start using water-cooled PCIe 5.0 SSDs :rolleyes:


Alphacool Unveils HDX Pro Water M.2 NVMe SSD Water Block | TechPowerUp
 
Joined
Oct 21, 2007
Messages
1,491 (0.24/day)
Location
Ozark, Alabama
System Name Mine, spare gamer, comp,
Processor 5800x, 5800x. 3600,
Motherboard Gigabyte X570 Elite wifi, Gigabyte B550m wifi, Asus x570 Gaming wifi
Cooling Ventoo V5, Coolermaster 240 mm AIO, Ventoo V5
Memory 32 GB 3600 ddr4, 32 GB 3600 ddr4, 16 GB 3200 ddr4
Video Card(s) 6800 XT reference, XFX 5700XT Thicc III, EVGA RTX 2080,
Storage Samsung 500 GB 980, Intel M.2 500 GB, Intel M.2 500 gb
Display(s) TCL 43 inch 4k, 27 inch 1440 Shimian, 27 Inch Dell 1920x1080,
Case 2-Antec 280, Matrexx 50
Audio Device(s) Bluetooth output to Sony STR-DH190 into Sony SS-CS5 speakers, same, onboard, same
Power Supply Thermaltake 1350 , Seasonic 650 Watt Platinum, Deepcool 750 Watt Gold
Mouse Corsair, Sentinel Advance, Zio Levetron, CMStorm, Transformer
Keyboard G15, , CMStorm, G15
Software Win 10, POP OS
Joined
Oct 12, 2005
Messages
685 (0.10/day)
The "advanced chiplets design" would only be "disruptive" vs Monolithic if AMD either
1. Passed on the cost savings to consumers (prices are same for xtx, and 7900xt is worse cu count % wise to top sku than 6800xt was to 6900xt)
2. Actually used the chiplets to have more cu, instead of just putting memory controllers on there. Thereby maybe actually competing in RTRT performance or vs the 4090.

AMD is neither taking advantage of their cost savings and going the value route, nor competing against their competition's best. They're releasing a product months later, with zero performance advantages vs the competition? Potential 4080ti will still be faster and they've just given Nvidia free rein (once again) to charge whatever they want for flagships since there's zero competition.

7950xtx could have two 7900xtx dies, but I doubt it

It can be disruptive without being either of those cases.

If AMD is able to sell the 7900XTX in volume at $900 with a huge margin, they will become even more profitable: they will have more R&D budget for future generations and more budget to buy new wafers, while also needing fewer of them. They will be able to produce more cards for less, and they will be able to wage price wars and win long term.

Right now, Nvidia must keep their mindshare as intact as possible, and must make sure to win the flagship battle no matter the cost to preserve it. Their current chip is huge and that affects the defect rate. With TSMC's currently advertised defect rate for 5/4 nm, they probably have around 56% yield, or 46 good dies out of 82 candidates. Depending on where the defects are, the yield could be a bit higher if they can disable a portion of the chip, but don't expect anything huge.


AMD, on the other hand, can produce 135 good dies per wafer out of 180 candidates, for a 74.6% yield. Then you need the MCDs, which are on a much cheaper node; on those, you would get 1556 good dies out of 1614 candidates, for a 96.34% yield. Another advantage is that it's probably the same MCD for Navi 31 and Navi 32, allowing AMD to quickly switch them to wherever they are most needed.

AMD can produce around 260 7950XTXs with two 5 nm wafers and one 6 nm wafer.
Nvidia can produce around 150 4090s with three 4 nm wafers.

The best comparison would be with AD103: out of 139 candidates, there would be 96 good dies, for a 69.3% yield. With three 4 nm wafers, that means 288 cards.
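For reference, the yield arithmetic above can be reproduced with the classic Poisson model Y = exp(-D x A) plus a simple dies-per-wafer estimate. The die areas and the ~0.09 defects/cm² below are illustrative assumptions that land in the same ballpark as the numbers quoted in this post; they are not official TSMC, AMD or NVIDIA figures.

```python
# Rough Poisson yield sketch (illustrative assumptions, not official data).
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.09  # assumed defect density for the leading-edge node

def dies_per_wafer(die_area_mm2):
    radius = WAFER_DIAMETER_MM / 2
    # gross dies minus a simple edge-loss correction term
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2):
    return math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100)  # mm^2 -> cm^2

for name, area_mm2 in [("AD102 (~608 mm^2)", 608),
                       ("AD103 (~379 mm^2)", 379),
                       ("Navi 31 GCD (~306 mm^2)", 306),
                       ("Navi 31 MCD (~38 mm^2)", 38)]:
    dpw = dies_per_wafer(area_mm2)
    y = poisson_yield(area_mm2)
    print(f"{name}: ~{dpw} candidates/wafer, ~{y:.0%} yield, ~{int(dpw * y)} good dies")
```

Since yield falls exponentially with die area in this model, moving the cache and memory controllers off the expensive node is exactly where the chiplet savings come from.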


Then we add the cost of each node: 5 and 4 nm cost way more than 6/7 nm. I was not able to find accurate 2022/2023 prices, and in the end it would come down to what AMD and Nvidia negotiated. But these days, people are moving to 5 nm and there is capacity available on 7/6 nm (it's also why they are still producing Zen 3 in volume). Rumors are that a TSMC 5 nm wafer costs around $17k and a 6 nm wafer around $7-9k, but take this with a grain of salt (and each vendor has their own negotiated price too).

So right now, it looks like, ray tracing aside, they are able to produce close to as many 4080-class cards with one of their wafers on a much cheaper node. They can also reuse a portion of that wafer and allocate it to other GPU models easily.

This is what is disruptive. It's disruptive for AMD, not for the end users (sadly), but over time those advantages will add up and Nvidia will suffer: their margins will shrink, etc. But Nvidia is huge and rich. This will take many generations, and Nvidia will probably have time to get their own chiplet strategy ready by then.

Intel is in a bit more hot water because the money is already drying up and they are barely competitive on the desktop platform; they struggle in laptops/notebooks and they suffer in data centers. They still compete on performance, but their margins have shrunk so much that they need to cancel products to stay afloat.

I think it's easy to miss that with all the hype and the clickbait titles we've had. But also, it's normal for end users to look for what benefits them the most.
 
Joined
Nov 26, 2021
Messages
1,416 (1.47/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
The "advanced chiplets design" would only be "disruptive" vs Monolithic if AMD either
1. Passed on the cost savings to consumers (prices are same for xtx, and 7900xt is worse cu count % wise to top sku than 6800xt was to 6900xt)
2. Actually used the chiplets to have more cu, instead of just putting memory controllers on there. Thereby maybe actually competing in RTRT performance or vs the 4090.

AMD is neither taking advantage of their cost savings and going the value route, nor competing against their competition's best. They're releasing a product months later, with zero performance advantages vs the competition? Potential 4080ti will still be faster and they've just given Nvidia free rein (once again) to charge whatever they want for flagships since there's zero competition.

7950xtx could have two 7900xtx dies, but I doubt it
Given the massive off-chip bandwidth, a dual GCD GPU would have been very viable. I'm puzzled by their reluctance to go down that route.
 

ARF

Joined
Jan 28, 2020
Messages
4,343 (2.66/day)
Location
Ex-usa | slava the trolls
Potential 4080ti will still be faster

Potential RTX 4080 Ti can be a further cut down AD102 which would render RTX 4090 needless.

The "advanced chiplets design" would only be "disruptive" vs Monolithic if AMD either
1. Passed on the cost savings to consumers (prices are same for xtx, and 7900xt is worse cu count % wise to top sku than 6800xt was to 6900xt)
2. Actually used the chiplets to have more cu, instead of just putting memory controllers on there. Thereby maybe actually competing in RTRT performance or vs the 4090.

Chiplets are not suitable for GPUs because they will introduce heavy latencies and all kinds of performance issues.
Actually we had "chiplets" called Radeon HD 3870 X2, Radeon HD 4870 X2, Radeon HD 6990, Radeon HD 7990, Radeon R9 295X2, Radeon Pro Duo. And their support was abandoned.
 
Joined
Nov 26, 2021
Messages
1,416 (1.47/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Potential RTX 4080 Ti can be a further cut down AD102 which would render RTX 4090 needless.



Chiplets are not suitable for GPUs because they will introduce heavy latencies and all kinds of performance issues.
Actually we had "chiplets" called Radeon HD 3870 X2, Radeon HD 4870 X2, Radeon HD 6990, Radeon HD 7990, Radeon R9 295X2, Radeon Pro Duo. And their support was abandoned.
Those had insignificant inter-die bandwidth compared to the numbers AMD has shown; about 3 orders of magnitude more than the 295X2. The interconnect between the MCD and the GCD is a game changer; I don't understand why they didn't go for the jugular with a dual die GCD.
 
Joined
Oct 12, 2005
Messages
685 (0.10/day)
Given the massive off-chip bandwidth, a dual GCD GPU would have been very viable. I'm puzzled by their reluctance to go down that route.

Potential RTX 4080 Ti can be a further cut down AD102 which would render RTX 4090 needless.



Chiplets are not suitable for GPUs because they will introduce heavy latencies and all kinds of performance issues.
Actually we had "chiplets" called Radeon HD 3870 X2, Radeon HD 4870 X2, Radeon HD 6990, Radeon HD 7990, Radeon R9 295X2, Radeon Pro Duo. And their support was abandoned.

The main problem is getting the system to see one GPU and have one memory domain.

ARF, all your examples are actually just CrossFire on a single board. They are not even chiplets.

But the main problem is how you make it so the system only sees one GPU. On a CPU, you can add as many cores as you want and the OS will be able to use all of them. With the I/O die, you can have a single memory domain, but even if you don't, operating systems have supported Non-Uniform Memory Access (NUMA) for decades.

For it to be possible on a GPU, you would need one common front end with compute dies attached. I am not sure whether the MCDs would need to be connected to the front end or to the compute dies, but I am under the impression it would be to the front end.

The thing is, you need edge space to communicate with your other dies: MCDs, display output, PCIe. There is no more edge space available. The biggest space user is the compute units, not the front end. I think this is why they did it that way.

In the future, we could see a large front end with a significant amount of cache, with multiple independent compute dies connected to it. Cache does not scale well with newer nodes, so the front end could be made on an older node (if that is worth it; some areas might need to be faster).

Anyway, design is all about tradeoffs. Multiple chips could be done, but the tradeoffs are probably not worth it yet, so they went with this imperfect design that still does not allow multiple GCDs.
 
Joined
Feb 20, 2019
Messages
7,651 (3.88/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Actually we had "chiplets" called Radeon HD 3870 X2, Radeon HD 4870 X2, Radeon HD 6990, Radeon HD 7990, Radeon R9 295X2, Radeon Pro Duo. And their support was abandoned.
Those weren't chiplets. That was two completely independent graphics cards connected together on a single board by a PCIe switch.
It was called crossfire-on-a-stick because it was just that. You could actually expose each GPU separately in some OpenCL workloads and have them run independently.
 

ARF

Joined
Jan 28, 2020
Messages
4,343 (2.66/day)
Location
Ex-usa | slava the trolls
Those had insignificant inter-die bandwidth compared to the numbers AMD has shown; about 3 orders of magnitude more than the 295X2. The interconnect between the MCD and the GCD is a game changer; I don't understand why they didn't go for the jugular with a dual die GCD.

Ok, I will tell you straight - because it won't work. AMD didn't use it because it doesn't make sense to do so.
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,677 (1.96/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus Block, HWLABS Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 2x A4x10, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White, Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19044.4046
Benchmark Scores Legendary
It can be disruptive without being either of those cases.

If AMD is able to sell the 7900XTX in volume at $900 with a huge margin, they will become even more profitable: they will have more R&D budget for future generations and more budget to buy new wafers, while also needing fewer of them. They will be able to produce more cards for less, and they will be able to wage price wars and win long term.
The XTX is $1,000.
How well are AMD's "huge margins" going with the release of Zen 4? Remind me of the economics of a product with a 50% margin that sells 1,000 units, compared to a product with a 30% margin that sells 10,000 units.
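As a quick back-of-the-envelope illustration of that rhetorical question, assuming a purely hypothetical $1,000 selling price for both products:

```python
# Hypothetical example: margin x volume, assuming the same $1,000 price for both.
def gross_profit(price_usd, margin, units):
    return price_usd * margin * units

print(gross_profit(1000, 0.50, 1_000))   # 50% margin, 1,000 units  -> $500,000
print(gross_profit(1000, 0.30, 10_000))  # 30% margin, 10,000 units -> $3,000,000
```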
Right now, Nvidia must keep their mindshare as intact as possible, and must make sure to win the flagship battle no matter the cost to preserve it. Their current chip is huge and that affects the defect rate. With TSMC's currently advertised defect rate for 5/4 nm, they probably have around 56% yield, or 46 good dies out of 82 candidates. Depending on where the defects are, the yield could be a bit higher if they can disable a portion of the chip, but don't expect anything huge.
Defective 4090s can become 4080 Tis. Their defect rate also doesn't matter that much when they can charge whatever they like, due to no competition.
So right now, it looks like, ray tracing aside, they are able to produce close to as many 4080-class cards with one of their wafers on a much cheaper node. They can also reuse a portion of that wafer and allocate it to other GPU models easily.

This is what is disruptive. It's disruptive for AMD, not for the end users (sadly), but over time those advantages will add up and Nvidia will suffer: their margins will shrink, etc. But Nvidia is huge and rich. This will take many generations, and Nvidia will probably have time to get their own chiplet strategy ready by then.

Intel is in a bit more hot water because the money is already drying up and they are barely competitive on the desktop platform; they struggle in laptops/notebooks and they suffer in data centers. They still compete on performance, but their margins have shrunk so much that they need to cancel products to stay afloat.

I think it's easy to miss that with all the hype and the clickbait titles we've had. But also, it's normal for end users to look for what benefits them the most.
Except for RTRT and a driver and software development team that is competent and huge (compared to AMD).

I guess we'll see what the performance is like when the reviews come out.

AMD doesn't seem to understand their position in the GPU market - they're the underdog, with worse development resources, technology (primarily software, but also dedicated hardware units) that is several years behind the competition, significantly lower market share, etc. It's not a reasonable prediction to say "NVIDIA will suffer"; it's AMD that needs to actually compete. They needed to go balls to the wall and release a multi-die card that blew the 4090 out of the water, while being the same price or cheaper, to change this "mindshare"; instead, they've released another generation of cards that, on paper, are "almost as good", while being a bit cheaper and without the NVIDIA software stack advantage.
 
Joined
Oct 26, 2022
Messages
57 (0.09/day)
AMD doesn't seem to understand their position in the GPU market - they're the underdog, with worse development resources, technology (primarily software, but also dedicated hardware units) that is several years behind the competition, significantly lower market share, etc. It's not a reasonable prediction to say "NVIDIA will suffer"; it's AMD that needs to actually compete. They needed to go balls to the wall and release a multi-die card that blew the 4090 out of the water, while being the same price or cheaper, to change this "mindshare"; instead, they've released another generation of cards that, on paper, are "almost as good", while being a bit cheaper and without the NVIDIA software stack advantage.
This is exactly what Coreteks stated in his latest video. AdoredTV presented a different narrative however.
 

ARF

Joined
Jan 28, 2020
Messages
4,343 (2.66/day)
Location
Ex-usa | slava the trolls
AMD doesn't seem to understand their position in the GPU market - they're the underdog, with worse development resources, technology (primarily software, but also dedicated hardware units) that is several years behind the competition, significantly lower market share, etc. It's not a reasonable prediction to say "NVIDIA will suffer"; it's AMD that needs to actually compete. They needed to go balls to the wall and release a multi-die card that blew the 4090 out of the water, while being the same price or cheaper, to change this "mindshare"; instead, they've released another generation of cards that, on paper, are "almost as good", while being a bit cheaper and without the NVIDIA software stack advantage.

Yes, speaking of how bad the AMD situation is: they have zero design wins with 4K notebooks. Everything there is Intel/Nvidia only.
 
Joined
Dec 26, 2006
Messages
3,637 (0.57/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
Meh. As long as the bottom cards have a good media engine…….unlike the 6400…….
 
Joined
Nov 26, 2021
Messages
1,416 (1.47/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Meh. As long as the bottom cards have a good media engine…….unlike the 6400…….
I'm not sure that we'll get a replacement for the 6400 any time soon; only Intel seems to be interested in the lower end GPUs.
 
Joined
Oct 12, 2005
Messages
685 (0.10/day)
Defective 4090s can become 4080 Tis. Their defect rate also doesn't matter that much when they can charge whatever they like, due to no competition.
Yes and no.

Not all defects are equal. If you have a slight defect in one of the compute units, you can just disable it; same thing with one of the memory controllers. But if it's in an area with no redundancy, you cannot reuse the chip. And that's assuming there is only one defect per chip.

Cheaper SKUs can also just be chips that cannot clock as high as others. Nothing says, for example, that a 6800 has a defect; they might simply have found that it didn't clock high enough to be a 6800 XT or 6900 XT/6950 XT. It's also possible that the chip had one compute unit that really didn't clock very high, so they had to disable it to reach the target clocks (a good chip without defects, but not good enough).

That affects everyone, but bigger chips are more at risk of defects. In the end, the numbers can be taken at face value, because the larger the chip, the higher the chance that something is wrong and that it isn't recoverable by deactivating a portion of it.
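As a toy extension of the Poisson sketch from earlier in the thread, this harvesting logic can be modelled by assuming defects land uniformly and only those in the non-redundant "critical" area (front end, display, PCIe, etc.) actually kill the die; the 80/20 redundant/critical split and the defect density are made-up illustrative values.

```python
# Toy salvage/binning model: defects in redundant blocks (CUs, memory
# channels) get fused off for a lower SKU; only defects in the critical
# area scrap the die. All values assumed, for illustration only.
import math

DEFECTS_PER_CM2 = 0.09

def bin_yields(die_area_mm2, redundant_fraction=0.80):
    area_cm2 = die_area_mm2 / 100
    perfect = math.exp(-DEFECTS_PER_CM2 * area_cm2)                              # full-spec SKU
    sellable = math.exp(-DEFECTS_PER_CM2 * area_cm2 * (1 - redundant_fraction))  # any sellable bin
    return perfect, sellable

perfect, sellable = bin_yields(608)  # an AD102-sized die, for illustration
print(f"perfect dies: {perfect:.0%}, sellable after harvesting: {sellable:.0%}")
```

Harvesting raises the sellable fraction a lot, but the bigger the die, the more of those sellable chips end up as cut-down bins rather than full-spec parts.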
 
  • Like
Reactions: HTC

Emu

Joined
Jan 5, 2018
Messages
28 (0.01/day)
The "advanced chiplets design" would only be "disruptive" vs Monolithic if AMD either
1. Passed on the cost savings to consumers (prices are same for xtx, and 7900xt is worse cu count % wise to top sku than 6800xt was to 6900xt)
2. Actually used the chiplets to have more cu, instead of just putting memory controllers on there. Thereby maybe actually competing in RTRT performance or vs the 4090.

AMD is neither taking advantage of their cost savings and going the value route, nor competing against their competition's best. They're releasing a product months later, with zero performance advantages vs the competition? Potential 4080ti will still be faster and they've just given Nvidia free rein (once again) to charge whatever they want for flagships since there's zero competition.

7950xtx could have two 7900xtx dies, but I doubt it
1. You don't think the cost savings have been passed on to consumers? Don't forget that TSMC has increased silicon prices by at least 15% in the past year. This means that AMD releasing the 7900XTX for the same price the 6900XT released at is the cost savings being passed on to consumers. The "advanced chiplet design" lets AMD get more viable GPU dies per 300 mm wafer, which means that availability of the 7900XTX should be higher than that of the 4090, which has a die that is twice the size.

2. Perhaps the Infinity Fabric isn't quite there yet to let them have the CUs on multiple dies? Or perhaps the front end requires 5 nm to perform effectively, which means that AMD would have to do two different designs on 5 nm wafers, which would complicate their production schedule and increase costs?

In other words, AMD is taking advantage of their cost savings and releasing their top tier product at $600 less than Nvidia's top tier product and at $200 less than what the apparent competitive GPU is launching at. Things get even worse for Nvidia in markets outside of the USA. For example, here in Australia, if AMD does not price gouge us like Nvidia is doing then the 7900XTX will be roughly half the price of a 4090 and that price saving will be enough to build the rest of your PC (e.g. 5800X3D, 16GB/32GB DDR4, b550 motherboard, 850W PSU, $200 case).
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,677 (1.96/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus Block, HWLABS Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 2x A4x10, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White, Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19044.4046
Benchmark Scores Legendary
1. You don't think the cost savings have been passed on to consumers? Don't forget that TSMC has increased silicon prices by at least 15% in the past year. This means that AMD releasing the 7900XTX for the same price the 6900XT released at is the cost savings being passed on to consumers. The "advanced chiplet design" lets AMD get more viable GPU dies per 300 mm wafer, which means that availability of the 7900XTX should be higher than that of the 4090, which has a die that is twice the size.

2. Perhaps the Infinity Fabric isn't quite there yet to let them have the CUs on multiple dies? Or perhaps the front end requires 5 nm to perform effectively, which means that AMD would have to do two different designs on 5 nm wafers, which would complicate their production schedule and increase costs?

In other words, AMD is taking advantage of their cost savings and releasing their top tier product at $600 less than Nvidia's top tier product and at $200 less than what the apparent competitive GPU is launching at. Things get even worse for Nvidia in markets outside of the USA. For example, here in Australia, if AMD does not price gouge us like Nvidia is doing then the 7900XTX will be roughly half the price of a 4090 and that price saving will be enough to build the rest of your PC (e.g. 5800X3D, 16GB/32GB DDR4, b550 motherboard, 850W PSU, $200 case).
Availability is irrelevant when 80% of people still choose their competition.

Their top-tier product competes in raster with Nvidia's third/fourth product down the stack (4090 Ti, 4090, 4080 Ti, 4080); therefore, the fact that it's cheaper is borderline irrelevant. That's without getting started on the non-raster advantages NVIDIA has.

In other words, the situation hasn't changed since the 6950xt and the 3090ti/6900 and 3090.
 
Joined
Dec 12, 2012
Messages
728 (0.17/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
I'm not sure that we'll get a replacement for the 6400 any time soon; only Intel seems to be interested in the lower end GPUs.

I definitely hope they will care again, even though it seems unlikely.

Next year I am planning on building a separate strictly gaming PC and I want to convert my current build to an HTPC that will handle everything else, including recording with a capture card. I would definitely want a cheap graphics card with AV1 encoding. If NVIDIA or AMD do not offer such a product, I might go with Intel (A380 or whatever).
 