
AMD Retreating from Enthusiast Graphics Segment with RDNA4?

Joined
Apr 14, 2018
Messages
649 (0.27/day)
I did read what you wrote. You are saying the upside of MCM is better performance and efficiency, without mentioning process and architecture improvements, which matter more. Almost everything positive you attribute to MCM is more likely due to being on 5 nm and being a new architecture. It's convenient that you used 4K data to show the performance difference between the 6900 XT and 7900 XTX, where the bus width and memory speed differences are more likely the bigger factors than MCM.

Also, the part in bold is not true; they have increased MSRP across every tier.

IMG_5061.jpeg
IMG_5062.jpeg


Come again? Exact same MSRP; images taken from TPU reference reviews for their respective releases.

Claims I'm straw-manning and then starts putting words in other people's mouths.
 
Joined
Jun 11, 2020
Messages
573 (0.35/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 Platinum
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15
View attachment 308399 View attachment 308400

Come again? Exact same MSRP; images taken from TPU reference reviews for their respective releases.

Claims I'm straw-manning and then starts putting words in other people's mouths.

OK, I am wrong about the MSRP of the top tier increasing. I thought it was $800 or $900. YOU GOT ME.

The 7900 XT was an increase over the 6800 XT (both are the second-fastest card of their generation).

Look, I'm not trying to say the 7900 XTX is a bad card. Performance is great. As you point out, you get more for the same money they used to charge for the 6900 XT. AMD still gives consumers more value than Nvidia when you want gaming performance. MCMs are the future, AMD has positioned themselves as the clear leader in MCM tech for GPUs, and this experience will no doubt help down the line. I'm sure we are all hoping for Zen-to-Zen 2-to-Zen 3 types of performance and efficiency gains for RDNA. I do agree with a lot of what you said. I just happen to think there have been bumps in the road with this transition, and I don't think moving to MCM was all beneficial, especially since they could not secure node parity with Nvidia, which magnifies all of its shortcomings.
 
Last edited:
Joined
Aug 10, 2023
Messages
341 (0.73/day)
Viper is saying MCM makes more sense when you actually have more than one graphics chiplet. How is this not sinking in? If you don't believe that, you're drowning in AMD Kool-Aid, brother.
The funny thing about that is that AMD 100% tried to do multi-GPU chiplets for gaming (they already achieved this for compute in the data center with Instinct some time ago), but because gaming is very latency-sensitive they couldn't pull it off - yet. Eventually it is probably doable, and I don't know who is going to be able to do it first. Nvidia has a far higher research budget than RTG. Nvidia also has a huge advantage over AMD in being, well, a GPU company first and foremost.

I thought it was $800 or $900.
AMD also quietly dropped the price of the 7900 XT to $800 later on. It's pretty competitive now, and no longer overpriced relative to the XTX.
 
Last edited:
Joined
Apr 14, 2018
Messages
649 (0.27/day)
OK, I am wrong about the MSRP of the top tier increasing. I thought it was $800 or $900. YOU GOT ME.

The 7900 XT was an increase over the 6800 XT (both are the second-fastest card of their generation).

Look, I'm not trying to say the 7900 XTX is a bad card. Performance is great. As you point out, you get more for the same money they used to charge for the 6900 XT. AMD still gives consumers more value than Nvidia when you want gaming performance. MCMs are the future, AMD has positioned themselves as the clear leader in MCM tech for GPUs, and this experience will no doubt help down the line. I'm sure we are all hoping for Zen-to-Zen 2-to-Zen 3 types of performance and efficiency gains for RDNA. I do agree with a lot of what you said. I just happen to think there have been bumps in the road with this transition, and I don't think moving to MCM was all beneficial, especially since they could not secure node parity with Nvidia, which magnifies all of its shortcomings.

This has nothing to do with any GPU being bad. How many times do you want to move the goalposts? Viper's argument, and to a lesser degree yours, was that AMD was wrong to design Navi 31 as they did, that it provided no performance or efficiency improvements (regardless of which architectural changes those improvements come from), and, with no evidence at all, that a monolithic design was the cut-and-dried best choice.

Are you an engineer in the computing field? I know I'm not. It'd be great to have the chance to see how a monolithic top-end RDNA3 GPU would have panned out in comparison. Again, I'm not going to be naive and say AMD, Nvidia, or any other tech giant was flat out wrong to design something as they did, or to pick a given node, with absolutely zero inside information on why, or on how those choices affect design and costs.

I don't care which company we're talking about; to say you otherwise know better, or to state as fact that something was the wrong business choice, is foolish. If you have the technological knowledge and proprietary information to prove otherwise, be my guest.
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
Nvidia learns from mistakes, like not jumping on advanced nodes for Turing and Ampere. They've since corrected that mistake and likely won't make it again.
I also want to address this, as I overlooked it earlier:

- Ampere was produced at Samsung's fab because of cost savings, and they knew they would win anyway (which they did, even if not at all resolutions).

- Turing was produced on a very mature node because TU102 was a gigantic chip by the standards of gaming chips of any era. The die measures 754 mm². It's a monster and therefore needed a very mature node to be produced without losing too many chips, since they only sold a slightly cut-down variant (2080 Ti) and the full thing (TITAN RTX).

- Historically, NV was a bit more cautious and conservative with new nodes than ATI/AMD. This is generally a wise approach, though sometimes it paid off for ATI/AMD to move faster.

edit, to make the summary complete:

- Ada was again produced at TSMC, on a customized 5 nm node, to be more competitive and because Samsung's fab was simply not up to par. This is basically Nvidia getting "serious" after Ampere, which was produced on a weaker node and was barely faster than RDNA 2 aside from ray tracing.
 
Last edited:
Joined
Jun 11, 2020
Messages
573 (0.35/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 Platinum
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15
This has nothing to do with any GPU being bad. How many times do you want to move the goalposts? Viper's argument, and to a lesser degree yours, was that AMD was wrong to design Navi 31 as they did, that it provided no performance or efficiency improvements (regardless of which architectural changes those improvements come from), and, with no evidence at all, that a monolithic design was the cut-and-dried best choice.

Are you an engineer in the computing field? I know I'm not. It'd be great to have the chance to see how a monolithic top-end RDNA3 GPU would have panned out in comparison. Again, I'm not going to be naive and say AMD, Nvidia, or any other tech giant was flat out wrong to design something as they did, or to pick a given node, with absolutely zero inside information on why, or on how those choices affect design and costs.

I don't care which company we're talking about; to say you otherwise know better, or to state as fact that something was the wrong business choice, is foolish. If you have the technological knowledge and proprietary information to prove otherwise, be my guest.

No, that's not my point at all; it's just that it hasn't been perfect or gone exactly how AMD wanted. The 7900 XTX did not perform to their projections.

Where is your evidence that MCM is the reason for the performance and efficiency advantages over the monolithic 6900 XT? All you have are performance numbers that show that, yes, the newer, more advanced product is more efficient and performant than the previous model. Nvidia did the same thing when you compare the 3090 and the 4090. How did they do that without the move to MCM?! Looks like you are also guilty of being naive and saying something is happening with absolutely zero inside information on why or how. But if you have the technical knowledge to prove otherwise, be my guest.

I've been trying to find some common ground with you, but you just want to argue, so OK. I understand you want to defend AMD, but it's not unfair to say their implementation of MCM still needs work.
 
Last edited:
Joined
Aug 15, 2012
Messages
176 (0.04/day)
System Name For now
Processor Intel i7-4790K
Motherboard ASRock Extreme9 Z97
Cooling Corsair H100i
Memory Kingston HyperX
Video Card(s) Asus GTX 980 Strix x 2
Display(s) Overlord
Case Corsair 900D
Power Supply Corsair AX1200i
To me, when the 7900 XTX released with lackluster performance, I felt it was the pain of going from monolithic to chiplet. Just like the teething issues when Ryzen launched, this was always going to happen.
But I didn't think they would go this far, step back, and only produce lower-end products like Polaris, making us wait two-plus years before getting something like the 6900 XT again.

Sucks; this is just going to embolden Nvidia to straight up come out and say the RTX 5090 is $2,000, the 5080 Ti $1,800, etc., kind of like what happened during Turing... ugh, history repeating itself in such a short time frame sucks.
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
Sucks; this is just going to embolden Nvidia to straight up come out and say the RTX 5090 is $2,000, the 5080 Ti $1,800, etc., kind of like what happened during Turing... ugh, history repeating itself in such a short time frame sucks.
Nvidia has already said that "Ada Next", the current name for their next-gen architecture, is coming later - a year later than usual (2025). This means RDNA 4 may not even compete against the next-gen Nvidia architecture at first, and when it does, if the rumours are true, only at the performance tier (i.e. up to the $700 level). Is this a problem? Yes; less competition means higher pricing. Though I suspect the 4080 and 4090 are already more or less the maximum pricing Nvidia expects - after all, AMD's competition is too weak to endanger either of their top-end products. Worst case, they raise prices by another $100, but I don't expect more; the market isn't in a good place anyway, and raising prices further does not help with that.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Oh well, MLID confirms the rumour, so I just ordered a 7900 XTX Hellhound. Not going to wait till 2025 to upgrade...
 
Joined
Aug 25, 2021
Messages
1,170 (0.99/day)
I extracted his results from two reviews because he "omitted" to compare them; they were too unfavourable to AMD. How do you compare, in Cyberpunk, the 34 fps obtained by the RX 7600 (an iGP-level disaster) with the 111 fps obtained by the 4060?
The video cards were tested only at 1080p.
Recent extensive testing of the 7600 against the 4060 shows that the Radeon card is:
In 1080p:
1. 7% slower overall (mixed RT and non-RT games)
2. 2% slower when RT titles are removed - raster is literally the same on both cards
3. 20% slower in RT only - the caveat here is that the 4060 is choked with RT on; the hardware is slow (even the title in your video says 4050...)
In 1440p:
1. 9% slower overall (mixed RT and non-RT games)
2. 2% slower when RT titles are removed - raster is literally the same on both cards
3. 28% slower in RT only - RT is useless here, unless helped by DLSS and FG in a few titles; then latency becomes an issue...

There are already retailers in Europe selling the 7600 below MSRP, for example for £230 in the UK plus a free copy of Starfield, which makes the Radeon 7600 more attractive with the game bundle, saving buyers over 100 bucks altogether.
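As a rough sketch of how those percentages translate into value (launch MSRPs of $299 and $269; treat these as illustrative assumptions, not a price survey):

Code:
# Rough 1080p value comparison based on the figures above (illustrative only).
rtx_4060_perf  = 100.0                        # normalise the 4060 to 100
rx_7600_perf   = rtx_4060_perf * (1 - 0.07)   # 7% slower overall (mixed RT / non-RT)
rx_7600_raster = rtx_4060_perf * (1 - 0.02)   # 2% slower with RT titles removed

rtx_4060_price, rx_7600_price = 299, 269      # launch MSRPs in USD

print(f"4060: {rtx_4060_perf / rtx_4060_price:.3f} relative FPS per dollar")
print(f"7600: {rx_7600_perf / rx_7600_price:.3f} relative FPS per dollar")
# Any discount below MSRP (like the £230 listing above) tilts this further
# towards the 7600, before even counting the bundled game.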
 
Joined
Jun 2, 2017
Messages
9,127 (3.34/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
OK, I am wrong about the MSRP of the top tier increasing. I thought it was $800 or $900. YOU GOT ME.

The 7900 XT was an increase over the 6800 XT (both are the second-fastest card of their generation).

Look, I'm not trying to say the 7900 XTX is a bad card. Performance is great. As you point out, you get more for the same money they used to charge for the 6900 XT. AMD still gives consumers more value than Nvidia when you want gaming performance. MCMs are the future, AMD has positioned themselves as the clear leader in MCM tech for GPUs, and this experience will no doubt help down the line. I'm sure we are all hoping for Zen-to-Zen 2-to-Zen 3 types of performance and efficiency gains for RDNA. I do agree with a lot of what you said. I just happen to think there have been bumps in the road with this transition, and I don't think moving to MCM was all beneficial, especially since they could not secure node parity with Nvidia, which magnifies all of its shortcomings.
The 7900 XT is in no way comparable to the 6800 XT. The memory bus alone is 64 bits wider. Even though there was never a 6850 XT, the x50 refreshes showed a 200-300 MHz bump in GPU and memory clocks. That is also true for the 7000 series, and the 2898 MHz clock my XT achieves is only about 200 MHz away from the highest clocks seen on a 7900 XTX. I have a strange way of looking at chiplets: when I opened my 6500 XT, I was shocked at how small the chip was. To me, the 7900 series is like six 6500 XTs connected to an I/O die. I know it's simplistic, but when you see this card play anything at 4K you would understand my opinion. I am going back into Exoprimal now.

Nvidia is truly ahead because they secured TSMC's next node before AMD. It doesn't matter, though; the narrative gets it wrong, for me. If AMD can release (and has released) a 16-core chip for laptops, chiplets will become the future of computing. For me, the 7940 has a 6500 XT-class GPU inside it. As a result, I think AMD is running a bottom-up plan with GPUs. Right now there is no card from Nvidia that will give you the performance of the 6800 XT for around $500, as the 4060 Ti vs 6800 XT comparison shows. I have now seen that the new 7700 XT and 7800 XT will have 4 chiplets instead of 6. When the I/O die moves to 5 nm, AMD will be even faster.

The truth is, though, that people wanted AMD to challenge Nvidia, and the release of the 7900 XTX was meh at best - but it was not that simple. There is no OC room to speak of on the 4090 because the card came turned up to 11. Meanwhile, the XT and XTX cards did not get that treatment until the AIB models arrived, but even then users get a 400-500 MHz bump in boost with just one click. I am not saying that a 4090 is not better than a 7900-series card as a graphics processing unit, but my focus is on gaming; 4K is for the sweet visuals, and you need a GPU with the grunt to make it enjoyable. I fully expect that there will always be an AMD variant that satisfies the high end, but they are going from the bottom up with the next release. I feel the 7600 may be an actual red herring so that the price of the 6700 XT could be maintained. I expect that by the time games are too tough for the 7900 series, I may look into FSR.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506

This crazy news has been refuted.

Even a rebranded 7950 XTX would have made more sense anyway.

Also, AMD is working on RDNA 5 now.
Also, AMD is working on the PS5 Pro, an Xbox refresh, node ports for older consoles in general, other custom SoCs like the Steam Deck and Tesla infotainment, a vast amount of FPGA work, and next-generation server parts.

It's probably a resource issue and a desire to up their AI game.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W

This crazy news has been refuted.

Even a rebranded 7950 XTX would have made more sense anyway.

Also, AMD is working on RDNA 5 now.
Also, AMD is working on the PS5 Pro, an Xbox refresh, node ports for older consoles in general, other custom SoCs like the Steam Deck and Tesla infotainment, a vast amount of FPGA work, and next-generation server parts.

It's probably a resource issue and a desire to up their AI game.
Not really; it clearly says that RDNA 4 top-end development is going slowly and will most likely be skipped.
 
Joined
Aug 25, 2021
Messages
1,170 (0.99/day)
Even agreeing that MCM is at all better for AMD (performance-wise), and that it's not just about saving 5 nm wafers given that half of the chip is basically produced on 6 nm, I would say they still didn't use the opportunity to simply produce a bigger chip - especially considering that with MCM this is easier than with a monolithic design.
It was a decision to make, now or later. The entire industry is moving towards chiplets, as future EUV machines from ASML will use high-NA 0.55 EUV scanning, where chip size could max out at ~400 mm² on 2 nm-class processes in a few years. Don't forget that discrete GPUs have been a much smaller part of AMD's business than of Nvidia's, so their silicon decisions and strategy will always differ. In the long run, bigger chips will not necessarily be better. They may have decided to start adopting chiplets in client GPUs sooner so that they could gain experience and perfect the approach across generations, just like they did with Zen. This means the first generation or two won't be perfect, but more fruit may come later, just as it did with Zen 3. This is what they hope to achieve, I would imagine.
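As a back-of-the-envelope check on that ~400 mm² figure (a rough sketch using the publicly stated exposure field sizes; the exact practical limits are up to the foundries):

Code:
# Today's EUV scanners expose a full reticle field of 26 mm x 33 mm.
full_field = 26 * 33           # 858 mm^2 - the classic reticle limit
# High-NA (0.55) EUV halves the field in one direction: 26 mm x 16.5 mm.
high_na_field = 26 * 16.5      # 429 mm^2
print(full_field, high_na_field)   # 858 429 -> roughly the ~400 mm^2 ceiling mentioned above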
So, I don't see an upside in MCM aside from, again, saving wafers, and that is only needed because AMD shares wafer supply with their CPUs - something that would not be the case if Radeon were still ATI.
Everyone will be using chiplets, even if ATI were still around today, so such speculation does not make sense when you look at where ASML is taking their lithography machines. All semiconductor companies will need to conform to the new chip design norms and size constraints coming soon if they want to stay on cutting-edge nodes.
Nvidia will go for CHIPLETS as soon as it makes SENSE for them, as in, better performance or efficiency. Both things which AMD did not achieve this time.
Nvidia will keep milking monolithic designs on client GPUs until they are FORCED to move to chiplets, on the new high-NA EUV machines that TSMC has already ordered from ASML. In the data center, everybody has moved to chiplets. In the client segment, once 600 mm² dies are no longer possible, Nvidia will have to move to chiplets. So, as the video below shows, it's not if but when. This will happen most probably after Blackwell, as the first high-NA EUV machines are to be delivered to TSMC, Intel, and Samsung around 2025.

The question for you as a customer is whether you are prepared to pay $1,500 for a 5080 on another monolithic die. Currently, GPU prices are moving in a direction that will price more people out of the GPU market. Imagine the 5080 is ~50% faster than the 4080, which would make it ~20% faster than the 4090 at 4K. Nvidia can easily try to sell you this performance uplift as a "good deal" and price it at $1,500 - less expensive than the 4090, but still faster. Suddenly you have a 5080 that is twice as fast as the 3080 but more than twice as expensive. Quite a sick situation compared to the CPU market, where doubling performance over a few generations does not make CPUs twice as expensive.
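To put that maths in one place (a rough sketch; the 4090-vs-4080 gap and the hypothetical 5080 price and uplift are assumptions from the paragraph above, not measurements):

Code:
# Hypothetical generational maths from the paragraph above (all assumptions).
perf_4080 = 1.00
perf_4090 = 1.25                  # 4090 roughly 25% faster than the 4080 at 4K
perf_5080 = perf_4080 * 1.50      # imagined 5080: ~50% faster than a 4080
print(perf_5080 / perf_4090)      # ~1.2 -> "about 20% faster than the 4090"

price_3080, price_5080 = 699, 1500   # 3080 launch MSRP vs the imagined 5080 price
perf_3080 = perf_5080 / 2            # the post assumes the 5080 is twice a 3080
print(price_5080 / price_3080)       # ~2.15 -> more than twice the price for twice the speed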
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
Everyone will be using chiplets, even if ATI were still around today, so such speculation does not make sense when you look at where ASML is taking their lithography machines. All semiconductor companies will need to conform to the new chip design norms and size constraints coming soon if they want to stay on cutting-edge nodes.
Then you didn't understand my point at all, same as the other guy. MCM makes sense when it's done right - I've already explained that multiple times now. It does not make much sense if you only do it to save wafers. And you don't know what a standalone GPU company would've done - with more wafers at their disposal, since they would NOT be sharing them with CPUs like AMD does. So your answer makes no sense, not in the context of my speculation about a hypothetical ATI.
Nvidia will keep milking monolithic designs on client GPUs until they are FORCED to move to chiplets, on the new high-NA EUV machines that TSMC has already ordered from ASML. In the data center, everybody has moved to chiplets. In the client segment, once 600 mm² dies are no longer possible, Nvidia will have to move to chiplets. So, as the video below shows, it's not if but when. This will happen most probably after Blackwell, as the first high-NA EUV machines are to be delivered to TSMC, Intel, and Samsung around 2025.
600 mm² and even bigger dies are, and always will be, possible to achieve, so this is clearly wrong. It's also wrong to assume Nvidia will be so conservative (and technologically behind) as to move to chiplets only once they are "forced" to - no, they will do it when it makes sense for them, dollar-wise and performance-wise.
 
Last edited:
Joined
Dec 26, 2006
Messages
3,828 (0.59/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
I would rather have Intel and Nvidia duke it out. Sell the graphics division, AMD.
I wasn't aware Intel was in the high end or had a 4090 competitor???
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
The question for you as a customer is whether you are prepared to pay $1,500 for a 5080 on another monolithic die.

No.
But considering Nvidia's mindset, they are able to charge $1,500 for a 100 mm² chiplet as well.

Chiplets only improve yields; they increase cost and complexity and reduce overall performance.

Currently, GPU prices are moving in a direction that will price more people out of the GPU market.

Correct. Hence GPU shipments are at an all-time low.
 
Joined
Nov 5, 2012
Messages
41 (0.01/day)
Location
France
System Name Game computer
Processor AMD RyZen 7 5800X3D 4.35GHZ
Motherboard ASRock X470 Taichi
Cooling AMD Fan Wraight
Memory 32768 Mo DDR4-3200 G-Skill CL16
Video Card(s) Intel ARC A770 16GB
Storage SSD Samsung 970 EVO M2 250 Go, Samsung 970 EVO M2 500 Go, Samsung 850 EVO SATA 500 Go, Toshiba 4 To
Display(s) AOC 24' 1440p 144 Hz DisplayPort + ACER KG251Q 24' 1080p 144 Hz DisplayPort
Case NZXT Phantom Black
Audio Device(s) Corsair Gaming VOID Pro RGB Wireless Special Edition
Power Supply BeQuiet Straight Power 11 1000W
Mouse Roccat Kone XTD
Keyboard BTC USB
Software Windows 10 Pro x64
Besides the MCM, I noticed that RDNA3 has a feature called 'Dual-Issue'.

It's supposed to double the shader throughput, but I feel like it's not exploited at all. Isn't that what prevents the 7900 XTX from doing as well as a 4090?
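For what it's worth, the doubling only really shows up in the theoretical peak numbers. A rough sketch, assuming the XTX's public specs (6,144 shaders at roughly 2.5 GHz boost):

Code:
# Theoretical FP32 throughput of the 7900 XTX, with and without dual-issue.
shaders       = 6144     # stream processors on Navi 31 (7900 XTX)
boost_ghz     = 2.5      # rough boost clock
flops_per_fma = 2        # one fused multiply-add counts as two FLOPs

single_issue_tflops = shaders * flops_per_fma * boost_ghz / 1000
dual_issue_tflops   = single_issue_tflops * 2   # if every wave pairs a second FP32 op
print(f"{single_issue_tflops:.1f} TFLOPS single-issue, {dual_issue_tflops:.1f} TFLOPS dual-issue peak")
# ~30.7 vs ~61.4 TFLOPS: the headline figure assumes the compiler finds an
# independent second FP32 op to co-issue every cycle, which real game shaders rarely manage.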
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
Besides the MCM, I noticed that RDNA3 has a feature called 'Dual-Issue'.

It's supposed to double the shader throughput, but I feel like it's not exploited at all. Isn't that what prevents the 7900 XTX from doing as well as a 4090?
A few games support or exploit it, and when it works, it's great:





Probably the best example:

https://www.techpowerup.com/review/amd-radeon-rx-7600/27.html (faster than 6700 XT, over 15% faster than 6650XT)

Also pretty good:


In those, it is clearly faster than the 6650 XT/6600 XT; otherwise it's often not: https://www.techpowerup.com/review/amd-radeon-rx-7600/11.html

7900 XTX (look for 4K, not 1080, primarily compared to 4080):

Prime example:


Also good:


 
Last edited:

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
A few games support or exploit it, and when it works, it's great:

How do you know? What if the game itself is optimised to work better on the AMD architecture?

1691836727182.png


:confused:

1691836861082.png


:confused:
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
1080p isn't a good comparison due to several Nvidia GPUs choking at that resolution; this has been happening since the RTX 30 generation.

This is an intentional and conscious design choice, since 1080p is already a very old resolution and there is no difference whether said GPUs render at 300 FPS or 315 FPS.
You get plenty of performance.

Also, users are kindly asked to move up to 2160p, and this is what I like about Nvidia: it does the opposite of what AMD does. And AMD is wrong in this case.
 
Joined
Aug 25, 2021
Messages
1,170 (0.99/day)
And you don't know what a standalone GPU company would've done - with more wafers at their disposal, since they would NOT be sharing them with CPUs like AMD does. So your answer makes no sense, not in the context of my speculation about a hypothetical ATI.
Do you know then? No.
600 mm² and even bigger dies are, and always will be, possible to achieve, so this is clearly wrong. It's also wrong to assume Nvidia will be so conservative (and technologically behind) as to move to chiplets only once they are "forced" to - no, they will do it when it makes sense for them, dollar-wise and performance-wise.
They are free to do whatever they want. I gave you the parameters of the incoming high-NA EUV machines that foundries have ordered for 2025 and onwards. So, companies will have a choice: stay on an older node with bigger chips, or move to newer nodes with smaller dies, whichever works better for their chip designs.
 
Joined
Jun 6, 2022
Messages
622 (0.69/day)
Recent extensive testing of the 7600 against the 4060 shows that the Radeon card is:
In 1080p:
1. 7% slower overall (mixed RT and non-RT games)
2. 2% slower when RT titles are removed - raster is literally the same on both cards
3. 20% slower in RT only - the caveat here is that the 4060 is choked with RT on; the hardware is slow (even the title in your video says 4050...)
In 1440p:
1. 9% slower overall (mixed RT and non-RT games)
2. 2% slower when RT titles are removed - raster is literally the same on both cards
3. 28% slower in RT only - RT is useless here, unless helped by DLSS and FG in a few titles; then latency becomes an issue...

There are already retailers in Europe selling the 7600 below MSRP, for example for £230 in the UK plus a free copy of Starfield, which makes the Radeon 7600 more attractive with the game bundle, saving buyers over 100 bucks altogether.
Please, no HU. Their "demonstration" with the 6700 XT tortured at 55 FPS to show the value of the extra VRAM sealed it for me. I have yet to meet a game running at 50-60 fps in which there are no scenes where the fps drops dramatically, and these drops happen precisely when you need as many frames per second as possible.

Nvidia offers Overwatch 2: Invasion with the 4060, AMD goes with Starfield. The issue with these games is whether they appeal to you or not. Otherwise, the price difference between the cards is only 25 euros in Romania, and I gather from customer comments that there is no longer the enthusiasm that met the 6000 series. Among the complaints are high noise and/or high temperatures (cheap coolers, dubious quality).
The difference in power consumption must also be taken into account (115 W versus 155 W), which for me is roughly the average consumption of the i5-13500 in the games I run with the 3070 Ti.
All in all, AMD sells cheaper because it is the only weapon it has to fight Nvidia with. Imagine the same price for the RTX 4090 and RX 7900 XTX. Who would still buy the RX?
 

yannus1

New Member
Joined
Jan 30, 2023
Messages
23 (0.03/day)
That's not what the rumour was saying. The rumour was that Intel will do just one or two cards each generation, with very low ambitions, instead of having a complete lineup and competing at all levels. For all intents and purposes it seems to have been true. But that doesn't mean this rumour will turn out to be true; hopefully not.
False: https://www.pcgamer.com/intel-arc-rumour-raja-koduri-twitter-response/

How about using one GPU only for raster and one only for RT, similar to the earliest physics accelerators before Nvidia bought Ageia, the company that made them (or 3dfx cards that didn't have 2D support)? ATX is basically just a ton of unused space in a modern gaming PC, so why do raster and RT have to be on the same chip, or even the same card?
Reminds me of the good old times. The problem with what you suggest is the latency created by communication between different chips. That's what lies behind the whole "monolithic chip vs chiplet" discussion.
 