
AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

Joined
Oct 12, 2005
Messages
713 (0.10/day)
RDNA3 already has some AI units, and RDNA4 will probably continue to have them. The thing is, AMD has its own CDNA cards and Xilinx chips that will always be better than RDNA at AI.

There's also the gamer's side of it: you don't need a huge amount of raw "AI compute power". You just run the trained model on the actual data; you don't train it live. The training pass requires far more processing power.

The thing is, Nvidia does not have a specialized compute/AI architecture the way AMD does. (They have a brand, but it uses the same architecture as the gaming GPUs most of the time.)

In the end, who knows which strategy is best: Nvidia, with fewer chips and a single architecture but a large slice of the die spent on AI that sits idle for gamers most of the time, or AMD, with two architectures and a specialized gaming architecture more tailored to the current workload.

We also have to consider that there are ASICs in the loop that crush GPUs in certain applications.

In the end, I don't know. I'm pretty sure the big bosses of these tech companies see things we don't. But I don't think AMD's move is a bad one. They're just saying their gaming cards won't compete with Nvidia on raw AI power. They're not saying their CDNA architecture won't compete, or that RDNA GPUs won't have AI acceleration.

Just that it won't be a focus.
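The inference-vs-training point can be sketched with back-of-the-envelope FLOP counts for a single dense layer; the layer sizes below are made-up numbers purely for illustration:

```python
# Rough FLOP count for one dense layer, illustrating why running a trained
# model (inference) is far cheaper than training it live.
# Layer sizes are made-up numbers, purely for illustration.

def forward_flops(batch: int, n_in: int, n_out: int) -> int:
    # One matrix multiply: 2 * batch * n_in * n_out multiply-adds
    return 2 * batch * n_in * n_out

def train_step_flops(batch: int, n_in: int, n_out: int) -> int:
    # Backprop needs roughly two extra matmuls of the same size
    # (gradients w.r.t. inputs and w.r.t. weights), so ~3x forward.
    return 3 * forward_flops(batch, n_in, n_out)

inference = forward_flops(batch=1, n_in=4096, n_out=4096)
training = train_step_flops(batch=1, n_in=4096, n_out=4096)
print(training / inference)  # -> 3.0, before even counting optimizer steps and epochs
```

And training repeats that step over millions of samples for many epochs, while a gamer's card only ever runs the forward pass.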
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
The fact that you are misquoting numbers on power draw tells me all I need to know. Maximum power draw is completely useless. In games, per TPU, the 7900 XT draws 50 more watts. In basic video playback it consumes 400% (lol) more power. 400 freaking percent. That number is insane.

You're derailing a thread, again, with fanboy BS. Re-read the OP: does it compare the 7900 XT with a 4070 Ti? No, not at all, so what are you on about, and how is it you get away with this? Yet this post will be gone in a minute.

"Just report it," they say. I did, it didn't work, and now I'm here.

As for AI, I fully align with AMD.

Use it for in-game adversarial intelligence, because using it to invent extra frames is arse IMHO, especially given that NPCs and in-game enemies are total shite.

Instead of an intelligent team to fight or a clever boss, we get refurbished boss fights that are just about learning different attack patterns. Great.
 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Oh well, Nokia thought the smartphone was a gimmick until it was too late.

Companies don't get to decide what's a gimmick; consumers do. Looks like AMD is following Nokia's lead.
 
Joined
May 13, 2015
Messages
632 (0.18/day)
Processor AMD Ryzen 3800X / AMD 8350
Motherboard ASRock X570 Phantom Gaming X / Gigabyte 990FXA-UD5 Revision 3.0
Cooling Stock / Corsair H100
Memory 32GB / 24GB
Video Card(s) Sapphire RX 6800 / AMD Radeon 290X (Toggling until 6950XT)
Storage C:\ 1TB SSD, D:\ RAID-1 1TB SSD, 2x4TB-RAID-1
Display(s) Samsung U32E850R
Case be quiet! Dark Base Pro 900 Black rev. 2 / Fractal Design
Audio Device(s) Creative Sound Blaster X-Fi
Power Supply EVGA Supernova 1300G2 / EVGA Supernova 850G+
Mouse Logitech M-U0007
Keyboard Logitech G110 / Logitech G110
Of course, the 7900 XT was very reasonable, getting absolutely destroyed by the 4070 Ti. And their CPUs? Oh, those are insanely reasonable; they priced 6 cores at €300 MSRP when the competition asks 350 for 14 cores that smack them around the room.
Newegg:
$839: 4070 Ti 16GB
$849: 7900 XT 20GB

TPU lists the 7900 XT as 9% faster than the 4070 Ti. I'm not saying it's a fantastic card, but I don't think you looked at prices before making that odd argument.

Newegg:
$319: Intel Core i5-13600K
$324: AMD Ryzen 9 5900X

So Intel costs less up-front, but what about energy consumption? The 12900K or whatever had a 1 GHz advantage over the 5800X3D, used a ton more energy, and still barely beat it by what, 1% in some games? Plus, the 13600K isn't optimized on Windows 10 because of the big/little-core nonsense, so who wants to willingly install Windows 11? Not that 10 is fantastic to begin with.
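The up-front price vs. energy trade-off is easy to sketch as a total-cost calculation. Every figure below (prices, wattages, electricity rate, usage hours) is a hypothetical placeholder, not a measured number:

```python
# Back-of-the-envelope total cost of ownership: purchase price plus
# electricity over the part's life. All figures are hypothetical
# placeholders, not measurements.

def total_cost(price_usd: float, avg_watts: float, hours: float,
               usd_per_kwh: float = 0.15) -> float:
    energy_kwh = avg_watts * hours / 1000
    return price_usd + energy_kwh * usd_per_kwh

# Hypothetical: chip A is $5 cheaper up-front but draws 60 W more under load.
hours = 4 * 365 * 3  # 4 h/day of gaming for 3 years
a = total_cost(price_usd=319, avg_watts=180, hours=hours)
b = total_cost(price_usd=324, avg_watts=120, hours=hours)
print(f"A: ${a:.2f}  B: ${b:.2f}")
```

With these made-up numbers the pricier, lower-power part ends up cheaper over three years, which is exactly the point being argued.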

Frankly, I don't see the point in getting upset about a general statement. No business is going to have all winners that make sense in every scenario. At some point you're not going to notice higher FPS improving a game, or any meaningful gains in most software, with the exception of poorly written software (e.g. Cyberpunk 2077 is clearly not optimized and has a lot of bloat).

Probably not, but we all see the AMD Radeon numbers and the Nvidia numbers, and they're easy for everyone to understand.

Nvidia needs competition; AMD Radeon today, unfortunately, isn't it. AMD cards are not cheap enough to justify the lack of RT performance, for example.
I think AMD's problem is that they really wanted to release before RDNA3 was ready, and unlike Intel, Nvidia had the wiggle room to improve performance through easy brute force (whereas Intel uses its own foundries). That raises the question: why can't Intel just dump a shed-ton of cache on their own CPUs? AMD did it almost as an afterthought, though obviously there is still a lot of heavy tech involved.

Ray tracing is slowly becoming more relevant, and I think they should have aimed for something more like 80% instead of 50%, because they know very well that Nvidia will stop at nothing to push the whole "we're blindly #1 with our $53,000 video card, so you'll get a winner when you buy a $200 card that gets 24% less performance than an identically priced AMD card" nonsense that so many people fall for. They should have waited and let the developers mature the drivers a bit. In the end, in 2023, it is at best moderately relevant, as the share of people with a card decent enough for RT is very limited.

I'd really like to see AMD do two things. First: 8GB and 16GB variants of the lower cards (e.g. the 7600 XT or whatever), like the 4GB and 8GB RX 570s (I think; I went from a 290X to an RX 6800). GPU RAM needs to increase (and here is Nvidia, decreasing it). Secondly, I'd like to see them really push for that #1 spot and not gimp their second-place cards to upsell their best. I like the strategy of limiting their own cards to two 8-pin connectors while allowing their partners to go with three, letting them get within striking distance of the 4090. If they had waited three months or so to let the drivers mature, I think the reviews would have been more forgiving.
 
Last edited:
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
twice the memory and it's still not utterly obliterating the 4070 Ti? What is going on here? It's as if bandwidth and memory size are massively overhyped (mainly by the AMD crowd).
You've already been corrected, but the gist is that Nvidia uses memory and bandwidth to execute planned obsolescence. While in past generations they had a good, sometimes fantastic, balance between VRAM and core, today you're looking at architectural changes where the type of game and load you present largely determines how much VRAM and bandwidth you need to keep the core at work.

Nvidia still has an incredibly good shader/SMX and lean architecture, and they know it. Perf/watt is stellar on Ada again, until you start hitting that VRAM wall. 10GB 3080s are already seeing it, and Ada cards at and below 12GB are already seeing it too. Sometimes the cache can alleviate a big part of it, but this is highly game/engine specific. Basically, for good performance on Nvidia, they're pushing a lot of work onto developers once again, as they've always done, supported by their own extensive engineering teams. It's the reason you get a game-ready driver every other week: not because Nvidia is doing great aftersales, but because they're fine-tuning to meet their performance targets.

This is how Nvidia carves out its competitive advantage. It's beyond a 'proprietary' approach: basically, they try to nudge the industry towards the best practices for their architecture. AMD does that too, except they've got a much longer-term plan going: asynchronous shader support is a great example, along with their Vulkan/DX12 push, and you can see how that pays off today in, for example, the 7970/280X. It absolutely runs circles around its contemporary, the 780/780 Ti, with the same amount of VRAM, in new APIs where the tech is used.

I'm not going to herald the fine-wine nonsense, because that IS nonsense, but the nuances above do exist. AMD has a long-term approach here, as per the title of this thread, and it seems to be starting to pay off for them, despite Nvidia's immense advantage in market share. But technically, and from a competitive standpoint, in terms of how they use their die and its size (which directly says something about margins and pricing flexibility) and the way they can leverage the consoles and the gaming push there, they have a much stronger position than Nvidia, which is actually moving away from the consumer market and further into the datacenter.

Oh well, Nokia thought the smartphone was a gimmick until it was too late.

Companies don't get to decide what's a gimmick; consumers do. Looks like AMD is following Nokia's lead.
This is absolutely true. Time will tell; so far, it's too early. But at the same time, AMD runs RT fine and does FSR fine without AI. Because of that, it's easy for them to conclude that investing in it is counterproductive. They simply don't need it for anything in a GPU.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Next thing, he's going to tell you he uses Vsync or a frame cap at 60. I've seen those users who claim the 4090 is very efficient and uses very little power with Vsync or a frame cap enabled. Then they measure power consumption, and according to their calculations it is very efficient. Utter crap, but it is what it is. Countless such posts everywhere.
Or even better: downclock it to 2000 MHz and then measure. But when they check how fast it can render, obviously there are no limits, but then they don't bring up the power consumption, since it is irrelevant. :laugh:
The 4090 is in fact very efficient. Actually, it is the most efficient card out there, especially for heavier workloads, not just gaming. I run mine with a 320 W power limit and it performs better than at stock; I can post some record-breaking numbers at just 320 W.
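As a rough illustration of how a power limit shifts perf/watt: the FPS and wattage figures below are hypothetical placeholders, not benchmark results.

```python
# Perf-per-watt at stock vs. under a manual power limit.
# All FPS and wattage figures are hypothetical placeholders,
# not benchmark results.

def perf_per_watt(fps: float, watts: float) -> float:
    return fps / watts

stock = perf_per_watt(fps=100.0, watts=450.0)    # hypothetical stock draw
limited = perf_per_watt(fps=95.0, watts=320.0)   # ~5% fps loss at 320 W

# A small fps sacrifice buys a large efficiency gain.
print(f"efficiency gain: {limited / stock:.2f}x")
```

The general point stands regardless of the exact numbers: big GPUs run well past their efficiency sweet spot at stock, so capping power trades a few percent of performance for a much larger cut in consumption.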
 
Joined
Dec 6, 2018
Messages
342 (0.15/day)
Location
Hungary
Processor i5-9600K
Motherboard ASUS Prime Z390-A
Cooling Cooler Master Hyper 212 Black Edition PWM
Memory G.Skill DDR4 RipjawsV 3200MHz 16GB kit
Video Card(s) Asus RTX2060 ROG STRIX GAMING
Display(s) Samsung Odyssey G7 27"
Case Cooler Master MasterCase H500
Power Supply SUPER FLOWER Leadex Gold 650W
Mouse BenQ Zowie FK1+-B
Keyboard Cherry KC 1000
Software Win 10
You've already been corrected, but the gist is that Nvidia uses memory and bandwidth to execute planned obscolescence, and while in past generations they had good - to sometimes fantastic - balance between VRAM and core, today you're looking at architectural changes that highly depend on the type of game and load you present to determine how much VRAM and bandwidth you need to keep the core at work.

Nvidia still has an incredibly good shader/SMX and lean architecture and they know this. The perf/watt is stellar on ada again - until you start hitting that VRAM wall. 10GB 3080's are already seeing it. And Ada cards at and below 12GB are also already seeing it. Sometimes the cache can alleviate a big part of it but this is highly game/engine specific. Basically, for good performance on Nvidia, they're pushing a lot of work to developers once again, as they've always done, supported by their own extensive engineering teams. Its the reason you get a game ready driver every odd week. Not because Nvidia is doing great aftersales... but because they're fine tuning to meet their perf targets.

This is how Nvidia carves out its competitive advantage. Its beyond a 'proprietary' approach. Basically they try to nudge the industry towards their best practices within the architecture. AMD does that too, except they've got a much longer-term plan going on: asynchronous shader support is a great example, along with their Vulkan/DX12 push and you can see how that pays off today in for example the 7970/280X. It absolutely runs circles around its equivalent 780/780ti of the time, with the same VRAM amount, in new APIs where the tech is used.

I'm not going to herald the fine wine nonsense, because that IS nonsense, but the nuances above do exist. AMD has a long term approach here, as per the title of this thread - and it seems to begin working for them, despite the immense favor in market share for Nvidia. But technically and from a competitive standpoint in terms of how they use their die and its size (which directly says something about margins/product pricing flexibility) and the way they can use the consoles and gaming push there to their advantage... they have a much stronger position than Nvidia, which is actually moving away from the consumer market and more into datacenter.


This is absolutely true. Time will tell. So far, its too early. But at the same time: AMD runs RT fine and also does FSR fine without AI. And because of that, its easy for them to see that investing in it is counterproductive. They simply don't need it for anything in a GPU.
Nvidia improved memory bandwidth by optimizing and redesigning circuitry, and also by offloading some of the work to the CPU. That's how the 4070 Ti can be on par with the 3090 Ti, which has double the VRAM and bandwidth. Pretty impressive; I'm amazed at what Nvidia can do with such a narrow bus.

All the while, AMD just keeps shoveling more and more RAM onto its cards with zero innovation. People gobble it up; it works. It's a cheap trick. I would prefer AMD to actually innovate, but I've been waiting for a decade and it's become stale. I'll just buy Nvidia and get over it.

AMD's CPUs are awesome, but their GPUs are electronic trash.
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Nvidia improved memory bandwidth by optimizing and redesigning circuitry, and also by offloading some of the work to the CPU. That's how the 4070 Ti can be on par with the 3090 Ti, which has double the VRAM and bandwidth. Pretty impressive; I'm amazed at what Nvidia can do with such a narrow bus.

All the while, AMD just keeps shoveling more and more RAM onto its cards with zero innovation. People gobble it up; it works. It's a cheap trick. I would prefer AMD to actually innovate, but I've been waiting for a decade and it's become stale. I'll just buy Nvidia and get over it.

AMD's CPUs are awesome, but their GPUs are electronic trash.
Innovation? GCD and MCD IS innovation. What has Ada innovated? Third-gen RT, DLSS 3... err, wait, what now?!
OK, now you're clearly confused. Can we get back on topic?
 
Joined
Jan 14, 2019
Messages
12,577 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I agree with AMD here... there's no need for AI in a consumer GPU. FSR is proof of that.
 
Joined
Oct 4, 2017
Messages
706 (0.27/day)
Location
France
Processor RYZEN 7 5800X3D
Motherboard Aorus B-550I Pro AX
Cooling HEATKILLER IV PRO , EKWB Vector FTW3 3080/3090 , Barrow res + Xylem DDC 4.2, SE 240 + Dabel 20b 240
Memory Viper Steel 4000 PVS416G400C6K
Video Card(s) EVGA 3080Ti FTW3
Storage XPG SX8200 Pro 512 GB NVMe + Samsung 980 1TB
Display(s) Dell S2721DGF
Case NR 200
Power Supply CORSAIR SF750
Mouse Logitech G PRO
Keyboard Meletrix Zoom 75 GT Silver
Software Windows 11 22H2
I wonder, do you use Tensor or Ray tracing cores anywhere?

Me? Nope... my GPU does, though: https://www.nvidia.com/en-us/geforce/news/nvidia-rtx-games-engines-apps/



I mean, you guys need to wake up. It's 2023; we are well past 2018. Both ray tracing and machine-learning anti-aliasing have seen wide adoption and aren't going anywhere; if anything, they are gaining importance over raster every year... At the risk of repeating myself, AMD is failing to read the room, big time!
 
Last edited:
Joined
Jan 14, 2019
Messages
12,577 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
All I read is: "We will not have an answer to DLSS 3.0 with RDNA 4." The speech seems entirely geared towards expectation management.
It seems the AMD GPU division is happy to continue living on Nvidia's scraps.
All I read is: "upscaling tech doesn't need AI, as we've proven with FSR".
 
Joined
Jun 5, 2018
Messages
240 (0.10/day)
All I read is: "upscaling tech doesn't need AI, as we've proven with FSR".

OK, but I am talking specifically about frame generation here. AMD is implying it will not have a similar solution in its next gen, and I also continue to expect them to lag on upscaling image quality (FSR almost always looks worse), ray tracing, video editing, etc.

How many features are you willing to leave on the table before you decide it just isn't worth it? I am pretty disappointed by the RDNA 3 feature set vs Ada already; if they double down on not catching up to Nvidia, then they had better be at least 30% cheaper at each tier next time around.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
How many features are you willing to leave on the table before you decide it just isn't worth it? I am pretty disappointed by the RDNA 3 feature set vs Ada already; if they double down on not catching up to Nvidia, then they had better be at least 30% cheaper at each tier next time around.
I'm with you on that. The only reason I'm not going for the 7900 series is the price. The fact IS that the feature set is smaller; whatever value AMD wants to attribute to that isn't really relevant. It's clear they're not looking to flood the market with 7900s... it's there as proof of concept.

I hope that lower down the stack the price will reflect the product. I think AMD banked on the specs being the product and priced along those lines relative to Nvidia, but that's their typical marketing limbo.
 
Joined
Oct 27, 2009
Messages
1,190 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
That was in 2021, when gaming GPU sales were being boosted significantly by crypto. Look at Nvidia's numbers for Q3 2022: gaming sales are only half of that. Gaming contributes less and less to Nvidia's revenue.
They're trying to get sued two years in a row for misrepresenting finances... They know approximately how many cards were sold to miners based on driver download numbers and updates versus cards sold. Also, they straight-up sold batches of cards to miners.

I agree with AMD here... there's no need for AI in a consumer GPU. FSR is proof of that.
That is not what was said.
 
Joined
Sep 17, 2014
Messages
22,673 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
That was in 2021, when gaming GPU sales were being boosted significantly by crypto. Look at Nvidia's numbers for Q3 2022: gaming sales are only half of that. Gaming contributes less and less to Nvidia's revenue.
Relatively, yes, but then again, datacenter is just an emerging market for them. It's on top of gaming, not instead of it.
 
Joined
Jan 14, 2019
Messages
12,577 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
OK, but I am talking specifically about frame generation here. AMD is implying it will not have a similar solution in its next gen, and I also continue to expect them to lag on upscaling image quality (FSR almost always looks worse), ray tracing, video editing, etc.

How many features are you willing to leave on the table before you decide it just isn't worth it? I am pretty disappointed by the RDNA 3 feature set vs Ada already; if they double down on not catching up to Nvidia, then they had better be at least 30% cheaper at each tier next time around.
Ray tracing and video editing features have nothing to do with AI, and aren't mentioned in this article.

As for upscaling, I'm not a fan of it anyway. I run everything at native resolution as much as possible, and would rather lower some other image-quality settings than resort to upscaling.

I can't say much about frame generation, but since I can make any game run at 1080p 60 fps on my current hardware, and I'm not planning a monitor upgrade, I don't have much need for it anyway.

Sure, Nvidia has more stuff in their GPUs, but whether you call it features or gimmicks is highly debatable.

That is not what was said.
Nope. That is what was implied.
 
Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
All I read is: "We will not have an answer to DLSS 3.0 with RDNA 4." The speech seems entirely geared towards expectation management.
It seems the AMD GPU division is happy to continue living on Nvidia's scraps.

It would be funny if Nvidia delivered an AI model for NPCs and bots that ran like crap on AMD (the 4090 has 5x the tensor-op throughput of the 7900 XTX); basically what David Wang said himself:

he hopes that AI is leveraged in improving gameplay—such as procedural world generation, NPCs, bot AI, etc; to add the next level of complexity; rather than spending the hardware resources on image-processing

Then it will become a gimmick, according to some people :rolleyes:.
 
Joined
Jan 14, 2019
Messages
12,577 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
It would be funny if Nvidia delivered an AI model for NPCs and bots that ran like crap on AMD (the 4090 has 5x the tensor-op throughput of the 7900 XTX); basically what David Wang said himself:



Then it will become a gimmick, according to some people :rolleyes:.
There was an interesting discussion about this in another thread. Somebody said that if game AI were extremely clever, to the point of learning the player's tactics and countering them to win, gaming wouldn't be fun. Nobody likes losing every time.

Edit: Not an opinion on my part, just food for thought. :)
 
Last edited:
Joined
Jun 5, 2018
Messages
240 (0.10/day)
It would be funny if Nvidia delivered an AI model for NPCs and bots that ran like crap on AMD (the 4090 has 5x the tensor-op throughput of the 7900 XTX); basically what David Wang said himself:



Then it will become a gimmick, according to some people :rolleyes:.

Nvidia shares are trending up massively, riding the popularity of AI and ChatGPT. Maybe AMD wants to focus its AI resources on the CDNA segment only, but it's just a fact that AI will increasingly be the focus of computing in every segment from now on, and gaming will likely benefit greatly from it. AMD may ignore the needs of its customers by cutting corners on gaming GPUs, but they can't ignore their shareholders, which is why I think this strategy of keeping the AI focus to a minimum will not work out for them.

 

Joined
Nov 11, 2016
Messages
3,459 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
There was an interesting discussion about this in another thread. Somebody said that if game AI were extremely clever, to the point of learning the player's tactics and countering them to win, gaming wouldn't be fun. Nobody likes losing every time.

Modders have already incorporated ChatGPT into Bannerlord; AI-enhanced storytelling will be neat.

So yeah, AMD is way behind in everything.
 
Joined
Mar 29, 2014
Messages
496 (0.13/day)
A lot of people are excited about "AI democratizing creative/technical jobs", but don't realize it's also going to oversaturate the market with low-effort content. We already find faults in work that takes a lot of money and willpower to produce; AI-generated content is just going to make more of it.

We need to be careful about how we use that tool (which is becoming more than a tool); a few generations down the line, we might end up with a society addicted to instant results and less interested in learning. Studies show that Gen Z is really not that tech-literate... because they don't need to understand how something actually works to use it; it's been simplified so much.
So in that sense I like AMD's statement: we don't need to use AI for every little thing. It's a wonderful thing for sure, but overusing it might have bad consequences.
You're wrong there. Society is already addicted to instant results; it won't take a couple of generations.
Would you jump off a cliff if you thought you could get a better TimeSpy score?
 
Joined
Oct 27, 2009
Messages
1,190 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
It would be funny if Nvidia delivered an AI model for NPCs and bots that ran like crap on AMD (the 4090 has 5x the tensor-op throughput of the 7900 XTX); basically what David Wang said himself:



Then it will become a gimmick, according to some people :rolleyes:.
The 4090 is monstrously powerful for inference and training. AMD's MI300A is setting out to rectify Nvidia's tensor feature-set lead; currently AMD is very strong in high-precision workloads and weak in inference and sparse math. AMD has been very vague about what matrix/tensor cores RDNA3 has: whether they're the same as the MI200's, or a stripped or enhanced feature set.
The MI300 is supposed to get an 8x AI uplift over the MI250X, which is pretty impressive going from 560 W to 600 W; finally, better tensor/matrix cores.

For RDNA3, we have ~61 TFLOPS FP32. What the matrix cores do on the Instinct side is let FP64 run 1:1 with FP32, and then half precision is 4x single precision.
What AMD shows for RDNA3 is packed math, 61/123 TFLOPS; no matrix performance is listed yet.
We know it supports BFloat16, but not whether it supports INT4/INT8, or what acceleration it gets.
The slide below shows up to a 2.7x matrix speed increase, but is that over FP32 or INT16? Is it BFloat16 at 164.7 TFLOPS, or 342.9 INT16/8/4 TOPS?
I don't know; they have been tight-lipped.

Nvidia measures the 4090 at:
FP16 with FP32 accumulate = 165.2 / 330.4* TFLOPS
FP8 = 660.6 / 1321.2* TFLOPS
INT8 = the same as FP8
INT4 = 1321.2 / 2642.4* TOPS
*sparse

Page 30 has the numbers.
So my guess is that AMD has BFloat16/INT16 at around 342, which is competitive with the 4090, as they are with the Instincts... and then they just get decimated at INT8/INT4 and sparse.
(attached slide: AMD RDNA3 matrix acceleration, up to 2.7x)
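One pattern worth noting in the 4090 figures quoted above: every "*sparse" number is exactly double its dense counterpart, because 2:4 structured sparsity lets the tensor cores skip half the operands. A quick arithmetic check on the posted numbers:

```python
# Sanity check on the 4090 throughput figures quoted above: each "sparse"
# number should be exactly 2x its dense counterpart (2:4 structured
# sparsity skips half the operands). Figures copied from the post.

rtx4090 = {  # format: (dense, sparse), in TFLOPS / TOPS
    "FP16 (FP32 acc)": (165.2, 330.4),
    "FP8":             (660.6, 1321.2),
    "INT4":            (1321.2, 2642.4),
}

for fmt, (dense, sparse) in rtx4090.items():
    print(f"{fmt}: sparse/dense = {sparse / dense:.1f}x")
```

The same arithmetic also shows FP8 dense at 4x the FP16-with-FP32-accumulate figure, which is why the INT8/INT4/sparse end of the table is where AMD is expected to fall furthest behind.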
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Modders have already incorporated ChatGPT into Bannerlord; AI-enhanced storytelling will be neat.

So yeah, AMD is way behind in everything.
Yeah, except server-side ChatGPT doesn't give a shit what GPU you have. Sooo, irrelevant.

AMD does have AI hardware, just not as much as Nvidia on gaming GPUs; CDNA beats Nvidia, though.

And three years in, what has Nvidia done with tensors? Frame generation... oh yeah, and RT, which really needs them. Oh wait.

No, that's right: you now need 4th gen, and the tensors on the 2000- and 3000-series are good for little now.

RTX is clearly another driver of e-waste, something Nvidia is getting better at making.
 