
AMD Claims Ryzen AI 9 HX 370 Outperforms Intel Core Ultra 7 258V by 75% in Gaming

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
2,679 (0.99/day)
AMD has published a blog post about its latest AMD Ryzen AI 300 series processors, claiming they are changing the game for portable devices. To back these claims, Team Red has compared its Ryzen AI 9 HX 370 processor to Intel's latest Core Ultra 7 258V, using the following games: Assassin's Creed Mirage, Baldur's Gate 3, Borderlands 3, Call of Duty: Black Ops 6, Cyberpunk 2077, Doom Eternal, Dying Light 2 Stay Human, F1 24, Far Cry 6, Forza Horizon 5, Ghost of Tsushima, Hitman 3, Hogwarts Legacy, Shadow of the Tomb Raider, Spider-Man Remastered, and Tiny Tina's Wonderlands. The conclusion was that AMD's Ryzen AI 9 HX 370, with its integrated Radeon 890M graphics powerhouse, outperformed the Intel "Lunar Lake" Core Ultra 7 258V with Intel Arc Graphics 140V by 75% on average.

To support this performance leap, AMD also leans on software technologies, including FidelityFX Super Resolution 3 (FSR 3) and HYPR-RX, to unlock additional performance and gaming efficiency. FSR 3 alone enhances visuals in over 95 games, while HYPR-RX, with features like AMD Fluid Motion Frames 2 (AFMF 2) and Radeon Anti-Lag, provides substantial performance boosts across thousands of games. The company has also compared its FSR/HYPR-RX combination with Intel's XeSS, which is available in around 130 games. AMD claims its broader suite supports 415+ games and is optimized for smoother gameplay, with AFMF 2 alone claimed to work in thousands of titles, while Intel's GPU software stack lacks a comparable feature. Of course, these marketing claims are to be taken with a grain of salt, so independent testing remains the best way to compare the two.
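As a rough illustration of how frame-generation features can stretch a headline percentage (the frame rates below are hypothetical and are not taken from AMD's slides), consider how the displayed frame rate relates to the frames actually rendered when only one side has interpolation enabled:

```python
# Hypothetical illustration: how frame generation changes a "% faster" headline.
# None of these numbers come from AMD's slides; they only show the arithmetic.

def displayed_fps(rendered_fps: float, interpolation_factor: float = 1.0) -> float:
    """Frames shown per second when an interpolator inserts extra frames."""
    return rendered_fps * interpolation_factor

amd_rendered = 40.0    # assumed native render rate on the Radeon 890M
intel_rendered = 38.0  # assumed native render rate on the Arc 140V

# With 2x frame generation enabled only on the AMD side:
amd_shown = displayed_fps(amd_rendered, interpolation_factor=2.0)  # 80 FPS
intel_shown = displayed_fps(intel_rendered)                        # 38 FPS

print(f"Native gap:   {amd_rendered / intel_rendered - 1:+.0%}")   # about +5%
print(f"Headline gap: {amd_shown / intel_shown - 1:+.0%}")         # about +111%
```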




Comparing raw specifications, the AMD Ryzen AI 9 HX 370 and Intel Core Ultra 7 258V each employ a hybrid core architecture, but AMD's design delivers more total cores and threads. The Ryzen AI 9 HX 370 boasts 12 cores (four performance and eight efficiency) with 24 threads, while the Core Ultra 7 258V features eight cores and eight threads. In terms of cache, AMD's Ryzen processor includes a substantial 24 MB of shared L3 cache, supported by 1 MB of L2 cache per core. On the Lunar Lake chip, each P-core is equipped with 192 KB of L1 cache and 2.5 MB of L2 cache while sharing a 12 MB L3 cache, and each E-core has 96 KB of L1 cache, along with 4 MB of L2 cache shared per module.

For graphics, the Ryzen AI 9 HX 370 integrates the Radeon 890M, which uses the RDNA 3.5 architecture with 16 compute units running at up to 2.9 GHz. This delivers impressive graphics capability for an integrated GPU, in contrast to Intel's Core Ultra 7 258V, which includes Arc Graphics 140V based on the Xe2 architecture, a capable option but generally less optimized for games than AMD's graphics. The Arc Graphics 140V has eight Xe2 cores clocked at up to 1.95 GHz.
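For a back-of-the-envelope sense of how those GPU specs compare, peak FP32 throughput can be estimated from the quoted unit counts and clocks. The per-unit lane counts used below (64 FP32 lanes per RDNA 3.5 CU, ignoring dual-issue, and 128 lanes per Xe2 core) are commonly cited figures rather than anything stated in AMD's post, so treat this as a sketch, not a benchmark:

```python
# Rough peak FP32 throughput from the quoted specs (a sketch, not a benchmark).
# Lane counts per unit are commonly cited figures and an assumption here;
# RDNA 3.5 dual-issue and real-world sustained clocks are ignored.

def peak_tflops(units: int, fp32_lanes_per_unit: int, clock_ghz: float) -> float:
    # 2 FLOPs per lane per clock for a fused multiply-add (FMA)
    return units * fp32_lanes_per_unit * 2 * clock_ghz / 1000

radeon_890m = peak_tflops(units=16, fp32_lanes_per_unit=64, clock_ghz=2.9)   # ~5.9 TFLOPS
arc_140v    = peak_tflops(units=8,  fp32_lanes_per_unit=128, clock_ghz=1.95) # ~4.0 TFLOPS

print(f"Radeon 890M: {radeon_890m:.1f} TFLOPS, Arc 140V: {arc_140v:.1f} TFLOPS")
```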

View at TechPowerUp Main Site | Source
 
Joined
Sep 6, 2013
Messages
3,439 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
AMD is having a problem making its integrated solutions faster, and Intel is catching up. AMD needs frame generation to stay ahead, and this should be an alarm for them to wake up and find solutions to overcome the known bottlenecks that iGPUs face. Or Intel might do that in the future, and AMD could fall behind. AMD has a habit of losing an advantage against the competition, and unfortunately they seem to be doing the same with integrated graphics.
 
Joined
Mar 15, 2017
Messages
73 (0.03/day)
Those are some pretty misleading graphs. Their raw gaming performance is pretty evenly matched, and if XeSS is such a hog, the Intel users could use FSR2/3 as well.
 
Joined
Aug 12, 2022
Messages
251 (0.29/day)
Testing from reviewers suggests to me that this claim is absurd. The 258V beats the 370 in many games, and while it might lose overall, it's only by a little. FSR may be an edge for AMD, but FSR frame generation is highly situational; since frame generation introduces latency, it should always be turned off if you need a responsive game such as Rocket League. Moreover, as I recall, XeSS is better at upscaling than FSR; it's just lacking frame generation and wider support in games.
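To put a rough number on the latency point: interpolation-based frame generation has to hold back a rendered frame before it can insert the in-between one, so the added delay scales with the native frame time. A quick sketch with illustrative figures, not measurements:

```python
# Rough added latency from interpolation-based frame generation.
# Illustrative only: real pipelines add buffering and processing overhead on top.

def added_latency_ms(rendered_fps: float) -> float:
    """Extra delay from holding roughly one rendered frame for interpolation."""
    frame_time_ms = 1000.0 / rendered_fps
    return frame_time_ms

for fps in (30, 60, 120):
    print(f"{fps:>3} rendered FPS -> roughly +{added_latency_ms(fps):.0f} ms of extra latency")
# 30 FPS -> ~+33 ms, 60 FPS -> ~+17 ms, 120 FPS -> ~+8 ms
```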

Lastly, why compare the 258V to the 370? The 370 is AMD's top model. The 268V and the 288V would both do better than the 258V. I think it'd be fair to compare the 258V to the 365.
 
Joined
Jun 14, 2020
Messages
3,647 (2.19/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Testing from reviewers suggests to me that this claim is absurd. The 258V beats the 370 in many games, and while it might lose overall, it's only by a little. FSR may be an edge for AMD, but FSR frame generation is highly situational; since frame generation introduces latency, it should always be turned off if you need a responsive game such as Rocket League. Moreover, as I recall, XeSS is better at upscaling than FSR; it's just lacking frame generation and wider support in games.

Lastly, why compare the 258V to the 370? The 370 is AMD's top model. The 268V and the 288V would both do better than the 258V. I think it'd be fair to compare the 258V to the 365.
AMD is using two frame generators stacked on top of each other (FSR FG and AFMF) to get to that 75% they are claiming :kookoo:
 
Joined
Nov 26, 2021
Messages
1,709 (1.50/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
AMD is having a problem making its integrated solutions faster, and Intel is catching up. AMD needs frame generation to stay ahead, and this should be an alarm for them to wake up and find solutions to overcome the known bottlenecks that iGPUs face. Or Intel might do that in the future, and AMD could fall behind. AMD has a habit of losing an advantage against the competition, and unfortunately they seem to be doing the same with integrated graphics.
They have the solution; they just refuse to implement it. Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU. A 16 MB LLC for the regular APUs would only increase die size by about 3% and improve performance significantly. As for this claim, it's really dishonest to include frame generation when only one of the products can use it.
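For anyone wondering how a ~3% figure like that could pencil out, here is a back-of-the-envelope check. The bit-cell area, array overhead, and die size below are assumptions typical of an N4-class design, not official AMD numbers:

```python
# Back-of-the-envelope: die area of a 16 MB last-level cache on an N4-class APU.
# All inputs are assumptions for illustration, not official AMD figures.

SRAM_BITCELL_UM2 = 0.021  # assumed high-density 6T SRAM bit cell, ~N5/N4 class
ARRAY_OVERHEAD   = 2.5    # assumed factor for tags, periphery, routing, redundancy
APU_DIE_MM2      = 230.0  # assumed die size in the Strix Point ballpark

cache_bits = 16 * 8 * 2**20                         # 16 MB in bits
raw_cell_mm2 = cache_bits * SRAM_BITCELL_UM2 / 1e6  # um^2 -> mm^2
cache_mm2 = raw_cell_mm2 * ARRAY_OVERHEAD

print(f"~{cache_mm2:.1f} mm^2, about {cache_mm2 / APU_DIE_MM2:.1%} of a {APU_DIE_MM2:.0f} mm^2 die")
```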
 
Joined
Dec 28, 2012
Messages
3,991 (0.91/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
AMD is having a problem making its integrated solutions faster, and Intel is catching up. AMD needs frame generation to stay ahead, and this should be an alarm for them to wake up and find solutions to overcome the known bottlenecks that iGPUs face. Or Intel might do that in the future, and AMD could fall behind. AMD has a habit of losing an advantage against the competition, and unfortunately they seem to be doing the same with integrated graphics.
Frame generation degrades image quality, ESPECIALLY FSR which still has significant issues with blurriness and latency. If that is the best solution AMD can come up with they're gonna get royally screwed.

The actual solution has already been presented: wider memory buses, larger GPU core counts, and X3D cache. AMD simply refuses to implement these solutions en masse. We've seen the sheer difference X3D makes on the desktop chips, where even the meager 2 CU iGPUs see major performance increases. This will only escalate with larger chips.

AMD is using two frame generators stacked on top of each other (FSR FG and AFMF) to get to that 75% they are claiming :kookoo:
God if I wanted to rub vaseline on my eyeballs and play at 640x480 I can just do that!
 
Joined
Apr 12, 2013
Messages
7,573 (1.77/day)
They're getting around to it, though they probably only have a 1-2 year window within which they can make a lot (?) of money from Halo. If the leather jacket guy releases the rumored ARM chips, also competing with QC, for Windows, then their (x86) advantage instantly vanishes!
Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU.
They really should've thought about this ~5 years back. Many of us were talking about such chips since PS4 days but I guess it took a whole Apple to get those dumb arses seeing the light :ohwell:
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It's good seeing them plaster everything with FSR3 and AFMF, Nvidia started this idiotic marketing strategy and AMD needs to make the best out of it.

Screw it, have 1 gazillion FPS, we da best.
 
Joined
Feb 29, 2016
Messages
645 (0.20/day)
Location
Chile
System Name Shinano
Processor AMD Ryzen 9 5900X
Motherboard ROG Strix B550-F GAMING WIFI II
Cooling Thermalright Peerless Assassin 120SE
Memory 32GB DDR4 3200MHz
Video Card(s) XFX RX 6700 10GB
Storage 970 EVO Plus 1TB | A1000 480GB
Display(s) Lenovo G27q-20 (1440p, 165Hz)
Case NZXT H510
Audio Device(s) Sony WH-1000XM4 | Edifier R1000T4
Power Supply SuperFlower Leadex Gold III 850W
Mouse Logitech G305
Keyboard IK75 v3 (QMK) | HyperX Alloy Origins
This is so misleading lmao, I hate the trend of using frame generation as part of a comparison.
 
Joined
Jul 29, 2022
Messages
547 (0.61/day)
They have the solution; they just refuse to implement it. Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU. A 16 MB LLC for the regular APUs would only increase die size by about 3% and improve performance significantly. As for this claim, it's really dishonest to include frame generation when only one of the products can use it.
I read somewhere that the APUs after Strix may feature X3D memory (on monolithic chips), which would be massive. If only they'd release new APUs more often than every 2-3 years, but I guess there's more money in server chips they can repurpose for desktops.
 
Joined
Apr 12, 2013
Messages
7,573 (1.77/day)
Pretty sure top-end Halos would have great margins as well; you're replacing an Nvidia GPU + potentially an Intel CPU with this! It's a win/win for OEMs, at least till M4 (MBP) prices drop next year.

The great thing about servers is volume+margins, not necessarily just the latter.
 
Joined
Aug 10, 2020
Messages
344 (0.21/day)
AMD, why don't you stay on our good side and not bullshit us? We know you are using frame generation and other nonsense to lie about your FPS. Just put some V-Cache on your mobile CPUs too; they'd run way more efficiently in gaming.
 
Joined
Jun 14, 2020
Messages
3,647 (2.19/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
AMD, why don't you stay on our good side and not bullshit us? We know you are using frame generation and other nonsense to lie about your FPS. Just put some V-Cache on your mobile CPUs too; they'd run way more efficiently in gaming.
Bad idea. Unless you are talking about a desktop-replacement laptop that will be plugged in 24/7, their current monolithic mobile chips are way more efficient in gaming than their current desktop X3D chips.
 
Joined
Nov 26, 2021
Messages
1,709 (1.50/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
They're getting around to it, though they probably only have a 1-2 year window within which they can make a lot (?) of money from Halo. If the leather jacket guy releases the rumored ARM chips, also competing with QC, for Windows, then their (x86) advantage instantly vanishes!

They really should've thought about this ~5 years back. Many of us were talking about such chips since PS4 days but I guess it took a whole Apple to get those dumb arses seeing the light :ohwell:
Ironically, AMD used a large dedicated cache for the GPU long before Apple: the Xbox One had 32 MB of on-chip memory for the IGP.
 
Joined
Apr 2, 2011
Messages
2,852 (0.57/day)
So... I think this is really scummy. AMD is living down to the criticism some people level at it, because using frame generation to create interpolated frames and claiming that's raw performance is absolutely lying in this instance. It's trying to claim your processors are magically better and to sell based on a blatant lie.

All of this said, I think it will be scarier now when I only see low teens on the generational performance gaps... because it'll remind me exactly what shenanigans can be pulled behind the scenes. If AMD had claimed they were 20-30% faster, I'd have bought it without immediately calling them out... and it would have taken effort to prove them wrong. When they claim such a silly discrepancy, it immediately raises the hackles and makes you feel cheated. That's just a bad showing.
 
Joined
Jan 27, 2024
Messages
394 (1.14/day)
Processor Ryzen AI
Motherboard MSI
Cooling Cool
Memory Fast
Video Card(s) Matrox Ultra high quality | Radeon
Storage Chinese
Display(s) 4K
Case Transparent left side window
Audio Device(s) Yes
Power Supply Chinese
Mouse Chinese
Keyboard Chinese
VR HMD No
Software Android | Yandex
Benchmark Scores Yes
Those are some pretty misleading graphs. Their raw gaming performance is pretty evenly matched, and if XeSS is such a hog, the Intel users could use FSR2/3 as well.

Because the bottleneck is RAM throughput, which is the same for both platforms. Dual-channel memory means slow graphics.

They have the solution; they just refuse to implement it. Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU. A 16 MB LLC for the regular APUs would only increase die size by about 3% and improve performance significantly. As for this claim, it's really dishonest to include frame generation when only one of the products can use it.

I read somewhere that the APUs after Strix may feature X3D memory (on monolithic chips), which would be massive. If only they'd release new APUs more often than every 2-3 years, but I guess there's more money in server chips they can repurpose for desktops.

They refuse because they don't want to; who knows why on Earth they don't want to?
They need triple- or quad-channel DDR5 and more LLC; 64 MB, 96 MB, or even 128 MB would suffice and make the APU look not that bad.
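To put rough numbers on the channel-count argument, here is a quick peak-bandwidth comparison. The speeds are common platform examples (the 256-bit figure is the widely reported Strix Halo configuration), so treat them as illustrative rather than official:

```python
# Peak memory bandwidth for a few iGPU-relevant configurations (illustrative).
# Speeds are common examples, not measurements of these specific laptops.

def bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes times transfer rate."""
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

configs = {
    "Dual-channel DDR5-5600 desktop (128-bit)":      bandwidth_gbs(128, 5600),  # ~90 GB/s
    "LPDDR5X-7500, typical HX 370 laptop (128-bit)": bandwidth_gbs(128, 7500),  # ~120 GB/s
    "LPDDR5X-8533, Core Ultra 258V (128-bit)":       bandwidth_gbs(128, 8533),  # ~137 GB/s
    "LPDDR5X-8000, reported Strix Halo (256-bit)":   bandwidth_gbs(256, 8000),  # ~256 GB/s
}
for name, gbs in configs.items():
    print(f"{name}: {gbs:.0f} GB/s")
```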
 
Joined
Aug 27, 2024
Messages
50 (0.38/day)
Processor Ryzen 7 7800X3D
Motherboard Gigabyte B650 Gaming X AX
Cooling DeepCool AK620
Memory G.Skill Trident Z5 Neo DDR5 6000 32GB (2x16GB)
Video Card(s) XFX Speedster Merc 310 7900XTX
Storage Silicon Power 4TB XS70 PCIE4.0 + Silicon Power 4TB UD90 PCIE4.0+ Silicon Power 2TB A80 PCIE3.0 NVME
Display(s) Samsung 32" Odyssey Neo G7 4K UHD 165Hz
Case Phanteks Eclipse P600S
Power Supply Rosewill Gaming 80 Plus Gold 1200W
Software Windows 11 Pro
Ironically, AMD used a large dedicated cache for the GPU long before Apple: the Xbox One had 32 MB of on-chip memory for the IGP.
To further your point, this philosophy extends back to the ATI days. The ATI-designed GPU for the 360 had 10 MB of on-chip memory, and prior to that, their GPU design for the GameCube had 3 MB on the chip. Come to think of it, IIRC the Wii U's GPU had 32 MB of on-chip memory as well.
 
Joined
Jul 29, 2022
Messages
547 (0.61/day)
To further your point, this philosophy extends back to the ATI days. The ATI-designed GPU for the 360 had 10 MB of on-chip memory, and prior to that, their GPU design for the GameCube had 3 MB on the chip. Come to think of it, IIRC the Wii U's GPU had 32 MB of on-chip memory as well.
I don't know about the Nintendo ones but the X360 eDRAM was off chip. It was a separate die sitting next to the GPU.
 
Joined
Apr 13, 2022
Messages
1,207 (1.21/day)
I don't know about the Nintendo ones but the X360 eDRAM was off chip. It was a separate die sitting next to the GPU.
  • 243 MHz graphics chip
  • 3 MB embedded GPU memory (eDRAM)
    • 2 MB dedicated to Z-buffer and framebuffer
    • 1 MB texture cache
  • 24 MB 1T-SRAM @ 486 MHz (3.9 GB/s) directly accessible for textures and other video data
  • Fixed function pipeline (no support for programmable vertex or pixel shaders in hardware)
  • Texture Environment Unit (TEV) - capable of combining up to 8 textures in up to 16 stages or "passes"
  • ~30 GB/s internal bandwidth^
  • ~18 million polygons/second^
  • 972 Mpixels/sec peak pixel fillrate
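Those pasted figures are internally consistent, for what it's worth: the peak fillrate follows directly from the clock and the pixel pipeline count. Flipper's four pixel pipelines are a commonly cited spec that isn't stated in the list above, so that part is an assumption:

```python
# Quick sanity check on the "Flipper" fillrate figure in the list above.
# The four pixel pipelines are a commonly cited spec, assumed here (not in the list).

clock_mhz = 243
pixel_pipelines = 4  # assumption: one pixel written per pipeline per clock

peak_fillrate_mpixels = clock_mhz * pixel_pipelines
print(f"{peak_fillrate_mpixels} Mpixels/s peak fillrate")  # 972, matching the spec list
```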
 

Rubinhood

New Member
Joined
Feb 14, 2023
Messages
10 (0.01/day)
They refuse because they don't want to; who knows why on Earth they don't want to?
They need triple- or quad-channel DDR5 and more LLC; 64 MB, 96 MB, or even 128 MB would suffice and make the APU look not that bad.
My guess:
They don't want to scare away their two top-paying, large-volume customers - m$ and Sony - by making a competing product that is too similar. Maybe they even have some legal clause about that.

If that wasn't the case, instead of quad DDR5 and/or in addition to large cache, they could easily use GDDR[6-7]* for unified system memory.
 
Joined
Sep 19, 2023
Messages
29 (0.06/day)
System Name IdeaPad Gaming 3 15ARH7
Processor Ryzen 5 6600H
Memory 16GB DDR5 4800MHz CL34
Video Card(s) RTX 3050 4GB Laptop
Storage Samsung 980 Pro 1TB
Audio Device(s) Moondrop Quarks 3.5mm
Mouse Logitech G304
So without upscaling, the 370 can't keep up with the Intel 258V.
lol, AMD should just make a CPU comparison instead of these messy slides

My guess:
They don't want to scare away their 2 top paying large volume customers - m$ and Sony - by making a competing product that is too similar. Maybe they even have some legal clause about that.

If that wasn't the case, instead of quad DDR5 and/or in addition to large cache, they could easily use GDDR[6-7]* for unified system memory.
Wait, can GDDR be used for the CPU?

I thought the G stands for Graphics, and also these graphics DDR chips often have stupidly higher latency than normal DDR.
 
Joined
Apr 9, 2013
Messages
318 (0.07/day)
Location
Chippenham, UK
System Name Hulk
Processor 7800X3D
Motherboard Asus ROG Strix X670E-F Gaming Wi-Fi
Cooling Custom water
Memory 32GB 3600 CL18
Video Card(s) 4090
Display(s) LG 42C2 + Gigabyte Aorus FI32U 32" 4k 120Hz IPS
Case Corsair 750D
Power Supply beQuiet Dark Power Pro 1200W
Mouse SteelSeries Rival 700
Keyboard Logitech G815 GL-Tactile
VR HMD Quest 2
Jfc AMD. You have a good, competitive product, & you go & absolutely screw up the marketing AGAIN by creating the most misleading slide possible. This will only hurt them; the tech press are going to have a field day with this when they detail exactly why it is so consumer-unfriendly. They deserve a massive grilling for this one, to prevent not only them but the other big players from using this BS approach to marketing again. Yes, Nvidia started this with performance comparisons using DLSS on, & they deserved more of a grilling than they got for that, but from memory their graphs at least spelt out very clearly on every chart itself that they were using DLSS for those numbers. Compare that to the first one here, where the chart in isolation doesn't mention FSR/AFMF; it barely even hints at it in the key at the bottom! The other two charts are fine imho, they do clearly label how they got to their numbers, but that first one is just awful.
 

Rubinhood

New Member
Joined
Feb 14, 2023
Messages
10 (0.01/day)
Wait, can GDDR be used for the CPU?
Depends on which CPU. Standard desktop Intel / AMD processors can only drive DDR modules, while the CPUs in recent PlayStations and Xboxes are designed to work with GDDR instead. So it's a question of CPU design.
To my knowledge, there's little to no *technical* reason that AMD couldn't make a PS/Xbox-style powerful APU for desktops.
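To illustrate why a console-style unified GDDR pool is tempting despite the latency concern raised above, here is a rough peak-bandwidth comparison using generic configurations (a PS5-class 256-bit GDDR6 setup versus a typical dual-channel DDR5 desktop), chosen purely for illustration:

```python
# Peak bandwidth: a typical desktop DDR5 setup vs. a console-style unified GDDR6 pool.
# Illustrative configurations only; GDDR trades latency for this extra bandwidth.

def bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: float) -> float:
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

ddr5_dual_channel = bandwidth_gbs(128, 5600)   # ~90 GB/s
gddr6_unified     = bandwidth_gbs(256, 14000)  # ~448 GB/s, a PS5-class configuration

print(f"Dual-channel DDR5-5600:  {ddr5_dual_channel:.0f} GB/s")
print(f"256-bit GDDR6 @ 14 Gbps: {gddr6_unified:.0f} GB/s")
```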
 