Thursday, November 14th 2024

AMD Claims Ryzen AI 9 HX 370 Outperforms Intel Core Ultra 7 258V by 75% in Gaming

AMD has published a blog post about its latest Ryzen AI 300 series processors, claiming they are changing the game for portable devices. To back these claims, Team Red has compared its Ryzen AI 9 HX 370 processor to Intel's latest Core Ultra 7 258V, using the following games: Assassin's Creed Mirage, Baldur's Gate 3, Borderlands 3, Call of Duty: Black Ops 6, Cyberpunk 2077, Doom Eternal, Dying Light 2 Stay Human, F1 24, Far Cry 6, Forza Horizon 5, Ghost of Tsushima, Hitman 3, Hogwarts Legacy, Shadow of the Tomb Raider, Spider-Man Remastered, and Tiny Tina's Wonderlands. The conclusion was that AMD's Ryzen AI 9 HX 370, with its integrated Radeon 890M graphics, outperformed the Intel "Lunar Lake" Core Ultra 7 258V with Intel Arc Graphics 140V by 75% on average.
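As a rough illustration of how a single headline average like that is typically produced, per-game ratios are usually combined with a geometric mean; the sketch below uses hypothetical numbers, not AMD's data:

```python
from math import prod

# Hypothetical per-game FPS results: (Ryzen AI 9 HX 370, Core Ultra 7 258V).
# Placeholder numbers for illustration only, not AMD's measurements.
results = {
    "Game A": (63, 35),
    "Game B": (90, 52),
    "Game C": (71, 41),
}

ratios = [amd / intel for amd, intel in results.values()]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Average uplift: {(geomean - 1) * 100:.0f}%")  # ~75% with these numbers
```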

To support this performance leap, AMD also leans on software technologies, including FidelityFX Super Resolution 3 (FSR 3) and HYPR-RX, to unlock additional performance and gaming efficiency. FSR 3 alone enhances visuals in over 95 games, while HYPR-RX, with features like AMD Fluid Motion Frames 2 (AFMF 2) and Radeon Anti-Lag, provides substantial performance boosts across thousands of games. The company has also compared its FSR/HYPR-RX combination with Intel's XeSS, which is available in around 130 games. AMD claims its broader suite supports 415+ games and is optimized for smoother gameplay. AFMF 2 alone claims support for thousands of titles, while Intel's GPU software stack lacks a comparison point. Of course, these marketing claims are to be taken with a grain of salt, so independent testing remains the best way to compare the two.

Comparing pure specifications, the AMD Ryzen AI 9 HX 370 and Intel Core Ultra 7 258V each employ a hybrid core architecture, but AMD's design delivers more total cores and threads. The Ryzen AI 9 HX 370 boasts 12 cores (four performance and eight efficiency) with 24 threads, while the Core Ultra 7 258V features eight cores and eight threads. In terms of cache, AMD's Ryzen processor includes a substantial 24 MB of shared L3 cache, supported by 1 MB of L2 cache per core. On the Lunar Lake chip, each P-core is equipped with 192 KB of L1 cache and 2.5 MB of L2 cache, with the P-cores sharing a 12 MB L3 cache; each E-core has 96 KB of L1 cache, with 4 MB of L2 cache shared per E-core module.

For graphics, the Ryzen AI 9 HX 370 integrates the Radeon 890M, which uses the RDNA 3.5 architecture with 16 compute units running at up to 2.9 GHz. This delivers impressive capability for an integrated GPU, in contrast to Intel's Core Ultra 7 258V, whose Arc Graphics 140V is based on the Xe2 architecture, a capable option but generally less optimized for games than AMD's graphics. The Arc Graphics 140V packs eight Xe2 cores clocked at up to 1.95 GHz.
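For a back-of-the-envelope sense of the raw shader throughput behind those specs (assuming 64 dual-issue FP32 lanes per RDNA 3.5 CU and 128 FP32 lanes per Xe2 core; peak TFLOPS translate poorly to real frame rates):

```python
def tflops(fp32_lanes, ghz, ops_per_clock=2):  # 2 ops/clock via fused multiply-add
    return fp32_lanes * ops_per_clock * ghz / 1000

# Radeon 890M: 16 CUs x 64 lanes (x2 if dual-issue FP32 is counted)
print(tflops(16 * 64, 2.9))      # ~5.9 TFLOPS single-issue
print(tflops(16 * 64 * 2, 2.9))  # ~11.9 TFLOPS counting dual-issue

# Arc Graphics 140V: 8 Xe2 cores x 128 lanes (assumed)
print(tflops(8 * 128, 1.95))     # ~4.0 TFLOPS
```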
Source: AMD

21 Comments on AMD Claims Ryzen AI 9 HX 370 Outperforms Intel Core Ultra 7 258V by 75% in Gaming

#1
john_
AMD is having a problem making its integrated solutions faster, and Intel is catching up. AMD needs Frame Generation to stay ahead, and this should be an alarm for them to wake up and find solutions to overcome the known bottlenecks that iGPUs face. Or Intel might do that in the future, and AMD could fall behind. AMD has a habit of losing an advantage against the competition, and unfortunately they seem to be doing the same with integrated graphics.
#2
Chane
Those are some pretty misleading graphs. Their raw gaming performance is pretty evenly matched, and if XeSS is such a hog, Intel users could use FSR 2/3 as well.
#3
Squared
Testing from reviewers suggests to me that this claim is absurd. The 258V beats the 370 in many games, and while it might lose overall, it's only by a little. FSR may be an edge for AMD, but FSR frame generation is highly situational; since frame generation introduces latency, it should always be turned off if you need a responsive game such as Rocket League. Moreover, as I recall, XeSS is better at upscaling than FSR; it's just lacking frame generation and wider support in games.
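Back-of-the-envelope on the latency point: interpolation has to buffer at least one real frame before presenting the generated one, so under that simplified model the added delay is roughly one base frame-time:

```python
# Simplified model: interpolation buffers one real frame before presenting
# the generated one, so input lag grows by roughly one base frame-time.
for base_fps in (30, 60, 120):
    print(f"{base_fps:>3} fps base -> ~{1000 / base_fps:.1f} ms extra latency")
```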

Lastly, why compare the 258V to the 370? The 370 is AMD's top model. The 268V and the 288V would both do better than the 258V. I think it'd be fair to compare the 258V to the 365.
#4
fevgatos
Squared: Testing from reviewers suggests to me that this claim is absurd. The 258V beats the 370 in many games, and while it might lose overall, it's only by a little. FSR may be an edge for AMD, but FSR frame generation is highly situational; since frame generation introduces latency, it should always be turned off if you need a responsive game such as Rocket League. Moreover, as I recall, XeSS is better at upscaling than FSR; it's just lacking frame generation and wider support in games.

Lastly, why compare the 258V to the 370? The 370 is AMD's top model. The 268V and the 288V would both do better than the 258V. I think it'd be fair to compare the 258V to the 365.
AMD is using two frame generators stacked on top of each other (FSR FG and AFMF) to get to the 75% they are claiming :kookoo:
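If two interpolation passes really were stacked like that, the presented frame rate compounds multiplicatively while the rendered frame rate stays flat (illustrative numbers):

```python
rendered_fps = 40          # hypothetical real rendered frame rate
fsr_fg, afmf = 2.0, 2.0    # each interpolation pass roughly doubles output
print(rendered_fps * fsr_fg * afmf)  # 160.0 presented "fps" from 40 real frames/s
```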
#5
AnotherReader
john_: AMD is having a problem making its integrated solutions faster, and Intel is catching up. AMD needs Frame Generation to stay ahead, and this should be an alarm for them to wake up and find solutions to overcome the known bottlenecks that iGPUs face. Or Intel might do that in the future, and AMD could fall behind. AMD has a habit of losing an advantage against the competition, and unfortunately they seem to be doing the same with integrated graphics.
They have the solution; they just refuse to implement it. Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU. A 16 MB LLC for the regular APUs would only increase die size by about 3% and improve performance significantly. As for this claim, it's really dishonest to include frame generation when only one of the products can use it.
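A quick sanity check on that ~3% figure, using rough public ballpark numbers rather than anything from AMD: high-density SRAM on a ~4 nm-class node lands around 0.4-0.5 mm² per MB including array overhead, and Strix Point is roughly a 230 mm² die:

```python
sram_mm2_per_mb = 0.45  # rough high-density SRAM estimate, ~4 nm-class node
llc_mb = 16
die_mm2 = 230           # approximate Strix Point die size
extra = llc_mb * sram_mm2_per_mb
print(f"+{extra:.1f} mm2 -> +{extra / die_mm2 * 100:.1f}% die area")  # ~3%
```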
#6
TheinsanegamerN
john_: AMD is having a problem making its integrated solutions faster, and Intel is catching up. AMD needs Frame Generation to stay ahead, and this should be an alarm for them to wake up and find solutions to overcome the known bottlenecks that iGPUs face. Or Intel might do that in the future, and AMD could fall behind. AMD has a habit of losing an advantage against the competition, and unfortunately they seem to be doing the same with integrated graphics.
Frame generation degrades image quality, ESPECIALLY FSR, which still has significant issues with blurriness and latency. If that is the best solution AMD can come up with, they're gonna get royally screwed.

The actual solution has already been presented: wider memory buses, larger GPU core counts, and X3D cache. AMD simply refuses to implement these solutions en masse. We've seen the sheer difference X3D makes on desktop chips, where even the meager 2 CU iGPUs see major performance increases. This will only escalate with larger chips.
fevgatos: AMD is using two frame generators stacked on top of each other (FSR FG and AFMF) to get to the 75% they are claiming :kookoo:
God, if I wanted to rub vaseline on my eyeballs and play at 640x480, I could just do that!
#7
R0H1T
They're getting around to it, though they probably only have a 1-2 year window within which they can make a lot (?) of money from Halo. If the leather jacket guy releases the rumored ARM chips for Windows, also competing with QC, then their (x86) advantage instantly vanishes!
AnotherReader: Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU.
They really should've thought about this ~5 years back. Many of us have been talking about such chips since the PS4 days, but I guess it took a whole Apple to get those dumb arses to see the light :ohwell:
#8
Vya Domus
It's good seeing them plaster everything with FSR 3 and AFMF; Nvidia started this idiotic marketing strategy, and AMD needs to make the best of it.

Screw it, have 1 gazillion FPS, we da best.
#9
Kodehawa
This is so misleading lmao, I hate the trend of using frame generation as part of a comparison.
#10
ymdhis
AnotherReader: They have the solution; they just refuse to implement it. Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU. A 16 MB LLC for the regular APUs would only increase die size by about 3% and improve performance significantly. As for this claim, it's really dishonest to include frame generation when only one of the products can use it.
I read somewhere that the APUs after Strix may feature X3D cache (on monolithic chips), which would be massive. If only they'd release new APUs more often than every 2-3 years, but I guess there's more money in server chips they can repurpose for desktops.
#11
R0H1T
Pretty sure top-end Halos would have great margins as well; you're replacing an Nvidia GPU + potentially an Intel CPU with this! It's a win/win for OEMs, at least till M4 (MBP) prices drop next year.

The great thing about servers is volume + margins, not necessarily just the latter.
#12
phints
AMD, why don't you stay on our good side and not bullshit us? We know you are using frame generation and other nonsense to lie about your FPS. Just put some V-Cache on your mobile CPUs too; they'd run way more efficiently in gaming.
#13
fevgatos
phints: AMD, why don't you stay on our good side and not bullshit us? We know you are using frame generation and other nonsense to lie about your FPS. Just put some V-Cache on your mobile CPUs too; they'd run way more efficiently in gaming.
Bad idea, unless you are talking about a desktop-replacement laptop that will be plugged in 24/7 - their current monolithic mobile chips are way more efficient in gaming than their current desktop X3D chips.
#14
AnotherReader
R0H1T: They're getting around to it, though they probably only have a 1-2 year window within which they can make a lot (?) of money from Halo. If the leather jacket guy releases the rumored ARM chips for Windows, also competing with QC, then their (x86) advantage instantly vanishes!

They really should've thought about this ~5 years back. Many of us have been talking about such chips since the PS4 days, but I guess it took a whole Apple to get those dumb arses to see the light :ohwell:
Ironically, AMD used a large dedicated cache for the GPU long before Apple: the Xbox One had 32 MB of on-chip memory for the IGP.
#15
lilhasselhoffer
So...I think this is really scummy. AMD is living down to the criticism some individuals level at it, because using frame generation to create interpolated frames and claiming that's raw performance is absolutely lying in this instance. It's trying to claim your processors are magically better and to sell based on a blatant lie.

All of this said, I think it will be scarier now when I only see low teens on the generational performance gaps...because it'll remind me exactly what shenanigans can be pulled behind the scenes. If AMD had claimed they were 20-30% faster, I'd have bought it without immediately calling them out...and it would have taken effort to prove them wrong. When they claim such a silly discrepancy, it immediately raises the hackles and makes you feel cheated. That's just a bad showing.
#16
3valatzy
Chane: Those are some pretty misleading graphs. Their raw gaming performance is pretty evenly matched, and if XeSS is such a hog, Intel users could use FSR 2/3 as well.
Because the bottleneck is RAM throughput, which is the same for both platforms. Dual channel means slow graphics.
AnotherReader: They have the solution; they just refuse to implement it. Strix Halo, in addition to its wider memory interface, has 32 MB of LLC (last level cache) dedicated to the GPU. A 16 MB LLC for the regular APUs would only increase die size by about 3% and improve performance significantly. As for this claim, it's really dishonest to include frame generation when only one of the products can use it.
ymdhis: I read somewhere that the APUs after Strix may feature X3D cache (on monolithic chips), which would be massive. If only they'd release new APUs more often than every 2-3 years, but I guess there's more money in server chips they can repurpose for desktops.
They refuse because they don't want to; who knows why on Earth they don't want to?
They need triple- or quad-channel DDR5 and more LLC; 64 MB, 96 MB, or even 128 MB would suffice and make the APU look not that bad.
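The bandwidth arithmetic behind that request (illustrative configurations, peak theoretical numbers):

```python
def gb_per_s(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000  # bytes per transfer x transfers/s

print(gb_per_s(128, 7500))  # dual-channel LPDDR5X-7500:  120.0 GB/s
print(gb_per_s(192, 7500))  # triple-channel:             180.0 GB/s
print(gb_per_s(256, 8000))  # quad-channel (Halo-class):  256.0 GB/s
```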
#17
Darc Requiem
AnotherReader: Ironically, AMD used a large dedicated cache for the GPU long before Apple: the Xbox One had 32 MB of on-chip memory for the IGP.
To further your point, this philosophy extends back to the ATI days. The ATI-designed GPU for the 360 had 10 MB of on-chip memory, and prior to that, their GPU design for the GameCube had 3 MB on the chip. Come to think of it, IIRC the Wii U's GPU had 32 MB of on-chip memory as well.
#18
ymdhis
Darc Requiem: To further your point, this philosophy extends back to the ATI days. The ATI-designed GPU for the 360 had 10 MB of on-chip memory, and prior to that, their GPU design for the GameCube had 3 MB on the chip. Come to think of it, IIRC the Wii U's GPU had 32 MB of on-chip memory as well.
I don't know about the Nintendo ones, but the X360 eDRAM was off-chip. It was a separate die sitting next to the GPU.
#20
SOAREVERSOR
ymdhis: I don't know about the Nintendo ones, but the X360 eDRAM was off-chip. It was a separate die sitting next to the GPU.
  • 243 MHz graphics chip
  • 3 MB embedded GPU memory (eDRAM)
    • 2 MB dedicated to Z-buffer and framebuffer
    • 1 MB texture cache
  • 24 MB 1T-SRAM @ 486 MHz (3.9 GB/s) directly accessible for textures and other video data
  • Fixed function pipeline (no support for programmable vertex or pixel shaders in hardware)
  • Texture Environment Unit (TEV) - capable of combining up to 8 textures in up to 16 stages or "passes"
  • ~30 GB/s internal bandwidth^
  • ~18 million polygons/second^
  • 972 Mpixels/s peak pixel fillrate
#21
Rubinhood
3valatzy: They refuse because they don't want to; who knows why on Earth they don't want to?
They need triple- or quad-channel DDR5 and more LLC; 64 MB, 96 MB, or even 128 MB would suffice and make the APU look not that bad.
My guess:
They don't want to scare away their two top-paying, large-volume customers - M$ and Sony - by making a competing product that is too similar. Maybe they even have some legal clause about that.

If that weren't the case, instead of quad-channel DDR5, and/or in addition to a large cache, they could easily use GDDR[6-7]* for unified system memory.
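For scale, GDDR's much higher per-pin data rate is what makes that attractive (console-style configurations, rough peak figures):

```python
def gb_per_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(gb_per_s(256, 14))  # PS5-style 256-bit GDDR6 @ 14 Gbps: 448.0 GB/s
print(gb_per_s(128, 18))  # narrow 128-bit GDDR6 @ 18 Gbps:    288.0 GB/s
```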