Thursday, September 14th 2017

AMD Raven Ridge Ryzen 5 2500U with Vega Graphics APU Geekbench Scores Surface

A Geekbench entry has just surfaced for AMD's upcoming Raven Ridge APUs, which bring both Vega graphics and Ryzen CPU cores to AMD's old "the future is Fusion" mantra. The APU in question is tagged as AMD's Raven Ridge-based Ryzen 5 2500U, which pairs 4 Zen cores and 8 threads (via SMT) running at 2.0 GHz with AMD's Vega graphics.

According to Geekbench, the Ryzen APU scores 3,561 points in the single-core test and 9,421 points in the multi-core test. Compared to AMD's A12-9800, which also packs 4 cores (albeit limited to 4 threads) running at almost double the frequency of this Ryzen 5 2500U (3.8 GHz vs. the Ryzen's 2 GHz), that's 36% better single-core performance and 48% better multi-core performance. These are fantastic results, and they show how much AMD has managed to improve its CPU (and in this case, APU) design over its Bulldozer-based iterations.
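As a rough sanity check of those percentages, the snippet below derives the A12-9800 baseline scores they imply; the baselines (~2,620 single-core, ~6,370 multi-core) are inferred from the quoted deltas, not taken from a specific Geekbench entry.

```python
# Back-of-the-envelope check of the quoted deltas. The A12-9800 baselines
# are implied by the ~36%/~48% figures above, not measured values.
ryzen_single, ryzen_multi = 3561, 9421
a12_single, a12_multi = 2620, 6370   # assumed/implied baselines

single_gain = ryzen_single / a12_single - 1
multi_gain = ryzen_multi / a12_multi - 1
print(f"single-core: +{single_gain:.0%}, multi-core: +{multi_gain:.0%}")
# -> single-core: +36%, multi-core: +48%
```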
Source: Guru3D

54 Comments on AMD Raven Ridge Ryzen 5 2500U with Vega Graphics APU Geekbench Scores Surface

#27
XiGMAKiD
Looks like it's gonna be a nice APU for laptops and AIO desktops
#28
chaosmassive
If only AMD could make room for some L3 cache; it doesn't have to be as large as on its Ryzen counterparts.
I believe APU performance could be improved even further,
but the GPU cluster has to fit inside the silicon, so it makes sense to discard the L3.
#29
sergionography
OK, so here's some nerdy analysis/questions.
The desktop Ryzen chip is around 192 mm² in die size, and this will have half the cores and half the L3 cache? Or will AMD skip the L3, as they always have on APUs?
Also, is this the same Zen core revision, or is it slightly improved, as APUs were in the past (half a generation/revision ahead)?
Also, will AMD scrap their old heterogeneous APU interconnect now that they have Infinity Fabric? Or is Infinity Fabric the final result of all the experience and development from HSA? I'm certain Infinity Fabric will be incorporated, as both Vega and Zen are compatible with it, but how does that go along with HSA?
In the past, the whole HSA interconnect seemed to take massive die space that could've been way more useful to end users had it been used for GPU cache/memory, so I am hoping AMD made the right choices this time around.
#30
Unregistered
sergionography: OK, so here's some nerdy analysis/questions.
The desktop Ryzen chip is around 192 mm² in die size, and this will have half the cores and half the L3 cache? Or will AMD skip the L3, as they always have on APUs?
Also, is this the same Zen core revision, or is it slightly improved, as APUs were in the past (half a generation/revision ahead)?
Also, will AMD scrap their old heterogeneous APU interconnect now that they have Infinity Fabric? Or is Infinity Fabric the final result of all the experience and development from HSA? I'm certain Infinity Fabric will be incorporated, as both Vega and Zen are compatible with it, but how does that go along with HSA?
In the past, the whole HSA interconnect seemed to take massive die space that could've been way more useful to end users had it been used for GPU cache/memory, so I am hoping AMD made the right choices this time around.
My guess is that for a sub-$150 series there won't be Infinity Fabric....
I do think they will do L3, but likely cut to something like 3 MB...

I'm more excited about these than I am about the HEDT lines.
#31
Frick
Fishfaced Nincompoop
Steevo: While I can't disagree that at very low resolutions and settings it seems to work fine, at 1080p it fell flat on its face consistently compared to a device with a 40 W higher power budget, the Zbox Magnus 970, which offered a minimum of 2x the performance at 1080p and usually much more than that. So are we talking about playing a game on a screen with terrible resolution and only basic settings? I have Iris in the new laptop I'm posting this on; I tried Steam for a few games and hated it so much I went back to my phone.

My phone can play games at 1080p (GTA3, for example) over HDMI to my TV or via DLNA and offers the same performance in these lightweight games, as do most tablets, again rendering the Intel graphics virtually pointless from a 3D standpoint.
To put this in perspective, the GTX 1080 pulls ~30 W more than the GTX 1070. And the Zbox Magnus 970 has a GTX 960 in it. If you compare an IGP to any kind of dedicated GPU you're doing it wrong and will always be disappointed. As for your questions, the answer is yes if it's a modern game. Obviously it's yes.

Also, saying a GPU in a laptop is useless because your phone or tablet can play games too is ... I don't understand your point, actually. What is your point? Can you play Fallout New Vegas on your phone? Can you play WoW or Overwatch on your phone? What about Dota? And I honestly don't believe a phone GPU is as powerful as a high-end Iris GPU.
#32
Assimilator
Call me back when there are benchmarks of the GPU performance of this chip. The CPU being Zen-derived, we all knew it was going to shit on AMD's previous APUs, but the Vega in Raven Ridge is a much bigger and more important question mark.
#33
R0H1T
Gasaraki: What about compared to Intel Iris?
Iris sells in what, $2k MacBooks? This isn't remotely comparable, since there isn't dedicated HBM or eDRAM/eSRAM to feed the GPU, and the price is much lower.
This is the equivalent of an i5 in regular laptops, except that the IGP should be much faster.
Assimilator: Call me back when there are benchmarks of the GPU performance of this chip. The CPU being Zen-derived, we all knew it was going to shit on AMD's previous APUs, but the Vega in Raven Ridge is a much bigger and more important question mark.
There are some in Geekbench you can look up:
browser.geekbench.com/v4/compute/search?utf8=✓&q=2500u

It's comparable to some of the previous-gen Iris parts:
browser.geekbench.com/v4/compute/search?utf8=✓&q=Iris+graphics
#34
FairNando
Well, this APU is 36% faster in single-core performance and 48% faster in multi-threaded performance at just about half of the 9800's clock speed. That means at the same clock speeds the new APU can almost triple the 9800's multi-threaded performance... wow.
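For what it's worth, here is that estimate spelled out; it assumes performance scales linearly with clock speed, which is optimistic, so treat it as an upper bound rather than a prediction.

```python
# Rough same-clock scaling estimate, assuming linear scaling with frequency.
ryzen_clock_ghz = 2.0        # Ryzen 5 2500U (this Geekbench entry)
a12_clock_ghz = 3.8          # A12-9800
multi_gain_at_stock = 1.48   # 2500U vs. A12-9800, multi-core

same_clock_ratio = multi_gain_at_stock * (a12_clock_ghz / ryzen_clock_ghz)
print(f"~{same_clock_ratio:.1f}x the A12-9800's multi-threaded score at equal clocks")
# -> ~2.8x, i.e. "almost triple"
```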
#35
Valantar
This is interesting, although that clock speed has me a little bit worried. Of course, this is an ES, but for RR to really work AMD has to roughly match the clock speeds of Intel's new 15W 4c8t chips - which score far higher in the same benchmark. Oddly, the difference is bigger in the MT results, which in my mind ought to be more similar as Intel typically has better 1-core boost, while all-core boost and base frequencies are more comparable.

As for the GPU, assuming it won't have IF seems odd to me. IF is integral to the design of Ryzen - its PCIe lanes double as IF lanes, after all. Why on earth would they disable this and tack on an older GPU interconnect? That doesn't seem to make sense. Or are you saying that it would be cheaper/easier to entirely redesign the PCIe part of RR compared to Ryzen, to exclude IF? Again: that seems highly unlikely. For me, the only question is how wide the IF bus between CPU and GPU will be - will they go balls-to-the-wall, or tone it down to reduce power consumption? IF is supposedly very power efficient, so it could still theoretically be a very wide and fast bus.

Another question: as RR has Vega graphics, does it have a regular memory controller, or an HBCC? If GPU memory bandwidth and latency are negatively affected by having to route memory access through the CPU's memory controller and the CPU-GPU interconnect, wouldn't it then make sense to use an HBCC with a common interface to both parts of the chip (such as IF)? Is the HBCC too power hungry or physically large to warrant use in an APU?
#36
SL2
sweet: Geekbenches for mobile and PC are different things. Those titles like "iPhone is a better PC" are pure clickbait, fabricated only for tech-illiterate people.
Not in Geekbench 4.
#37
Frick
Fishfaced Nincompoop
Valantar: This is interesting, although that clock speed has me a little bit worried. Of course, this is an ES, but for RR to really work AMD has to roughly match the clock speeds of Intel's new 15W 4c8t chips - which score far higher in the same benchmark.
Remember that early Ryzens were clocked very low as well. And IMO the APUs - the faster ones anyway - don't have to directly compete with Intel in raw CPU speed. I'd rather they dedicated some of that TDP to the GPU instead.
#38
TheinsanegamerN
Frick: Honestly I don't think they did, depending on the model. The problem was they were almost exclusively used in power-starved laptops with gimped memory, so everyone had a bad experience with them. Yeah, the CPU side was miles behind Intel, but there were compelling APUs that would offer good all-around performance ... if anyone would base a system around them. You could make decent small machines with some of their FM2+ APUs.
The memory gimping was also on AMD's shoulders. A reminder: AMD's best Bristol Ridge desktop chip only gets 11.2 GB/s on dual-channel DDR4-2400. Intel gets over 30 GB/s with the same memory kit. Why would manufacturers bother with dual-channel memory when the APU couldn't handle it in the first place?
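For context, the theoretical peak of dual-channel DDR4-2400 works out as follows; this is a rough ceiling, not a measured figure, and real-world copy bandwidth lands below it. The quoted 11.2 GB/s is well under a third of that ceiling, while the 30+ GB/s Intel figure gets reasonably close to it.

```python
# Theoretical peak bandwidth for dual-channel DDR4-2400.
transfers_per_s = 2400e6   # DDR4-2400: 2400 MT/s
bytes_per_xfer = 8         # 64-bit channel
channels = 2
peak_gb_s = transfers_per_s * bytes_per_xfer * channels / 1e9
print(f"{peak_gb_s:.1f} GB/s theoretical peak")   # -> 38.4 GB/s
```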

As for power, they were asked to run in the same envelope that Intel runs at. Bulldozer is incapable of scaling with power properly, and we were left with garbage.
Also there's managing expectations. Obviously a used dedicated GPU will be faster and probably cheaper, but still.
Keep in mind that the dGPU in question was weaker in every way compared to AMD's APU: fewer cores, lower clock speed, half the memory bus, slower VRAM.

Yet it was 60+% faster than the APU. It really showed how badly Bulldozer gimped the iGPU's performance, and why OEMs just didn't bother with it; Bulldozer was a junk chip.

None of this was an issue with Llano.
#39
Vya Domus
TheinsanegamerN: What is known is that Bulldozer APUs sucked HARD.
They didn't, they took a crap on Intel's integrated graphics so much that even with an i7, in many games those APUs outperformed it.
#40
_JP_
If the mobile versions of this have cTDP again... :shadedshu:
#41
TheinsanegamerN
Vya Domus: They didn't, they took a crap on Intel's integrated graphics so much that even with an i7, in many games those APUs outperformed it.
*In games that were not CPU-dependent. Because any game that required a decent CPU was hamstrung on Bulldozer. Especially in power-limited laptops.

Or any games that required decent memory bandwidth (see RTS games in particular).

In any non-gaming task the Bulldozers got destroyed. And battery life was far worse.
#42
Vya Domus
TheinsanegamerN: *In games that were not CPU-dependent. Because any game that required a decent CPU was hamstrung on Bulldozer. Especially in power-limited laptops.

Or any games that required decent memory bandwidth (see RTS games in particular).
Nope, none of that mattered. Intel's iGPUs are simply utter garbage; the GPU is by far the biggest bottleneck in these situations.

Take a look at this: GTA 5, a game that is known not only to use a lot of CPU but also to blatantly favor Intel CPUs.



Yeah...

The GPUs AMD puts inside their APUs are several times better; the gap in terms of GPU power is so big it didn't matter that they had inferior CPUs and memory bandwidth.

That being said, I expect the new APUs with Vega cores to bury Intel's iGPUs. Intel seriously needs to reconsider their strategy with these things; there is no point in dedicating so much die space on every single chip to something that is useless. Just limit these things to basic display adapters.
#43
Valantar
Frick: Remember that early Ryzens were clocked very low as well. And IMO the APUs - the faster ones anyway - don't have to directly compete with Intel in raw CPU speed. I'd rather they dedicated some of that TDP to the GPU instead.
While you have a point, the context is different. Ryzen ES was new silicon from an unknown arch on an unknown node (although with a rather large TDP window). Conservative clocks make sense in that case, simply to make sure everything is all right. And while RR is new silicon, the arch (at least for the CPU part and IF) is quite well tested by now, and the process is far more mature and well known. As such, it seems logical to me for RR ES to clock higher - although stricter power limits do make a point against this. On the other hand, if they can sustain 2 GHz on 4 cores at 15 W, that's amazing (certainly competitive with "8th"-generation Core), but my question is then how high they can push 1-/2-core boost. I'd say ~3.2 GHz is the minimum for this to do well; the closer to 4 GHz the better.

Of course, what I want most of all is a 25-35 W cTDP-up mode (or just high-TDP SKUs) that favors the GPU explicitly.
#44
Assimilator
Vya Domus: Intel seriously needs to reconsider their strategy with these things; there is no point in dedicating so much die space on every single chip to something that is useless. Just limit these things to basic display adapters.
"Ability to play most games at really low settings" is a vital marketing checkbox that Intel cannot afford to leave unticked, regardless of how poorly their iGPUs perform. Their CPU performance has always been superior enough to AMD's that they've gotten away with it so far, but Zen performs well enough to nullify that advantage, and thus the heat is definitely going to be on Intel to up their iGPU game - assuming Raven Ridge is a repeat of Zen's triumph.

Considering that Vega is a massive GPU and getting it integrated into a single package with Zen is not going to be a simple process, it may very well end up that AMD has to run the clocks on both CPU and GPU at really low numbers so they don't suffer a meltdown when run together. Then there's the massive unanswered question of how Vega will perform when it has to take the massive latency and bandwidth hit of going to shared system memory, as opposed to dedicated HBM2.

AMD has thrown a left hook at Intel with Zen; making Raven Ridge work would be the right hook that could potentially floor the giant. At the very least, Intel would have to do something drastic about its iGPUs... I wonder if they're already talking to NVIDIA?
#45
Vya Domus
Assimilator"Ability to play most games at really low settings"
Except their GPUs can't even do that properly in most cases. Look at the graph I posted , it illustrates how ridiculous the situation is , most games are simply unplayable on Intel's iGPUs. I am not saying they should ditch the integrated graphics , just reduce it to the bare minimum and be done with it , make room for more cores/cache.
AssimilatorConsidering that Vega is a massive GPU and getting it integrated into a single package with Zen is not going to be a simple process, it may very well end up that AMD has to run the clocks on both CPU + GPU at really low numbers to make them not suffer a meltdown when run together.
Vega 10 is massive , whatever is going to be inside Raven Ridge wont be , 512 shaders would only occupy ~60-70 mm^2. Vega is actually , just as Polaris , very power efficient at lower clocks , 512 NCUs at ~1400mhz is totally feasible. As far as memory bandwidth goes, you got to remember that this GPU will have a fraction of the instruction throughput of Vega 10 so they can get away with much lower memory speed requirements.
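To put a hypothetical 512-shader part at ~1.4 GHz in perspective, its peak FP32 throughput would be in the ballpark below (standard 2 FLOPs per shader per clock for FMA; the shader count and clock are the assumptions above, not confirmed specs).

```python
# Peak FP32 throughput for a hypothetical 512-shader, ~1.4 GHz iGPU.
shaders = 512
clock_hz = 1.4e9
flops_per_clock_per_shader = 2      # fused multiply-add
peak_tflops = shaders * flops_per_clock_per_shader * clock_hz / 1e12
print(f"~{peak_tflops:.2f} TFLOPS FP32")   # -> ~1.43 TFLOPS
```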
Assimilator: I wonder if they're already talking to NVIDIA?
Highly doubt it. Nvidia's biggest rival right now is Intel, not AMD. They won't license anything to them any time soon. A capable GPU based off Nvidia's designs would potentially mean a better compute GPU, which Intel might use against them.

Simply put, I don't see how Intel can come up with competitive integrated graphics; they haven't been able to do it for ages. They lack the know-how, most likely - GPU architects are very scarce and most end up being snatched by AMD and Nvidia anyway.

I mean, just look at this: a die shot of an i7-6700. That GPU occupies an insane amount of space, yet its performance is so bad in comparison. Their efficiency in using die space for their GPUs is rock bottom compared to both AMD and Nvidia.

#46
GoldenX
FX APUs run the CPU as low as under 2 GHz during heavy IGP usage, and still they were better at gaming than a 7700K + IGP, so I don't think a low-speed Ryzen is a problem for future APUs.

The only way Nvidia would sit at a table with Intel is with x86 licenses at the center.
#47
EarthDog
Come again on the size comparatively? Seems similar. But alas, you said efficiency or performance out of that size. :)



#48
Vya Domus
EarthDog: Come again on the size comparatively? Seems similar. But alas, you said efficiency or performance out of that size
That was my point: similar size (relative to the entire chip) but not similar performance. Hence worse efficiency in using die space.
#49
Assimilator
Vya Domus: Highly doubt it. Nvidia's biggest rival right now is Intel, not AMD. They won't license anything to them any time soon. A capable GPU based off Nvidia's designs would potentially mean a better compute GPU, which Intel might use against them.
Have to disagree with you on this. NVIDIA's ARM-based CPUs are playing in a completely different space than Intel's x86 ones, plus there is already significant synergy between the two, in that the absolute fastest systems are Intel CPUs coupled with NVIDIA GPUs.
GoldenX: The only way Nvidia would sit at a table with Intel is with x86 licenses at the center.
Maybe, maybe not. At this point NVIDIA has invested so much into ARM that x86 simply may not make sense for them anymore. Certainly, even if they were to acquire an x86 license, they would be starting at absolutely rock bottom, and whatever x86 CPU they designed would take multiple generations to merely be competitive with Intel or AMD. That's a lot of time and money to invest.

I was thinking more of a straight licensing situation, whereby Intel CPUs are allowed to integrate NVIDIA's GT 1030 GPU under the following conditions:

* NVIDIA won't allow the CPU-with-GT 1030-iGPU to go into production unless they are satisfied with the performance (in other words, they are sure that it won't damage their brand name)
* If the hybrid chip does enter production, Intel has to shutter their own integrated graphics division for good, to preclude any potential theft of NVIDIA's GT 1030 intellectual property
* Intel has to share all modifications/optimisations they make to GT 1030 to make it work as an iGPU
* Intel is not allowed to use or sell anything they learn from integrating GT 1030 into their CPU
* Intel has to put NVIDIA GeForce branding everywhere (website, CPU boxes, hell probably even a GeForce logo etched onto the heatspreader)
* And of course, Intel has to pay NVIDIA massive royalties for every GT 1030 iGPU they produce, which means that Intel will potentially have to take a loss on each CPU they sell just to be competitive on price

Unless AMD is feeling spectacularly suicidal and decides to license the Vega iGPU to their archrival, NVIDIA is literally the only option Intel's got. Which means that NVIDIA gets to dictate the terms of the agreement.
#50
Vya Domus
Assimilator: Have to disagree with you on this. NVIDIA's ARM-based CPUs are playing in a completely different space than Intel's x86 ones, plus there is already significant synergy between the two, in that the absolute fastest systems are Intel CPUs coupled with NVIDIA GPUs.
That wasn't what I was talking about - notice I mentioned compute, not ARM. I am talking about datacenters, and more precisely datacenters used in fields such as AI and Big Data (or even the automotive industry, though the situation there is a little bit different), where Nvidia has managed to garner an impressive amount of market share in just the last 2-3 years. These are multi-billion-dollar industries that keep on growing by the year. Intel is at the heart of the server market, and here you have another company digging away at what could have been their sales. They are trying to remain somewhat relevant with products such as Xeon Phi, or by marketing their iGPUs as being capable of compute and heterogeneous computing. But their offerings still can't come even close to what Nvidia has.

Instead of having a customer buy 1000 Intel CPU nodes, they now buy just 100, because they can spend the rest of the money on Nvidia GPUs and have a compute cluster that is many times more powerful, more efficient, and cheaper.

That hurts Intel badly, trust me. They aren't happy about that synergy at all. This is why Nvidia won't license any of their GPU IP - because it is the only thing that manages to keep them miles ahead in these particular sectors.