Wednesday, June 10th 2015

GFXBench Validation Confirms Stream Processor Count of Radeon Fury

Someone with access to an AMD Radeon Fury sample put it through the compute performance tests of GFXBench and submitted the score to the suite's online database. Running on pre-launch drivers, the sample is identified simply as "AMD Radeon Graphics Processor." Since a GPGPU application can query how many compute units (CUs) a GPU has (so it can schedule its parallel workloads accordingly), GFXBench was able to report a plausible-sounding CU count of 64. Since Radeon Fury is based on Graphics Core Next, in which each CU holds 64 stream processors, the stream processor count of the chip works out to 4,096.
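For the curious, the query involved is an ordinary device-info call. Below is a minimal sketch (our own illustration, not GFXBench's actual code) of reading the compute unit count through the standard OpenCL API - the same property a GPGPU benchmark can use to size its workloads:

```c
/* Minimal sketch: query the compute unit count of the first GPU via the
 * standard OpenCL API. Illustration only, not GFXBench's actual code. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_uint cu_count;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    /* CL_DEVICE_MAX_COMPUTE_UNITS maps to CUs on GCN, SMs on NVIDIA */
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(cu_count), &cu_count, NULL);

    /* On GCN each CU holds 64 stream processors */
    printf("compute units: %u -> %u stream processors (if GCN)\n",
           cu_count, cu_count * 64);
    return 0;
}
```

On a GCN part, multiplying the reported CU count by 64 gives the stream processor total - hence 64 × 64 = 4,096 for this sample.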
Source: VideoCardz

39 Comments on GFXBench Validation Confirms Stream Processor Count of Radeon Fury

#26
Initialised
crazyeyesreaper: They don't really have a choice. They're still stuck at 28 nm, which has been the mainstay for over three years. The process node is stagnant because the jump to 20 nm never happened, so even NVIDIA can't do anything truly new until 2016, probably around Christmas. It is what it is. With a drop to 16 nm, AMD can redesign all cards to use the same GCN version and move forward, but until then we're stuck with rebrands, because it doesn't make sense to waste R&D cash on a new GPU that performs exactly the same as a previous-gen product.
If they have to rebrand, I'd rather see the HBM card slot in at the top as a 390X and everything trickle down a peg, so that when the charts come out it looks like there are performance gains at each segment.
290X -> 380X
290 -> 380
285 -> 370
270X -> 360
#27
BiggieShady
FrustratedGarrett: That is incorrect. GCN is better suited for general-purpose compute because scheduling is almost entirely driver-independent. NVIDIA, since Fermi, has moved most of the scheduling into their drivers, and according to them, scheduling takes up less than 4.5% of a Maxwell SM cluster's area.

The reason the particle simulation test runs better on AMD is that it's probably a port of the CUDA version of that test.
Let's not mix things up here: the driver issues draw calls, moves data to and from VRAM, and loads compiled shaders. The fact that a shader's assembly can be optimized by the driver and/or a compiler is irrelevant once it's on the GPU in the instruction cache. The driver doesn't constantly schedule instructions; that would be ridiculously slow.
There are different kinds of schedulers here. Maybe you mean work group scheduling (which threadblock/workgroup goes to which SMM), which can be done in the driver. I'm talking specifically about the warp instruction scheduler (that 4.5% of the SMM).
You are right about the GCN scheduler being totally independent ... that doesn't change the fact that you can have a real-world problem only solvable by code that can't feed those big vector processors in the GCN cores optimally - for example, something with lots of scalar instructions whose calculations are interdependent, plus lots of branching in the mix. In that case, OpenCL source or CUDA port, it doesn't really matter; those frameworks are very similar.
What does matter is optimizing specifically for one architecture over another, and I can't really say whether the chosen benches are biased that way.
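To make the scalar-plus-branching point concrete, here is a hypothetical OpenCL C kernel (a toy example, not taken from any of these benchmarks) of the kind of code that starves a wide SIMD machine: a serial dependency chain with a data-dependent branch, so lanes within a wavefront diverge and execute both paths with masking.

```c
/* Hypothetical kernel, illustration only: heavy data-dependent branching
 * plus a serial dependency chain. On a wide SIMD machine like GCN,
 * work-items in a wavefront that take different branches execute both
 * paths with lanes masked off, wasting the vector units. */
__kernel void divergent(__global const float *in, __global float *out)
{
    size_t i = get_global_id(0);
    float x = in[i];

    /* serial chain: each iteration depends on the previous result,
     * so there is no instruction-level parallelism to hide latency */
    for (int k = 0; k < 64; ++k) {
        if (x > 0.5f)               /* divergent branch: lanes disagree */
            x = x * 0.99f + 0.001f;
        else
            x = sqrt(x + 0.01f);
    }
    out[i] = x;
}
```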
#28
ThE_MaD_ShOt
chinmi: Too bad there's no love for AMD in this world. Even most TPU members are NVIDIA users. The world would be a better place without AMD. We're gonna be just fine with only NVIDIA and Intel.
I agree, I haven't seen an AMD user around here in, like, forever. :rolleyes: o_O
#29
xorbe
There are 2 shades of green in the charts. What do they represent? (Median score, and best submitted score?)
#30
GreiverBlade
chinmi: Too bad there's no love for AMD in this world. Even most TPU members are NVIDIA users. The world would be a better place without AMD. We're gonna be just fine with only NVIDIA and Intel.
The two comments I've read from you on two different AMD news posts were both awfully wrong ... well ... at least they're amusing.
Tatty_One: I particularly like that most of us are NVIDIA users; 7 of us that have posted so far in this thread are AMD users ... don't feed the trolls, people! Oops, I just did o_O
ARGH ... oh well, too late, I wrote it so I'm posting it ... I hate wasting.
#31
yogurt_21
So while the SP count has been confirmed, we essentially have to pick our poison here? In two benches it's faster, in two it's quite a bit slower. Hopefully standard reviews will shed some light.

At any rate, it seems my R9 290, at two years old (chip-wise; I only got it last July), is still relevant enough for AMD to simply bump the clocks, add memory, and rebrand?

I get that NVIDIA famously took the 8800 GTS 512 MB and rebranded it twice, but that was over a two-year period. They could at least drop these down to 380X and 380; otherwise, wtf is Fury (XT) going to be named?
#32
arbiter
yogurt_21: So while the SP count has been confirmed, we essentially have to pick our poison here? In two benches it's faster, in two it's quite a bit slower. Hopefully standard reviews will shed some light.

At any rate, it seems my R9 290, at two years old (chip-wise; I only got it last July), is still relevant enough for AMD to simply bump the clocks, add memory, and rebrand?

I get that NVIDIA famously took the 8800 GTS 512 MB and rebranded it twice, but that was over a two-year period. They could at least drop these down to 380X and 380; otherwise, wtf is Fury (XT) going to be named?
AMD cards since at least the 7000 series have been better at OpenCL, in some cases a lot faster. Problem is, that doesn't mean crap in gaming, so it's one of those take-it-with-a-grain-of-salt kinds of benchmarks. Look at how AMD compares their APUs to Intel CPUs in laptops: most benchmarks they use are ones that can be GPU-accelerated. Looks great to people who don't understand, but looks terrible to people who understand that the benchmarks used aren't real-world use.

It's the same as how politicians pick and choose their statements; they say things that are technically correct, until you look at things as a whole and realize they cherry-picked things in their favor.

Had a look: the 8800 GTS 512 MB's 65 nm G92 core was only rebranded to the 9800 GTX; the 9800 GTX+ was G92b, which was 55 nm.
#33
Bad Bad Bear
Tatty_One: I particularly like that most of us are NVIDIA users; 7 of us that have posted so far in this thread are AMD users ... don't feed the trolls, people! Oops, I just did o_O
iGPU FTW! What about us iGPU campers? :D
#34
semantics
yogurt_21: So while the SP count has been confirmed, we essentially have to pick our poison here? In two benches it's faster, in two it's quite a bit slower. Hopefully standard reviews will shed some light.

At any rate, it seems my R9 290, at two years old (chip-wise; I only got it last July), is still relevant enough for AMD to simply bump the clocks, add memory, and rebrand?

I get that NVIDIA famously took the 8800 GTS 512 MB and rebranded it twice, but that was over a two-year period. They could at least drop these down to 380X and 380; otherwise, wtf is Fury (XT) going to be named?
Well, the G92 got a die shrink to extend its life; this rebrand is just spinning out pretty much the final 28 nm designs, so you shouldn't expect much beyond what the 285 already brought. That being said, there are few straight-up rebrands outside of OEM.
#35
xfia
I read months ago that it would be as strong as a 295X2, so I was laughing when I saw the post claiming it would be weaker than a 980 Ti. That HBM don't play ... as in, there needs to be a little balancing going on. There isn't much point putting it on something like a 280X or 290X in its current form. It may be different for Zen if they use it in an integrated APU. Only the top-tier Pascal will beat it all around, and not for long.
#37
H82LUZ73
WTF guys, why all the crying about the naming? Fury XT will be called just that: the AMD Fury XT ... it's to set itself apart from the Radeon 390 line ... and these benches seem to lend an answer as to why the HBM cards will be Fury Pro and Fury XT.
#38
xorbe
jigar2speed: Fury X 3DMark scores leaked - faster than Titan X @ 4K and slower at lower resolutions - videocardz.com/56225/amd-radeon-fury-x-3dmark-performance
This seems "typical" of Radeons. They seem to chug through the pixels at high res, but driver efficiency isn't there for crazy high fps @ low res. At least that is my take.
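That take matches a simple bottleneck model. Here's a toy sketch (an illustration with made-up numbers, not measured data): frame time as the larger of a fixed per-frame CPU/driver cost and a resolution-scaled GPU cost. At low res the driver term caps fps; at high res raw GPU throughput wins.

```c
/* Toy bottleneck model with hypothetical numbers, illustration only:
 * frame time = max(fixed CPU/driver overhead, resolution-scaled GPU work). */
#include <stdio.h>

int main(void)
{
    const double cpu_ms = 6.0;           /* hypothetical per-frame driver overhead */
    const double gpu_ms_per_mpix = 4.0;  /* hypothetical GPU cost per megapixel */
    const double res_mpix[] = { 0.9, 2.1, 8.3 };  /* roughly 720p, 1080p, 4K */

    for (int i = 0; i < 3; ++i) {
        double gpu_ms = gpu_ms_per_mpix * res_mpix[i];
        double frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;  /* bottleneck wins */
        printf("%.1f Mpix: %.0f fps (%s-bound)\n",
               res_mpix[i], 1000.0 / frame_ms,
               cpu_ms > gpu_ms ? "CPU" : "GPU");
    }
    return 0;
}
```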
#39
Caring1
xorbe: This seems "typical" of Radeons. They seem to chug through the pixels at high res, but driver efficiency isn't there for crazy high fps @ low res. At least that is my take.
That's what I have noticed through the years too.