
XFX RX 480 GTR Performance Results

Sure, in multi-threaded apps. I'm not so sure in gaming though, since my 3770 is at 4.1 GHz.

It's barely that much faster in single thread.

10% +/- per core vs Sandy, so in reality it will make maybe a 1-2 FPS difference.
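As a rough illustration of why a ~10% per-core difference often shrinks to 1-2 FPS: a frame can't finish faster than the slower of the CPU and GPU stages, so a CPU speedup only shows up where the CPU is the limiter. A minimal sketch with hypothetical frame times (not measurements):

# Toy frame-rate model: each frame is limited by the slower of the
# CPU and GPU stages. All numbers are illustrative, not measured.

def fps(cpu_ms, gpu_ms):
    """Effective FPS when a frame must wait for both stages."""
    return 1000.0 / max(cpu_ms, gpu_ms)

print(fps(12.0, 16.0))         # GPU-bound scene: 62.5 FPS
print(fps(12.0 / 1.10, 16.0))  # 10% faster CPU: still 62.5 FPS

print(fps(16.0, 12.0))         # CPU-bound scene: 62.5 FPS
print(fps(16.0 / 1.10, 12.0))  # 10% faster CPU: ~68.8 FPS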
 
[attached benchmark graphs: 53478.png, 53477.png]

The difference in architectures is plainest to see in our single-thread test - both the X5690 and E5-2690 will be applying maximum turbo (3.73 GHz and 3.8 GHz respectively), running at similar clocks, meaning the IPC improvements of Sandy Bridge-E give it a 2.5% increase overall despite only a mild (1.8%) clock increase.
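As an aside on how such numbers decompose: the per-clock (IPC) advantage is the performance ratio divided by the clock ratio. A minimal sketch using the turbo clocks from the quote and a hypothetical pair of scores standing in for the 2.5% overall gain:

# Separate an IPC gain from a clock gain: perf_ratio = ipc_ratio * clock_ratio.
# Clocks are the single-core turbos from the quote; the scores are
# hypothetical placeholders, not the review's actual data.

x5690_clk, e5_2690_clk = 3.73, 3.80        # GHz, max single-core turbo
x5690_score, e5_2690_score = 100.0, 102.5  # assumed 2.5% overall gain

perf_ratio = e5_2690_score / x5690_score
clock_ratio = e5_2690_clk / x5690_clk
ipc_ratio = perf_ratio / clock_ratio

print(f"clock advantage: {clock_ratio - 1:+.1%}")         # about +1.9%
print(f"per-clock (IPC) advantage: {ipc_ratio - 1:+.1%}")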
 
Is that so... so how is the 480 then?
I'm thinking of jumping ship to team red atm, and the 480 seems nice in terms of price/performance in some of the reviews. But I still have my doubts about AMD...
 

He tested at both 1080p and 1440p, from what I could read of it.
 
Sorry to break your argument, but SB is 10%+ faster than 1st-gen Core, clock for clock. There are reviews with more than two charts that show it pretty clearly - cherry-picking is pretty senseless. Also, a 3770 isn't SB, it's Ivy. That's a 10-15% IPC difference compared to that Xeon. It's probably faster even with its lower clocks.

Still, that must not be the reason why the min FPS is so low. I think it's maybe the difference in thread count - a 6-core with HTT enabled has lower min FPS than without HTT (I tested it myself).
 
I take it this is at 1080p, right? That min FPS is a bit low compared to my 970. Is the 480 just not up to it, or is your X5650 holding it back?
1080p & 1440p.

I can't really say what's the cause of the minimums. I'm leaning toward HDD reading and seeking. GPU usage was 99%, never breaking 3 GB of VRAM usage.
 
Try disabling HTT and run it again - I bet the min FPS, even the avg, will rise. I ran such a test on my 6-core too, in GTA 5. Min FPS was about 6 FPS higher without HTT.
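If you do toggle it in the BIOS, a quick way to confirm the change took effect is to compare logical and physical CPU counts. A minimal sketch using the third-party psutil package (one option among many):

# Check whether SMT/Hyper-Threading is currently active: with HT on,
# the OS sees twice as many logical CPUs as physical cores.
import psutil  # third-party: pip install psutil

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)

print(f"physical cores: {physical}, logical CPUs: {logical}")
print("SMT/HT appears", "enabled" if logical > physical else "disabled")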
 

Notice that the Xeon is facing off vs Sandy-E? Consumer-grade Ivy is very different and outperforms Sandy-E clock for clock. Also, Westmere performs worse in a 2P setup for single thread, due to a worse divisor and an inferior memory system, and that's what was used in those graphs.

The OP has a single 6-core Westmere on a high-end consumer board; I have a 4-core variant.

At 4.2 GHz my E5640 was well ahead of even a stock 3960X, and single-core performance per clock was within a 10% margin.


And the guy mentioning it may be bottlenecking is using a non-K 3770, so I assumed it was at stock frequency; I wasn't really wrong with what I stated.

All Sandy & Ivy chips are capable of going past 4.4 GHz, but not without some serious cooling. I did 5.2 GHz on an H100 with my 2500K, and that's a golden chip.
 
Ehhh, performance is still good, so there's no use in disabling HT considering I use it for rendering projects which benefit from the extra threads.
 
Notice that the Xeon is facing off vs Sandy-E? Consumer-grade Ivy is very different and outperforms Sandy-E clock for clock. Also, Westmere performs worse in a 2P setup for single thread, due to a worse divisor and an inferior memory system, and that's what was used in those graphs.
Yes, of course I noticed that. The 4960X, the sister CPU to mine, is 2% faster than the 3960X. SB-E is faster than SB - quad channel, etc.

If you want a comparison, take averages of a whole suite - no cherry-picking, please. I read a lot of reviews; I know the IPC differences anyway. Nehalem is over 5% slower IPC-wise, peaking at up to 20%.

The OP has a single 6-core Westmere on a high-end consumer board; I have a 4-core variant.

At 4.2 GHz my E5640 was well ahead of even a stock 3960X, and single-core performance per clock was within a 10% margin.
Stock. The 3960X at stock runs at much lower clocks than your CPU. I can clock my 3960X at 4.2 GHz too, and then you'll see that it's faster. There's no point in comparing at different clocks if you want to know the IPC.

And the guy mentioning it may be bottlenecking is using a non-K 3770, so I assumed it was at stock frequency; I wasn't really wrong with what I stated.
Yeah, as I said, it's rather because of HTT than the differences between these CPUs - because we're talking min FPS, and HTT decreases that de facto.

All Sandy & Ivy chips are capable of going past 4.4 GHz, but not without some serious cooling. I did 5.2 GHz on an H100 with my 2500K, and that's a golden chip.
Yeah, my 3960X does over 5 GHz easily; I just don't have the watercooling for it. With my NH-D14, 4.8 GHz is the maximum - after that the PC shuts down to prevent damage. Btw, what happened to that 2500K?

@Durvelle27 I said for only one test, not all the time, obviously. But whatever - don't do it then.
 
Westmere is a refresh on 32 nm; it's not identical to Nehalem - even clock for clock it's slightly ahead of Nehalem, and it has a larger cache.

I sold my Sandy system due to money issues over 2 years back now.

[attached images: gh7Uu6k.jpg, 2ebetqs.jpg]
 
4 GHz vs 4 GHz

Westmere-EP vs Sandy-E

[attached CPU-Z screenshots: gvGeKK7.png, cpu-z-screens-png.77340]

Westmere-EP at 4.46 GHz, with very high temps on an ITX cooler.

[attached screenshot: untitled-png.77156]
 
SB-E has far better IPC than Westmere/1st-gen Core arch. CPU-Z is of no conclusive relevance, btw. You're just cherry-picking again.

Let's have a look at some proper benchmarking:

Link 1:
https://us.hardware.info/reviews/62...l-ivy-bridge-sandy-bridge-and-nehalem-results

Oops, a 10-20% IPC win for SB-E. And btw, NO, Westmere is the same architecture as Nehalem, just with more cache. It's a "tick" (same architecture on a smaller node), comparable to Ivy Bridge vs Sandy - it's maybe a very small tad faster than Nehalem, like Ivy vs Sandy or Broadwell vs Haswell. Everyone knows that.

Link 2:

http://m.3dcenter.org/news/ipc-gewinne-zwischen-den-verschiedenen-intel-architekturen

A 10% IPC win for SB-E again.

Link 3:

http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974-11.html

Even the 4-core Sandys win easily vs the 1st-gen 6-cores! And that's the consumer vs the enthusiast platform; SB-E is even faster - with quad channel it has nearly double the bandwidth of LGA1366's triple channel. Nehalem/Westmere has no chance.

So I hope this is enough. I won't even talk about the old platform lacking PCIe 3.0.
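On the "average a whole suite" point: the usual way is a geometric mean of per-test ratios, ideally with both CPUs at matched clocks. A minimal sketch with made-up scores (not data from the linked reviews):

# Compare two CPUs across a whole suite instead of cherry-picking one
# test: geometric mean of per-test score ratios. Scores are made up.
from statistics import geometric_mean

sandy_e  = {"cinebench": 100, "x264": 100, "7zip": 100, "game_avg": 100}
westmere = {"cinebench":  88, "x264":  92, "7zip":  95, "game_avg":  90}

ratios = [sandy_e[t] / westmere[t] for t in sandy_e]
print(f"overall SB-E advantage: {geometric_mean(ratios) - 1:+.1%}")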
 
Did I not say 10% all this time?

I swear, sometimes people do this just to piss me off.

Tell us more about PCIe 3.0 with GPUs that can't even saturate x16 1.0 :rolleyes::laugh:

Nice unheard-of website, though. 3DCenter.

One core beating an inferior arch at one core - learn to read.
 
Just that it's more than 10%. You're pretending they're the same, but SB-E is a new architecture. It's clearly better.

I don't care to piss you off; I thought you were trying to piss me off, though.

Uhm, PCIe 2.0 is 1-5% slower than 3.0 (both x16) depending on the game and GPU. PCIe 1.0 is simply not fast enough. And I bought my platform for 5 years or more, maybe; I certainly don't want PCIe 2.0 with that in mind. You're just trying to downplay SB-E so that your own platform looks better in comparison. That's what pisses me off - emotions have no place in a tech discussion; they're just distracting and non-constructive.
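For reference, the theoretical x16 numbers behind that argument, counting only line-encoding overhead (a back-of-the-envelope sketch; real-world throughput is lower):

# Theoretical PCIe x16 bandwidth per direction, accounting only for
# line encoding (8b/10b for gen 1/2, 128b/130b for gen 3).

gens = {
    "1.0": (2.5, 8 / 10),   # GT/s per lane, encoding efficiency
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

for gen, (gt_s, eff) in gens.items():
    gb_s = gt_s * eff * 16 / 8   # 16 lanes, 8 bits per byte
    print(f"PCIe {gen} x16: {gb_s:.2f} GB/s")
# PCIe 1.0 x16: 4.00 GB/s, 2.0: 8.00 GB/s, 3.0: 15.75 GB/s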

There are more websites, and even if one of my links were nonsense, there are still two others. Face it, your point is lost. If you want to keep up this childish bullshit I'll simply quit and laugh my ass off while enjoying my clearly superior system. Even my old 3820 was far better than your Xeon. I'm so sorry.
 
LOL, you clearly missed my point - you came in just to justify your own system over me saying it's 10% +/-, which means it can go below or above 10%.

Here is the original post:

It's barely that much faster in single thread.

10% +/- per core vs Sandy, so in reality it will make maybe a 1-2 FPS difference.

But that's OK; it obviously matters to you. I used Sandy, you seem to forget that, lol.
 
Just to explain it to you: if something is 10-20% faster IPC-wise, it doesn't matter whether it's single- or multi-core. And because games do use 2-8 cores, I couldn't care less about pure single-core performance.

Your system is good; I didn't want to say it's bad. I'm just not accepting it if you start a discussion and spread wrong information.

Yeah, but you seem to miss it so much that you want to argue your Xeon is the same. Well, maybe the HTT does make it even - at best. I still think an SB at 5.2 GHz is superior in 98% of games.
 
Games are hugely GPU-limited until you start hitting 144 Hz refresh rates. The OP was asked if his CPU is bottlenecking the RX 480, and I gave educated answers to it. On the difference vs the other guy's CPU (a 3770 - Ivy - which I thought was at stock because it's a non-K SKU), I said it's faster than the OP's CPU; he returned with the answer that it's at 4.1 GHz, so yeah, it's about on par or better, since Ivy is around 5% better than Sandy. I just made the point that in practice those numbers don't show a huge difference.
 
As long as the CPU has high clocks, yes (and is basically an Intel Core series chip). I just saw a gaming comparison between the i5 4460 and i5 6400 - the 6400 was overall a tad better despite lower clocks because of IPC improvements, but neither was exactly perfect - and that was only with an R9 380. A 980 Ti would have been bottlenecked even worse. CPUs are still important and their performance does matter - even more so with high-end GPUs.
 
For shits and giggles I ran BF4 @3200x1800 to see how it performs

Battlefield 4, 64-player Team Deathmatch, Ultra w/2xMSAA

2016-09-28 10:43:21 - bf4
Frames: 14813 - Time: 300000ms - Avg: 49.377 - Min: 26 - Max: 75


GPU usage was pegged at 100% for the full match
GPU VRAM usage was 4.2 GB
CPU usage hovered between 50-60%
System RAM usage was a little over 7 GB

Using Windows 10 as well.
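Those FRAPS-style summary lines are easy to sanity-check, since the average should equal frames divided by time. A minimal parsing sketch:

# Parse a FRAPS-style benchmark summary line and cross-check the average.
import re

line = "Frames: 14813 - Time: 300000ms - Avg: 49.377 - Min: 26 - Max: 75"
m = re.search(r"Frames: (\d+) - Time: (\d+)ms - Avg: ([\d.]+)", line)
frames, time_ms, avg = int(m[1]), int(m[2]), float(m[3])

computed = frames / (time_ms / 1000)  # frames per second
print(f"reported avg: {avg}, computed: {computed:.3f}")  # both 49.377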
 
You should really do that non-HTT test one time, just to see it. ;) You can re-enable it afterward.
 
@Durvelle27

Sweet card :) . If I were in the market for an RX 480, the GTR seems like the one to me - I like the cooler/backplate, the VRM is great, and it has better power delivery than the reference PCB.

I would concur with Kanan - do some benches with HT disabled. From when I was deciding whether to get an i7 or an i5, a lot of "stuff" I read pointed toward games not benefiting from HT, and for some games it even has a negative impact on performance. I understand you use the rig for rendering, but IIRC it's just a case of switching HT on/off in the mobo BIOS.
 
Crysis 3 MP Skyline Very High Settings w/SMAA 2TX @2560x1440

2016-09-28 15:00:46 - crysis3
Frames: 16475 - Time: 300000ms - Avg: 54.917 - Min: 44 - Max: 69

One thing I noticed after enabling in-game monitoring: even though my GPU is at 100% usage, the clocks never hit what I set in MSI Afterburner, which is the stock boost clock of 1338/2000. The GPU clocks seem to sit around 1200/2000 and temps are below 70°C. Trying to figure out why this is happening.
 
Power limit.

Let's say you even mod the BIOS to raise the PL - the problem is the driver has its own upper limit on PL. The Stilt has created an application which overrides this; it's on OCN :) .
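For a rough sense of what those clocks cost before the fix, assuming performance scales linearly with core clock (optimistic for games, so treat it as an upper bound):

# Back-of-the-envelope estimate of performance lost to the clock deficit,
# assuming core-clock-linear scaling (an upper bound, not a measurement).

target_mhz = 1338    # boost clock set in MSI Afterburner
observed_mhz = 1200  # what the card actually held under load

deficit = 1 - observed_mhz / target_mhz
print(f"core clock deficit: {deficit:.1%}")  # about 10.3%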
 
Got it all figured out, guys.
Stripped the card down and there was barely any decent TIM contact with the heatsink. Replaced it with some IC Diamond, and the card now holds constantly at 1370/2100 and runs 10°C cooler.
 