
AMD's Answer to GeForce GTX 700 Series: Volcanic Islands

[...] here is a clearer image, with text you can actually read on the specs:

It's nice and all, but too bad this was already posted 26 posts ago (post #74).
Here, have a cookie.
 
It's nice and all, but too bad this was already posted 26 posts ago (post #74).
Here, have a cookie.

I'm sorry, I didn't notice. Like I said, I tend to only cruise the news articles here these days, and I didn't see all 100+ replies (only the first page or so of replies ends up on the news article).
 
It's nice and all, but too bad this was already posted 26 posts ago (post #74).
Here, have a cookie.

Cool story bro... and you're trolling him... why?
 
"The fab set Q4 as its tentative bulk manufacturing date for the process."

So consumers will be able to purchase limited quantities in Q4 2013 and open availability in Q1 2014? If so, then what will AMD launch between now and then?
 
Go blow that smoke somewhere else. When you look at power usage in actual gaming and average it across titles, the GTX 680 vs. the 7970 is only a 3-4% difference in watts, and the 7970 was more often 5-7% off the GTX 680's performance. True, the GHz Edition went higher on power usage in synthetic tests (not much fun playing synthetic tests), but looking at gaming, the GHz Edition actually came in 7 watts lower than a GTX 680. In real-world gaming a GHz Edition will in no way change what you pay on your power bill; it might, dare I say, even cost you less...
http://www.hardocp.com/article/2012/03/22/nvidia_kepler_gpu_geforce_gtx_680_video_card_review/13
http://www.hardocp.com/article/2012/10/30/xfx_double_d_hd_7970_ghz_edition_video_card_review/10
Well, something must have happened between those old tests and the newer ones at [H]OCP...
http://www.hardocp.com/images/articles/1361915444mQwWwGD0oO_9_1.png


I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.

Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
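Perf-per-watt claims like the ones being argued over here are easy to pin down once you fix an averaged FPS figure and a board power figure. A quick sketch; the numbers below are made up for illustration, not taken from any review:

```python
# Perf-per-watt comparison sketch. The FPS and wattage figures below
# are placeholders, not measurements from any particular review.
def perf_per_watt(avg_fps, board_power_w):
    """Frames per second delivered per watt of board power."""
    return avg_fps / board_power_w

# Hypothetical averaged gaming numbers for two cards:
gtx_680 = perf_per_watt(avg_fps=60.0, board_power_w=185.0)
hd_7970ghz = perf_per_watt(avg_fps=63.0, board_power_w=210.0)

print(f"GTX 680:  {gtx_680:.3f} fps/W")
print(f"7970 GHz: {hd_7970ghz:.3f} fps/W")
print(f"Relative: {hd_7970ghz / gtx_680:.2%}")
```

With made-up inputs like these, a card can be a few percent faster and still land behind on efficiency, which is exactly the ambiguity the reviews keep tripping over.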
 
Well, something must have happened between those old tests and the newer ones at [H]OCP...
http://www.hardocp.com/images/articles/1361915444mQwWwGD0oO_9_1.png

I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.

Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI

The slight efficiency you're claiming for Kepler over Fermi, while real, is partly due to the lack of compute power and its advanced clock and power gating, but it isn't that special if you understand the tech and how specifically Nvidia aimed GK104 at gaming, and it isn't out of AMD's reach at all.
 
IMHO, a big part of Kepler's efficiency is the result of their new power tuning, which can keep the voltage a lot closer to the optimum than previous generations could, resulting in a lower voltage needed for specific clocks, and thus lower power consumption and heat.
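The relationship behind that point is the usual dynamic-power rule: power scales roughly with capacitance times voltage squared times frequency, so trimming voltage margin at a fixed clock pays off disproportionately. A numeric sketch; the capacitance and voltage values are invented for illustration:

```python
# Dynamic power scales roughly as P = C * V^2 * f. Holding the clock
# constant, a small voltage reduction gives an outsized power saving.
# All numbers below are illustrative, not measured values.
def dynamic_power(c_eff_farads, voltage, freq_hz):
    return c_eff_farads * voltage**2 * freq_hz

f = 1.0e9    # 1 GHz core clock
c = 1.5e-7   # effective switched capacitance (made up)

p_high = dynamic_power(c, 1.175, f)  # generous voltage margin
p_low  = dynamic_power(c, 1.075, f)  # tighter, closer-to-optimum voltage

print(f"1.175 V: {p_high:.1f} W")
print(f"1.075 V: {p_low:.1f} W")
print(f"saving:  {1 - p_low / p_high:.1%}")
```

A ~0.1 V reduction here cuts dynamic power by roughly 16%, which is why tighter voltage control alone buys a meaningful chunk of efficiency.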
 
The slight efficiency you're claiming for Kepler over Fermi, while real, is partly due to the lack of compute power and its advanced clock and power gating, but it isn't that special if you understand the tech and how specifically Nvidia aimed GK104 at gaming, and it isn't out of AMD's reach at all.
Really?
I was comparing apples to apples: GF100/GF110 to GK110.

GK110 is the Kepler µarchitecture, isn't it?
 
Well, something must have happened between those old tests and the newer ones at [H]OCP...
http://www.hardocp.com/images/articles/1361915444mQwWwGD0oO_9_1.png

I don't doubt that the power usage cost is negligible for the likely user bases of the cards, but I don't see the 7970GE being more frugal in power consumption than the 680 either...except in a very small minority of games.

Anyhow, I'd say all bets are off with any new architectures on new processes. I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI

My ASUS 7970 Matrix @ 1200/1650 pulls less than 250W while gaming. In Furmark those numbers might be possible, but otherwise they seem incredibly high to me. My 7950s don't pull over 200W either; more like 150W.

I also doubt their full system minus VGA pulled only 63W, as they report. Just saying. Feel free to check any of my motherboard reviews to find more realistic numbers for system power consumption. I'd almost say that [H]ardOCP's reviewer didn't really test anything.

The test setup is listed as a 2500K @ 4.8 GHz. Average power consumption of such a CPU is around 150W in Prime95 and about 90W in gaming. It's impossible to hit only 63W for CPU, fans, and drives. Just saying. Their numbers are 1000% false. I'd subtract at least 75W from each of those listed numbers. Even the NVidia numbers are suspect.
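That back-of-the-envelope check can be written out explicitly. The component figures below are rough assumptions in the spirit of the post, not measurements:

```python
# Rough sanity check of a "63 W system minus VGA" claim, using
# ballpark component figures (assumptions, not measurements).
cpu_gaming_w   = 90    # 2500K @ 4.8 GHz under a gaming load (approx.)
motherboard_w  = 30    # VRM losses, chipset, RAM (approx.)
drives_fans_w  = 15    # storage, fans, misc. (approx.)
psu_efficiency = 0.85  # typical PSU efficiency at this load

dc_load = cpu_gaming_w + motherboard_w + drives_fans_w
wall_draw = dc_load / psu_efficiency  # at-the-wall figure
print(f"Estimated at-the-wall draw minus GPU: {wall_draw:.0f} W")
```

Under these assumptions the GPU-less system lands around 159 W at the wall, well above the 63 W figure being questioned, which is the point of the complaint.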
 
Really?
I was comparing apples to apples: GF100/GF110 to GK110.
GK110 is the Kepler µarchitecture, isn't it?

Yeah, yeah, GK110's power efficiency is way beyond AMD's... not, IMHO. Am I missing your point here?
 
Yeah, yeah, GK110's power efficiency is way beyond AMD's... not, IMHO. Am I missing your point here?
Yeah, you probably are. What I said earlier was:
I'm pretty sure that no one would have predicted the perf-per-watt difference between Fermi and Kepler, so there is no reason why the same can't be said for SI > VI
So, if Nvidia can improve on efficiency between one µarch and another, then the same holds true for AMD going from SI (Southern Islands) to VI (Volcanic Islands). I made no comparison between the two vendors regarding what might/could eventuate. The only comparison was in the earlier part of the post, pointing out to Casecutter the vagaries of what passes for "power usage under load" even within tests carried out by the same site.
If I'm comparing µarch to µarch, then I would generally compare the analogue of each architecture's GPUs. GF100/GF110 and GK110 are similar in die size, placement within the product stack hierarchy, and feature set.
I also doubt their full system minus VGA pulled only 63W, as they report. Just saying. Feel free to check any of my motherboard reviews to find more realistic numbers for system power consumption. I'd almost say that [H]ardOCP's reviewer didn't really test anything.
I don't claim the [H]ardOCP figures are definitive either - they really can't be, given the variance between tests conducted only a few months apart. I only used the [H]ardOCP result because Casecutter was using the same source for his initial argument.
 
Yeah, yeah, GK110's power efficiency is way beyond AMD's... not, IMHO. Am I missing your point here?

GK110 with one SMX shut down still consumes at least 20W more than a 7970GHz and gets trashed by it in anything compute. And it isn't that much faster in a good number of games; heck, it's slower in DiRT Showdown, Tomb Raider (under some circumstances), and some other titles.

...and that's considering they had more than a year to get it right (LMAO @ the people saying they didn't release it because GK104 was "good enough").

nVidia might have marginally better power consumption at the high end, but that is all; nothing special about it. Looking at the lower-end chips, AMD actually has a wider advantage over nVidia than the other way around with the higher-TDP cards.
 
GK110 with one SMX shut down still consumes at least 20W more than a 7970GHz and gets trashed by it in anything compute. And it isn't that much faster in a good number of games; heck, it's slower in DiRT Showdown, Tomb Raider (under some circumstances), and some other titles.

...and that's considering they had more than a year to get it right (LMAO @ the people saying they didn't release it because GK104 was "good enough").

nVidia might have marginally better power consumption at the high end, but that is all; nothing special about it. Looking at the lower-end chips, AMD actually has a wider advantage over nVidia than the other way around with the higher-TDP cards.

Why persist in spreading misinformation? This is from a previous thread about the GTX780:

From W1zzard's own GTX Titan review you can find in the TPU website:

Power consumption:
[charts: power_multimon, power_average, power_idle, power_peak, power_maximum, power_bluray]

7970 GHz beat by Titan in terms of power consumption efficiency in every single scenario

Relative performance (average of every single 3D benchmark on every resolution):

[charts: perfrel, perfrel_1280, perfrel_1680, perfrel_1920, perfrel_2560]

The GTX Titan beats the 7970GHz at every single resolution. Now for Tomb Raider; this is from W1zzard's review of the 7990:

[charts: tombraider_1280_800, tombraider_1680_1050, tombraider_1920_1200, tombraider_2560_1600, tombraider_5760_1080]

The GTX Titan is faster than the 7970GHz at every resolution in that particular game. You may counter that the 7990 is faster (and it is), but that's not even the point. Dunno about DiRT Showdown, but if what you say is true (W1zzard doesn't even test cards using that game), then it's probably the only scenario where the 7970GHz beats the Titan...

EDIT: Oh wait, I found these benchmarks using DiRT Showdown at Anand's:

[charts: 53356.png, 53357.png, 53358.png, 53359.png]

Only in one scenario does the 7970 "beat" Titan (if you call 0.9 FPS beating).

EDIT 2: As for the 7970 "trashing" Titan in compute performance, the theoretical max double-precision (FP64) throughput for the 7970 is 1.08 TFLOPS whereas Titan's is 1.3 TFLOPS. But don't take it from me; this is (once again) from AnandTech, an analysis of Titan's compute performance by Rahul Garg, a Ph.D. specializing in the field of parallel computing and GPGPU technology:

[charts: 53221.png, 53222.png, 53225.png, 53401.png, 53414.png]

Out of all the compute tests performed, only in the SystemCompute benchmark is Titan beaten by the 7970GHz; in all the other benchmarks Titan leaves the 7970 in the dust... I wouldn't exactly call that "trashing".
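The theoretical FP64 figures quoted in this post follow directly from shader count, clock, and each architecture's FP64 rate. A sketch; the clocks and ratios are the commonly cited figures and should be treated as assumptions:

```python
# Theoretical peak FP64 throughput: shaders * clock * 2 FLOPs (FMA)
# * the architecture's FP64:FP32 rate. Clocks and ratios below are
# the commonly quoted figures, treated here as assumptions.
def fp64_tflops(shaders, clock_ghz, fp64_ratio):
    return shaders * clock_ghz * 2 * fp64_ratio / 1000.0

# HD 7970 GHz Edition: 2048 shaders @ 1.05 GHz, FP64 at 1/4 rate
hd7970ghz = fp64_tflops(2048, 1.05, 1 / 4)
# GTX Titan: 2688 shaders, FP64 at 1/3 rate; the card drops its clock
# (to roughly 725 MHz) when full-rate FP64 is enabled
titan = fp64_tflops(2688, 0.725, 1 / 3)

print(f"7970 GHz: {hd7970ghz:.2f} TFLOPS FP64")
print(f"Titan:    {titan:.2f} TFLOPS FP64")
```

Under those assumptions the formula reproduces the ~1.08 vs ~1.3 TFLOPS figures cited above, which is where the theoretical gap comes from.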
 

I don't really care for your cherry-picked reviews TBH...

Also, I do want to remind everyone that TPU does not hold the absolute truth in regards to GPU reviews, you know?
 
Time for some lulz.
From the very same W1zz's review:

Wahaha~!
[chart: ac3_5760_1080.gif]


Wahahaha~!
[chart: sleepingdogs_5760_1080.gif]
 
I don't really care for your cherry-picked reviews TBH...

Also, I do want to remind everyone that TPU does not hold the absolute truth in regards to GPU reviews, you know?

Cherry-picked? This is the TPU forums you're posting at; what's more appropriate than W1zzard's own reviews?

Not only that, but every scenario presented completely contradicts the facts you mentioned. I'm not cherry-picking anything; I'm actually posting every single test result, and you mention TR and DiRT... now I'm the one cherry-picking?

You know, it doesn't really matter. If even showing you all the results (including studies from a Ph.D., no less) won't convince you, then nothing will. If that's how you feel about this card in particular, you're entitled to your opinions...

Moving on...

EDIT: Just saw Vinska's reply, and I'm the one cherry-picking, right...? I presented the condensed results for every single resolution in every single game... but I can see this could drag on forever. It doesn't really matter; you guys win, OK ;)

Peace :)
 
EDIT: Just saw Vinska's reply, and I'm the one cherry-picking, right...? I presented the condensed results for every single resolution in every single game... but I can see this could drag on forever. It doesn't really matter; you guys win, OK ;)

Peace :)

yeah, it seems like cherry picking, but my point actually was:
Take these reviews with a HUGE grain of salt.

If you take a better look at W1zz's review: in Sleeping Dogs, the 7970 [GE] had almost twice [!] the fps at 5760x1080 compared to 2560x1600, and at 1920x1200 it was only slightly behind 5760x1080.
A similar situation with AC3 - at 5760x1080 it ran significantly faster than at 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080.

If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.

EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too), but no one seemed to care at all. Yet I would LOVE to get an explanation, or even a guess, as to WTH is wrong here (as something obviously is).
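The oddity being flagged here is simple to state: average FPS should fall as pixel count rises on the same card and settings, so any result where a higher resolution is faster deserves a second look. A minimal check, with hypothetical resolutions and FPS values:

```python
# Flag suspicious benchmark results: average FPS should not rise
# when the pixel count rises on the same card/settings.
# Resolutions and FPS values here are hypothetical.
results = {
    (1920, 1200): 62.0,
    (2560, 1600): 41.0,
    (5760, 1080): 60.0,  # more pixels than 2560x1600, yet faster?
}

# Order the results by total pixel count, then compare neighbours.
ordered = sorted(results.items(), key=lambda kv: kv[0][0] * kv[0][1])
for (prev_res, prev_fps), (res, fps) in zip(ordered, ordered[1:]):
    if fps > prev_fps:
        print(f"fishy: {res[0]}x{res[1]} ({fps} fps) beats "
              f"{prev_res[0]}x{prev_res[1]} ({prev_fps} fps)")
```

With these numbers, 5760x1080 beating 2560x1600 gets flagged, which is exactly the Sleeping Dogs / AC3 pattern being complained about.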
 
OK, fanboy wars will never end, so can't we finish with the dick-size contest and go back to the topic?
What I see in the cleaned-up pictures (thx apocolypes) looks more like a Piledriver-based APU (maybe for a next-gen console) than a discrete GPU. It was exciting at the beginning, but on closer look it is more likely fake news. :(
 
OK, fanboy wars will never end, so can't we finish with the dick-size contest and go back to the topic?

Disagreements and conversation about said topic do not qualify as fanaticism, so I recommend not lumping everyone together and calling them "fanboys" when you're just adding to the noise by saying this.

I think we can all agree that the Titan is a powerful card at the cost of some extra moolah, where the 7970 provides somewhat less performance for considerably less moolah. Whether or not the 700-series cards will be more like Titan, we don't know, but what I will say is that regardless of what NVidia has up their sleeves, AMD is working on something as well.

I think everyone should calm down and acknowledge that NVidia and AMD are both very good companies that produce quality hardware. If you disagree with me, then maybe you're the one being a fanatic, and I'll challenge you to design a better GPU if you're going to keep bashing people who are doing things that most here can probably only dream of.
 
(including studies from a Ph.D., no less)

OK, now I know you are joking... How the hell is that relevant to anything ever discussed here? What, doctors or highly educated or highly positioned people can't be biased, wrong, or just simply... mistaken? If anything, they are more prone to ego mistakes and corruption. But then again, that's some pure generalization to make a point right there.

I can find any number of reviews where the Titan hovers at 35W or more over the 7970GHz, as well as benches showing it being beaten by a bunch of frames in DiRT: Showdown, Tomb Raider, and probably some other not-so-well-known titles, as well as having the Radeon breathing down its neck or tying it in Sleeping Dogs, Far Cry 3: Blood Dragon, Metro 2033, AvP 2010, Sniper Elite V2, Max Payne 3, and some games at 4K... As for compute... don't get me started.
And so could you, very probably (heck, you just did)... so I don't really care; I just hate seeing an overpriced piece of late hardware praised for things that aren't even true.


Disagreements and conversation about said topic do not qualify as fanaticism, so I recommend not lumping everyone together and calling them "fanboys" when you're just adding to the noise by saying this.

What I was about to reply to him. LOL
Thanks.


Edit: So, more on topic. Like the GF 7900GTX to 8800GTX (DX9.0c to DX10) and the Radeon HD 4870 to HD 5870 (DX10 to DX11), I see this Volcanic Islands card as another huge jump in performance, but related to the jump from HD to UHD more than anything else, like an API upgrade, because let's face it, none of today's cards cut it for 3840x2160 gaming (not that it's here yet anyway)... I dislike multi-monitor setups so much that multi-GPU and the subsequent issues with it are a non-issue for me from the get-go.
 
Did you see the review with the 4K resolutions? In a single-card setup the 7970 performed quite well, but Titan did come out on top. And TBH, like people said before, the hardware is performing well on both sides and it's up to the software to catch up.

And is that Volcanic Islands diagram real?
 
From W1zzard's own GTX Titan review you can find in the TPU website:

Power consumption:
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_Titan/images/power_multimon.gif

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_Titan/images/power_average.gif

7970 GHz beat by Titan in terms of power consumption efficiency in every single scenario

Well, the power consumption figures from TechReport show a different picture:

The 7970GHz Edition consumes less than 10 watts more than the GTX 680 at full load, but it consumes less power than the 680 at idle, and up to 11 watts less when the display is off, and that makes it more power efficient than the 680.


As for the 7970 "trashing" Titan in compute performance, the theoretical max double-precision (FP64) throughput for the 7970 is 1.08 TFLOPS whereas Titan's is 1.3 TFLOPS. But don't take it from me; this is (once again) from AnandTech, an analysis of Titan's compute performance by Rahul Garg, a Ph.D. specializing in the field of parallel computing and GPGPU technology:

Out of all the compute tests performed, only in the SystemCompute benchmark is Titan beaten by the 7970GHz; in all the other benchmarks Titan leaves the 7970 in the dust... I wouldn't exactly call that "trashing".

I wouldn't trust AnandTech, and I certainly think they are biased in favor of Intel and against AMD.

In the compute tests from Tom's Hardware and TechReport, or even Hexus, you get a completely different picture. The 7970 does trash even the dual-GPU 690 and blows it out of the water when it comes to shader performance in GPGPU.
 
Disagreements and conversation about said topic do not qualify as fanaticism, so I recommend not lumping everyone together and calling them "fanboys" when you're just adding to the noise by saying this.

So you are telling me that Titan vs. 7970 fits nicely under "AMD's Answer to GeForce GTX 700 Series: Volcanic Islands"?
English isn't my native language and I don't pretend to understand it perfectly, but obviously I understand it even worse than I thought. It seems no one wants to (or can) comment on what is shown in the picture, so let's share more "test results" that favor "my graphics card".
Anyway, the discussion went too far away from that to be useful. GL with the diagram comparison.
The truth is out there.
 
A similar situation with AC3 - at 5760x1080 it ran significantly faster than at 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080.

If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.

EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too), but no one seemed to care at all. Yet I would LOVE to get an explanation, or even a guess, as to WTH is wrong here (as something obviously is).



Only AMD can answer why this is the case. I tested it myself and get the same results W1zz does. AMD has said their memory management is broken/sub-optimal, and that's where it's noticed... Eyefinity.

Also, Eyefinity doesn't actually draw every single pixel on the side monitors at the same ratio/aspect as on the primary monitor, due to the fish-eye effect. So although the resolution of the monitors is 5760x1080/1200, the workload may not actually be that many pixels, depending on the app.


Do keep in mind that W1zz used to write ATITool, and writes other AMD-specific clocking apps. Best I can tell, he really doesn't care who is faster and has no agenda... notice we don't have ads here except on the front page. TPU is not a site driven by the opportunity to make money doing reviews; we just provide the numbers, and you decide who you like based on the results, because anyone can replicate our tests in every review. For my part, I actually hope you do test and check our numbers... I know you'll find you get the same results.
 
yeah, it seems like cherry picking, but my point actually was:
Take these reviews with a HUGE grain of salt.

If you take a better look at W1zz's review: in Sleeping Dogs, the 7970 [GE] had almost twice [!] the fps at 5760x1080 compared to 2560x1600, and at 1920x1200 it was only slightly behind 5760x1080.
A similar situation with AC3 - at 5760x1080 it ran significantly faster than at 2560x1600, and 1920x1200 had pretty much the same framerates as 5760x1080.

If that doesn't spell out the phrase "something is fishy with this review", then I don't know what would.

EDIT: I pointed out these things in the review discussion thread (there was a similar thing with the 7990, too), but no one seemed to care at all. Yet I would LOVE to get an explanation, or even a guess, as to WTH is wrong here (as something obviously is).

That's interesting, have you tried sending a PM to W1zzard with your findings? I'm sure he'll appreciate it and change the charts accordingly :)

Disagreements and conversation about said topic do not qualify as fanaticism, so I recommend not lumping everyone together and calling them "fanboys" when you're just adding to the noise by saying this.

Couldn't agree more; thank you very much for your post. In the almost 9 years I've been visiting this forum, this is the first time I've been labeled a fanboy. Truth is, I had never before made a post with so many charts to try and get my point across, and all for naught LOL :toast:


OK, now I know you are joking... How the hell is that relevant to anything ever discussed here? What, doctors or highly educated or highly positioned people can't be biased, wrong, or just simply... mistaken? If anything, they are more prone to ego mistakes and corruption. But then again, that's some pure generalization to make a point right there.

The guy's field of study is parallel computing and GPGPU technology; not much room for bias in that field. Then again, like I said before, you're entitled to your opinion, and you have already mentioned your mistrust of doctors in general - and of science by extension (?!) - so there's no point in trying to convince you otherwise, right?

I guess we can all agree that at this point speculating on the performance of graphics cards that are yet to be released is pointless, as there is no evidence whatsoever to back any claims. All we can do is wait and see; no point in fighting to try and show the world who has the biggest e-peen :p

It's all good, like I said, this could drag on forever, perhaps it's better to move on, for the sake of this thread :)
 