
AMD Readies Radeon HD 7970 GHz Edition

Do you think HD 7970 GHz Edition can make HD 7970 attractive again?

  • Yes

    Votes: 12 14.3%
  • No

    Votes: 25 29.8%
  • For me it never lost attractiveness

    Votes: 47 56.0%

  • Total voters
    84


Man... I kinda feel sorry for them, but at the same time I'm happy that NVIDIA started kicking so much ass.
 
Are you crazy?!?! Of course the new one will be better. It's got a sticker on it! Everyone knows 1337ness comes from stickers! Just look at ricers!

Stickers are so yesterday, you hippy - paint it yourself....

(Yes, Colorful have an iGame Kudan 680 in the works - get your l33t graffiti skillz out.)

 
The original price for the HD 7970 was $549 and it's now $479, so yes, it had a HUGE price drop. Unfortunately for me I bought it while it was $549, but I don't care xD. Drivers are definitely the culprit, because the HD 7970 is definitely superior hardware: 4.3 billion transistors versus 3.5 billion in the 680, more GDDR5 memory, and a 384-bit bus. I don't know why NVIDIA went back down to 256-bit - they even had 512-bit at one point, on the GTX 285 I believe it was. If AMD had the same driver team NVIDIA does, you can bet your ass the HD 7970 would crush the 680 in every benchmark possible.

my HD7970 stats
GPU Clock: 1125MHz
Memory: 1575MHz
Pixel Fillrate: 36.0Gpixel/s
Texture Fillrate: 144.0GTexel/s
Bandwidth: 302.4GB/s
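Those three derived numbers follow directly from the clocks listed plus the stock Tahiti unit counts (32 ROPs, 128 TMUs, 384-bit GDDR5 bus). A quick sanity-check of the arithmetic, assuming those stock unit counts:

```python
# Back-of-the-envelope check of the overclocked HD 7970 stats above.
# Unit counts are the stock Tahiti specs: 32 ROPs, 128 TMUs, 384-bit GDDR5 bus.
core_clock_mhz = 1125
mem_clock_mhz = 1575          # GDDR5 command clock; effective data rate is 4x this
rops, tmus = 32, 128
bus_width_bits = 384

pixel_fillrate = rops * core_clock_mhz / 1000               # Gpixel/s
texture_fillrate = tmus * core_clock_mhz / 1000             # GTexel/s
bandwidth = bus_width_bits / 8 * mem_clock_mhz * 4 / 1000   # GB/s

print(pixel_fillrate, texture_fillrate, bandwidth)  # 36.0 144.0 302.4
```

All three match the GPU-Z readout above, so the tool is just multiplying units by clock.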
 
...what?

How is paying the same price for a card that performs worse, runs hotter, and uses more power a better deal?

1. It doesn't "perform worse".
2. Runs hotter, yet OCs WAY BETTER.
3. Less than 1 watt at idle - and now NVIDIA users care about power consumption?
4. The 7970 is cheaper AND in abundance.

There is NOTHING wrong with the 7970. It was just overpriced in the first month or so.
 
1. It doesn't "perform worse".
2. Runs hotter, yet OCs WAY BETTER.
3. Less than 1 watt at idle - and now NVIDIA users care about power consumption?
4. The 7970 is cheaper AND in abundance.

There is NOTHING wrong with the 7970. It was just overpriced in the first month or so.

^To add to this...

From a price and performance standpoint the AMD cards are better. As many have pointed out, going from a 580 to a 680 you're getting less for your money at a higher price point, as they are currently sold.

The 580 wasn't crippled the way the 680 - or for that matter the GK104 chip - is. Of course this won't matter if playing games is your only concern, but even then you're still paying more for less of a product and fewer features.

I suggest you go read the official NVIDIA website forums and you'll find that a lot of 680 users are wanting features currently only found on the AMD 7000 series. The IQ debate over there is interesting. Oh wait, what about the crying about driver issues and all that jazz? Unless you have fanboy glasses on, reading NVIDIA's forums will take care of that. Heck, they still have a thread going from 2009 on BSODs.

So unless you're extremely partisan to one brand, I don't see how a $20-50 difference is cause for such an uproar. Especially when you get past the ?% overall benchmark disparity and start comparing core features and abilities.

What am I saying. Keep posting, I enjoy reading :D
 
These cards both need to be about $100 cheaper.
 
Well, my take on this is that TSMC finally got 28 nm production figured out by, say, mid-March, and by April AMD started seeing more chips like the ones they had originally designed "Tahiti" around. (And why was Nvidia all "disappointed", dissing AMD, like they didn't have problems of their own?)

The question not being answered here is: could these process-improved chips now provide 1 GHz while offering the same power efficiency and TDP? It kind of reminds me of the original Fermi and what Nvidia got from their re-spin (GF100 vs GF110). I don't think it's just marketing saying "we'll offer reference clocks at 1 GHz" - it's a re-release of what Tahiti was meant to offer originally. I mean, what's the difference between this and what the GTX 260 got when they released it as a "Core 216"?

That said, those who went for 7970s at $550 are a little upset, though AMD never actually knew at that point whether TSMC could fix their issues or whether Nvidia would fare any better, so you've got to go with what you've got. If AMD brings these out at 1 GHz, I figure they'll sit at a $480-500 price while the others get moved out. If they hold or better the current power consumption, and can CrossFire with existing 7970s, it might just take a BIOS flash to get to 1 GHz. It's an equitable response to counter Nvidia without upsetting folk who purchased 3-4 months back.
 
And yes, there were those comparisons last gen; there are ALWAYS the same arguments year after year. It doesn't matter - the fact is the HD 7970 is superior hardware and the drivers are to blame. Slow development from rushing to be first always has its downfalls; the proof is the benchmark comparisons between the preview HD 7970 driver and the current driver. Screw clock-for-clock - the HD 7970 could beat the 680 at base clocks if it had great drivers.
 
Aaah, love reading these comments, lol. They always turn into debates :laugh:
 
Does this mean the "normal" (non-OC) 7970 would get cheaper, or that the OC one would be more expensive? (Guess the second option.)
 
broken english... And yes, there were those comparisons last gen; there are ALWAYS the same arguments year after year. It doesn't matter - the fact is the HD 7970 is superior hardware and the drivers are to blame. Slow development from rushing to be first always has its downfalls; the proof is the benchmark comparisons between the preview HD 7970 driver and the current driver. Screw clock-for-clock - the HD 7970 could beat the 680 at base clocks if it had great drivers.

If you're just talking about driver issues - the GK104 is the far newer card, and the GK104 uses a software scheduling system for the GPU, while the HD 7970 uses a hardware scheduling system.

By that logic the GK104 is far more driver-intensive compared to the HD 7970.

Here, in Hardware Canucks' tests, the GTX 680 is about 15% faster than the HD 7970.

For a better comparison on future games, 3DMark11 tells us the GTX 680 is about 20% faster than the HD 7970. New games run better on GeForce cards - and we buy cards for new games, not for 2006's Crysis.

:)
 
If you're just talking about driver issues - the GK104 is the far newer card, and the GK104 uses a software scheduling system for the GPU, while the HD 7970 uses a hardware scheduling system.

By that logic the GK104 is far more driver-intensive compared to the HD 7970.

Here, in Hardware Canucks' tests, the GTX 680 is about 15% faster than the HD 7970.

For a better comparison on future games, 3DMark11 tells us the GTX 680 is about 20% faster than the HD 7970. New games run better on GeForce cards - and we buy cards for new games, not for 2006's Crysis.

:)

Can you show a link with the HD 7970 losing to the GTX 680 with no driver influence on either card?

Edit: I don't see how it's even possible to show a card's true performance, since the drivers are what make them efficient. ... Fold with both GPUs; whichever one can fold with the highest points average, I guess, would show which one is truly the most powerful.
 
Does this mean the "normal" (non-OC) 7970 would get cheaper, or that the OC one would be more expensive? (Guess the second option.)
I'd say just going with the assertion that they'll be "just OC'd" is not entirely astute. They could be "Tahiti" the way AMD engineering anticipated it would arrive from TSMC back last Oct/Nov: at 1 GHz, all within the 250 W TDP. When they found the only way to stay within the board design's 250 W TDP was to drop it to 925 MHz, that was the only choice without pushing the launch back for who knows how long. And yes, since that time AMD and the AIBs have been able to bin increasingly nice chips that OC, but they still drew substantially higher power in achieving that. Now, the latest chips could provide 1 GHz while offering the efficiency so many thought the 7970 was lacking, especially when OC'd.
 
Where's the explanation? I don't care about results... I know it's slower. I want to know WHY it's slower, yet faster in games.

I mean, after all, wouldn't that kind of be a selling feature for the HD 7970? It does what the GTX 680 cannot?

Personally, I think it's a lack of internal caches that is the issue, and also why it runs comparatively cool. But I do not hear such things; I just hear people claiming it's slower.

It's architectural. NVIDIA simplified the CUDA architecture so it more closely resembles the old Radeon VLIW architecture; they get better pixel pushing and lower power consumption, but at a sacrifice of computing power.
 
It's architectural. NVIDIA simplified the CUDA architecture so it more closely resembles the old Radeon VLIW architecture; they get better pixel pushing and lower power consumption, but at a sacrifice of computing power.


:slap:


That tells me what, exactly? It's very obvious it's different already, and you've said nothing other than that it is...:wtf:

I WANT SPECIFICS!!!


:laugh:
 
The change in architecture is only part of it...

The 680 is a midrange card cranked high on clocks and turboed to fill the missing high-end spot.
Like other mid-series cards it has its DP GPGPU capability cut down... that is also why it uses less power.

Great at games, sucky at GPGPU... less value for more money.
If you only play games... it's a great card. Unfortunately I use both, so I spend less and get more... ;)
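To put rough numbers on that "DP GPGPU capability cut" point: peak throughput is just shader count × 2 ops/clock × clock, and the published double-precision rates are 1/4 of single precision on Tahiti versus 1/24 on GK104. A back-of-the-envelope peak-rate sketch (these are theoretical peaks from the spec sheets, not benchmark results):

```python
# Peak GFLOPS sketch: FP32 = shaders * 2 ops/clock * clock (GHz).
# FP64 runs at 1/4 rate on Tahiti (HD 7970) but only 1/24 on GK104 (GTX 680).
def peak_gflops(shaders, clock_ghz, fp64_ratio):
    fp32 = shaders * 2 * clock_ghz
    return fp32, fp32 * fp64_ratio

hd7970_fp32, hd7970_fp64 = peak_gflops(2048, 0.925, 1 / 4)   # stock 925 MHz
gtx680_fp32, gtx680_fp64 = peak_gflops(1536, 1.006, 1 / 24)  # stock 1006 MHz

print(round(hd7970_fp64))  # ~947 GFLOPS double precision
print(round(gtx680_fp64))  # ~129 GFLOPS double precision
```

So despite similar FP32 peaks, the 7970 has roughly 7x the 680's double-precision throughput on paper, which is why it wins the GPGPU comparisons.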
 
Looks like there are already cards that ship at 1100 MHz and clock to 1250. I'm not sure if this is a big deal or not.

It's not a big deal at all, as even the cards that ship at stock speeds can max out AMD Overdrive at 1125 core / 1575 memory without overvolting.
I wouldn't be surprised if these GHz Edition cards are simply a reference card with a BIOS flash.
 
It's not a big deal at all, as even the cards that ship at stock speeds can max out AMD Overdrive at 1125 core / 1575 memory without overvolting.
I wouldn't be surprised if these GHz Edition cards are simply a reference card with a BIOS flash.

I see you got the 3820. Dude, I couldn't wait for Ivy Bridge-E, lol; the 3770K is better technology and a new shrink, so I went with that instead.
 
:slap:


That tells me what, exactly? It's very obvious it's different already, and you've said nothing other than that it is...:wtf:

I WANT SPECIFICS!!!


:laugh:
How specific do you want it? :wtf:
Well, I'm no microarchitecture designer yet, so I can't tell you specifics beyond what I've read.
As you know, the original CUDA architecture used in Fermi was complicated and used a lot of power.

AMD was at the same time using VLIW, an architecture great for pushing pixels due to its large number of shader units; while it is good for simple parallel processing tasks, due to its simplicity it isn't very suitable for computing.

With GCN, AMD decided to adopt a more complex shader architecture, closer to the CUDA cores present in Fermi, and made an architecture that is good for almost any purpose, partially due to their future focus on APUs - floating-point math is a priority for the architecture. (HSA allows floating-point calculations to be done on GPU cores rather than as FPU calcs on a separate CPU, and a GPU is better at floating point anyhow.)

Then with the GTX 680, Nvidia decided to focus on a gaming architecture, simplifying CUDA in order to fit more cores in a smaller amount of die space and thereby improve rendering; due to the simplification of the core design, though, computing is hampered, and it may point to a complete split between workstation and gaming architectures for Nvidia. (This last statement is a personal assessment, due to the lack of a GK110 chip being present or confirmed for a gaming GPU at this time.)


That's how I understand it all.
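The VLIW-vs-compute point above can be made concrete with a toy example. This is purely an illustrative sketch (a greedy 4-wide bundler, not how any real shader compiler works): a VLIW4 slot can only be filled with mutually independent instructions, so pixel-shader-style independent math packs perfectly, while a compute-style dependency chain leaves most slots empty.

```python
# Toy VLIW4 packer: greedily bundle independent instructions into 4-wide slots.
# Each instruction is (name, set_of_dependency_names). Instructions in a chain
# can't share a bundle, so serial compute code wastes VLIW issue slots.
def vliw4_utilization(instrs):
    bundles, done = [], set()
    pending = list(instrs)
    while pending:
        bundle = []
        for ins in list(pending):
            name, deps = ins
            if deps <= done and len(bundle) < 4:  # ready and slot free?
                bundle.append(ins)
                pending.remove(ins)
        done |= {name for name, _ in bundle}
        bundles.append(bundle)
    # utilization = instructions issued / total slots available
    return len(instrs) / (4 * len(bundles))

# Pixel-shader-ish workload: 8 independent ops -> packs into 2 full bundles
graphics = [(f"op{i}", set()) for i in range(8)]
# Compute-ish workload: 8-op serial dependency chain -> 1 op per bundle
compute = [(f"op{i}", {f"op{i-1}"} if i else set()) for i in range(8)]

print(vliw4_utilization(graphics))  # 1.0
print(vliw4_utilization(compute))   # 0.25
```

That 4x utilization gap on dependent code is the intuition behind AMD moving to GCN's scalar SIMD design for compute, and Nvidia moving the other way with GK104 for graphics.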
 
How specific do you want it? :wtf:
Well, I'm no microarchitecture designer yet, so I can't tell you specifics beyond what I've read.
As you know, the original CUDA architecture used in Fermi was complicated and used a lot of power. http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/images/arch.jpg
AMD was at the same time using VLIW, an architecture great for pushing pixels due to its large number of shader units; while it is good for simple parallel processing tasks, due to its simplicity it isn't very suitable for computing.
http://images.anandtech.com/doci/4061/Cayman block diagram_575px.png
With GCN, AMD decided to adopt a more complex shader architecture, closer to the CUDA cores present in Fermi, and made an architecture that is good for almost any purpose, partially due to their future focus on APUs - floating-point math is a priority for the architecture. (HSA allows floating-point calculations to be done on GPU cores rather than as FPU calcs on a separate CPU, and a GPU is better at floating point anyhow.)
http://www.hardwarezone.com.sg/files/img/2012/01/Tahiti_II_M.jpg
Then with the GTX 680, Nvidia decided to focus on a gaming architecture, simplifying CUDA in order to fit more cores in a smaller amount of die space and thereby improve rendering; due to the simplification of the core design, though, computing is hampered, and it may point to a complete split between workstation and gaming architectures for Nvidia. (This last statement is a personal assessment, due to the lack of a GK110 chip being present or confirmed for a gaming GPU at this time.)
http://media.pcgamer.com/files/2012/03/fermi-block-diagram.png

That's how I understand it all.
You just opened Pandora's box.
 
1. It doesn't "perform worse".
2. Runs hotter, yet OCs WAY BETTER.
3. Less than 1 watt at idle - and now NVIDIA users care about power consumption?
4. The 7970 is cheaper AND in abundance.

There is NOTHING wrong with the 7970. It was just overpriced in the first month or so.

1. All the reviews agreed that the 680 "performs better", so that means the 7970 "performs worse".
2. You're right from the point of view of percentage over stock, but not many 7970s can do 1300 MHz on air (maybe I'm wrong?).
3. A 7970 at 1200 MHz uses a little under 100 W more than a 680 at the same clocks.
4. Right.

There's nothing wrong with the 7970, but it is still overpriced. Wait and see what the 670 will bring. (Unfortunately availability will be the main question.)
 
1. All the reviews agreed that the 680 "performs better", so that means the 7970 "performs worse".
2. You're right from the point of view of percentage over stock, but not many 7970s can do 1300 MHz on air (maybe I'm wrong?).
3. A 7970 at 1200 MHz uses a little under 100 W more than a 680 at the same clocks.
4. Right.

I think what themailman means by "it doesn't perform worse" is that you PLAY with both of them and you won't see any difference...

There's nothing wrong with the 7970, but it is still overpriced. Wait and see what the 670 will bring. (Unfortunately availability will be the main question.)

At this point in time it's the 7970 at $479 vs. the GTX 680 as vaporware, or at $600-690 where available. Which one do you think is overpriced? Which one is a better buy?

It doesn't matter what the GTX 670 brings to the table if it's going to be the same story as the GTX 680: unavailable, and where it is available, it costs $100 more.
 
1. All the reviews agreed that the 680 "performs better", so that means the 7970 "performs worse".
2. You're right from the point of view of percentage over stock, but not many 7970s can do 1300 MHz on air (maybe I'm wrong?).
3. A 7970 at 1200 MHz uses a little under 100 W more than a 680 at the same clocks.
4. Right.

There's nothing wrong with the 7970, but it is still overpriced. Wait and see what the 670 will bring. (Unfortunately availability will be the main question.)

1. Depends on what you are looking for. Once you OC, the 7970 smokes the 680, and you gotta admit most people who drop this kind of coin on a GPU tend to OC. I mean, if you want "plug and play", sure, the 680 is better. If you are an enthusiast, the 7970 is a better choice.
2. See W1zz's review.
3. How's that 680 look at idle compared to the 7970? ;)
 