# GeForce Kepler 104 (GK104) Packs 256-bit GDDR5 Memory Bus, 225W TDP



## btarunr (Jan 19, 2012)

NVIDIA GeForce Kepler (GK104) will be the first high-performance GPU NVIDIA launches based on its Kepler architecture. New reports suggest that this GPU, which will succeed the GF114 (on which the likes of the GeForce GTX 560 Ti are based), will retain a 256-bit wide GDDR5 memory interface. An equally recent report suggests that NVIDIA could give the front-line product based on the GK104 as much as 2 GB of memory. According to the INPAI report, the GPU on this GK104-based product will have a TDP of 225W. What's more, NVIDIA is gunning for the performance crown held by the AMD Radeon HD 7900 series with this chip, which suggests that NVIDIA is designing the GK104 to deliver a massive performance improvement over the GF114 it succeeds.





*View at TechPowerUp Main Site*


----------



## Arrakis9 (Jan 19, 2012)

here's hoping for another ATI killer


----------



## eidairaman1 (Jan 19, 2012)

Arrakis+9 said:


> here's hoping for another ATI killer



zzzzz


----------



## Wyverex (Jan 19, 2012)

I'm not sure what to think of this.
If Nvidia's "middle" card were to end up as powerful as AMD's top cards, that would give Nvidia's flagship cards a serious advantage, which would in turn make those cards terribly expensive.
On the other hand, it does bring back memories of the HD 4800 cards lagging behind the GTX 200 series but being super cheap and good buys.

Bring on the competition!


----------



## 1c3d0g (Jan 19, 2012)

So if I'm understanding this correctly, NVIDIA's mid-range GPU is supposed to compete with or even topple ATI's high-end?!? Now this is what I'm talking about!  Let the games begin!


----------



## HTC (Jan 19, 2012)

Arrakis+9 said:


> here's hoping for another ATI killer



I hope not.

Neither ATI nor nVidia can be allowed a significant lead over the other because, if one of them gets one, there will be no price war, which will mean pricier cards for us consumers.


----------



## reverze (Jan 19, 2012)

good news for those few people who still buy Nvidia


----------



## entropy13 (Jan 19, 2012)

Arrakis+9 said:


> here's hoping for another ATI killer



ATI's already dead. Killed by...AMD.


----------



## EastCoasthandle (Jan 19, 2012)

Don't get caught up in the marketing.  The 560 Ti (GF104), a 40nm part, has a TDP of just 170W.  What they are suggesting for the GK104 is a TDP of 225W for a 28nm part!  Something is not right with that, to say the least.  At best, it implies they are trying to overclock a 560 Ti replacement into a mid/high part.


----------



## Shihab (Jan 19, 2012)

reverze said:


> good news for those few people who still buy Nvidia



"few" ?


----------



## entropy13 (Jan 19, 2012)

EastCoasthandle said:


> What they are suggesting for the GK104 is a TDP of 225W for a 28nm part!  Something is not right with that, to say the least.



The GTX 580 (GF110) has a TDP of 244W for a 40nm part. If the GK104 is indeed a challenger for the performance crown (and is therefore matched against the HD 7970), then a slight decrease in TDP (225W vs. 244W) with a performance increase of at least roughly 20% (to make it match AMD's 7970) is "not right"?
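For what it's worth, that argument boils down to a simple perf-per-watt ratio. A minimal sketch using only the rumored figures quoted in this thread (the function name is illustrative, and the 1.20x figure is the poster's estimate, not a measurement):

```python
# Relative efficiency from the rumored numbers in this thread:
# GTX 580 (GF110, 40nm): 244W; GK104 (28nm, rumored): 225W at ~1.20x the performance.

def perf_per_watt_gain(old_tdp_w, new_tdp_w, perf_ratio):
    """How much perf/watt improves: performance ratio divided by TDP ratio."""
    return perf_ratio / (new_tdp_w / old_tdp_w)

gain = perf_per_watt_gain(244, 225, 1.20)
print(f"perf/W improvement: {gain:.2f}x")  # ~1.30x, if the rumors hold
```

So the rumor implies roughly a 30% perf/watt gain over the GF110, which is modest for a full node shrink plus a new architecture.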


----------



## Completely Bonkers (Jan 19, 2012)

GF104 to G*K*104.

Sounds like a revision to me, not a new design. And 225W is enormous. A shrink to 28nm plus a new design should result in similar performance at half the power, i.e. 560 Ti performance at 90W, or a similar power envelope of 150W but double to triple the performance.

Somehow 225W seems all wrong, unless they are aiming at 4x the performance, which I would say is physically and technically impossible unless their transistor count has gone through the roof and this chip is the size of a football pitch! (And essentially SLI on-chip.)
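The shrink arithmetic behind that post can be made explicit. A back-of-the-envelope sketch under idealized scaling assumptions (real 28nm silicon stopped tracking these ideals long ago, so treat these as upper bounds; the function names are illustrative):

```python
# Idealized full-node-shrink arithmetic (40nm -> 28nm), Dennard-style.

def iso_perf_power(old_power_w: float, area_scale: float) -> float:
    """Same chip shrunk: same performance, power roughly scales with area."""
    return old_power_w * area_scale

def iso_power_perf(perf_per_watt_scale: float, power_ratio: float) -> float:
    """Spend the gains on speed instead: perf/W gain times the power-budget ratio."""
    return perf_per_watt_scale * power_ratio

area_scale = (28 / 40) ** 2                       # ~0.49: a full node roughly halves area
print(iso_perf_power(170, area_scale))            # 560 Ti-class performance at ~83W, ideally
print(iso_power_perf(1 / area_scale, 225 / 170))  # ~2.7x the performance at 225W, ideally
```

Under those (optimistic) assumptions, a 225W 28nm part would indeed need to land well above 2x the GF114's performance for the numbers to make sense.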


----------



## Zubasa (Jan 19, 2012)

Enough talk nVidia just release the damn thing.
People can see for themselves, right now its all speculation.


----------



## entropy13 (Jan 19, 2012)

Zubasa said:


> *Enough talk nVidia* just release the damn thing.
> People can see for themselves, right now its all speculation.



Nvidia has actually yet to say anything. But in your second sentence you imply that yourself, stating "its (sic) all speculation." lolwut?


----------



## EastCoasthandle (Jan 19, 2012)

I'm not sure I believe this.  The reason I didn't suggest a higher transistor count is that, if it were true, why is the chip still limited to just a 256-bit memory bus?  If that were the case, and they know the GK100 (or whatever they want to call it) is not ready, then why not go for a memory bus wider than 256-bit? Let's be honest: with a TDP of 225W (55W higher than the GF104), do you think they would care about efficiency at that point? 


To me, going from GF104 to GK104 with a 55W TDP increase while still using a 256-bit bus just doesn't add up.


----------



## entropy13 (Jan 19, 2012)

F**kin' hell, people here are taking "which will succeed GF114" too literally, when all it could mean is that the GF114 was the latest chip they made, and thus it's simple logic that a new chip (GK104) would obviously be "succeeding" its predecessor.


----------



## Steevo (Jan 19, 2012)

Perhaps it's another Fermi make-and-bake oven?


----------



## iLLz (Jan 19, 2012)

Completely Bonkers said:


> GF104 to G*K*104.
> 
> Sounds like a revision to me, not a new design. And 225W is enormous. A shrink to 28nm plus a new design should result in similar performance at half the power, i.e. 560 Ti performance at 90W, or a similar power envelope of 150W but double to triple the performance.
> 
> Somehow 225W seems all wrong, unless they are aiming at 4x the performance, which I would say is physically and technically impossible unless their transistor count has gone through the roof and this chip is the size of a football pitch! (And essentially SLI on-chip.)



The F you bolded = Fermi, and the K you bolded = Kepler, so I do believe this is a new design.  My guess is they kept the numbering scheme similar for easier comparison, which is a welcome move considering past transgressions.  

And if the power numbers are to be believed, then I would expect this card to be very fast.  I thought Nvidia released slides saying Kepler and subsequent generations would double or triple speed.  Maybe they are following through on this.


----------



## EastCoasthandle (Jan 19, 2012)

If this were true and the GK104 were challenging the 7900 series, wouldn't it have been shown or at least mentioned at CES? As far as I know, nothing was said about their next-gen parts.


----------



## Fourstaff (Jan 19, 2012)

Ah, the predicted "but wait, we have something too" response from Nvidia. I wonder how good Kepler is going to be, given the massive power consumption.


----------



## Vancha (Jan 19, 2012)

If that's the TDP despite moving to the 28nm process, there's no excuse for this not being monstrous.


----------



## DarkOCean (Jan 19, 2012)

Isn't a 225W TDP a little scary? The GTX 570 has an official TDP of 215W, and in reality, according to W1zzard's reviews, it's 100W over that!


----------



## Protagonist (Jan 19, 2012)

I'll believe it when I see it. A TDP of 225 on the 28nm GK104? Damn, that sounds wrong compared to 170 on the 40nm GF104/114. It should be lower TDP and more performance, or equal TDP and more performance.

Don't they see what Intel does with their processors?


----------



## phanbuey (Jan 19, 2012)

excited to see this card...  NV cards game so smooth.


----------



## EastCoasthandle (Jan 19, 2012)

Vancha said:


> If that's the TDP despite moving to the 28nm process, there's no excuse for this not being monstrous.



Going from 170w using 40nm to 225w using 28nm does not necessarily mean monstrous performance.  It could be the opposite.


----------



## Vancha (Jan 19, 2012)

EastCoasthandle said:


> Going from 170w using 40nm to 225w using 28nm does not necessarily mean monstrous performance.  It could be the opposite.


I chose my words very carefully.


----------



## punani (Jan 19, 2012)

EastCoasthandle said:


> Going from 170w using 40nm to 225w using 28nm does not necessarily mean monstrous performance.  It could be the opposite.



Could you clarify? 

Higher wattage on a smaller process implies roughly a two-times performance increase, by my understanding.


----------



## devguy (Jan 19, 2012)

If this is true, then how is nVidia going to pull off another dual-GPU card (especially if this isn't even their flagship Kepler)?  Super low clocks?

Even if this outperforms the HD 7970, the HD 7990 will walk all over it (and would also probably outperform the flagship Kepler, if this is not it).


----------



## CrAsHnBuRnXp (Jan 19, 2012)

eidairaman1 said:


> zzzzz



Says the fanboy


----------



## treboRR (Jan 19, 2012)

Completely Bonkers said:


> GF104 to G*K*104.
> 
> 
> Somehow 225W seems all wrong, unless they are aiming at 4x the performance, which I would say is physically and technically impossible unless their transistor count has gone through the roof and this chip is the size of a football pitch! (And essentially SLI on-chip.)



C'mon man! ATI's 7970 has twice the transistor count (2640M vs 4310M transistors, 6970 vs 7970) but only 20% or a little more performance vs the last gen of cards, so think twice! ATI failed to squeeze the performance out of those transistors. So let's hope Nvidia will not fail with its architecture.


----------



## CrAsHnBuRnXp (Jan 19, 2012)

Zubasa said:


> Enough talk nVidia just release the damn thing.
> People can see for themselves, right now its all speculation.



I read somewhere (might have been here on TPU) that they were releasing the cards in March-April.


----------



## Nihilus (Jan 19, 2012)

devguy said:


> If this is true, then how is nVidia going to pull off another dual-GPU card (especially if this isn't even their flagship Kepler)?  Super low clocks?



Yeah, performance is great and all, but what will the GTX 680 pull - 300W?! And the GTX 690?!!


----------



## Steevo (Jan 19, 2012)

punani said:


> Could you clarify?
> 
> Higher wattage on a smaller process implies roughly a two-times performance increase, by my understanding.



Memory cache is a huge thermal wasteland. If they had to increase cache to gain performance at whatever core/shader speed, the accompanying power use has minimal impact on actual performance, as the two don't scale together.

For memory to operate reliably at the same frequencies as the core, it has to be balanced between capacitive energy stored and drained: if it stores too much, it can't run at high speeds because it won't switch fast enough, and if it drains (leaks) too fast, it takes a crapload of power. So each kilobit (1024 bits), plus any ECC if needed, adds to the overall thermal package and eats up valuable real estate on the die. 

They could have a Bulldozer on their hands.




treboRR said:


> C'mon man! ATI's 7970 has twice the transistor count (2640M vs 4310M transistors, 6970 vs 7970) but only 20% or a little more performance vs the last gen of cards, so think twice! ATI failed to squeeze the performance out of those transistors. So let's hope Nvidia will not fail with its architecture.





Obviously you have ignored the overclocking feats these cards are capable of... plus immature drivers due to a whole new design...


It's only 20% at stock, and once you overclock these monsters they eat the competition and all you have left is a green cloud fart.


----------



## Crap Daddy (Jan 19, 2012)

Steevo said:


> It's only 20% at stock, and once you overclock these monsters they eat the competition and all you have left is a green cloud fart.



Interesting way to compare a new arch/generation product on 28nm to a product that's more than a year-old tech. Or do you mean to suggest that this is your opinion of the new, unreleased, but only talked-about NV GPUs?

On another note, everything gets more confusing, and I really don't think we'll see something worthy before April.


----------



## Yellow&Nerdy? (Jan 19, 2012)

They have redone the naming scheme, I think. The TDP is 225W, which is 15W higher than the 7970; unless Nvidia plans on pulling off another Fermi, that should be a good indication of the performance. Quite interesting that the memory bus is 256-bit, carried over from the GF114, not the GF110. Then again, I don't blame Nvidia for building on the GF114 instead of the GF110, because we all remember what a steaming pile the GF100 was.


----------



## Completely Bonkers (Jan 19, 2012)




----------



## Steevo (Jan 19, 2012)

Crap Daddy said:


> Interesting way to compare a new arch/generation product on 28nm to a product that's more than a year-old tech. Or do you mean to suggest that this is your opinion of the new, unreleased, but only talked-about NV GPUs?
> 
> On another note, everything gets more confusing, and I really don't think we'll see something worthy before April.



As things go, ATI/AMD seemingly looked at what the competition was doing and made their own spin on it, so comparing clock for clock against current Nvidia offerings we only see a 5-7% increase in performance; but the fact that they clock so high and so easily adds to my second comment/reply about efficient use of caches and power consumption. 

Given the current thermal output and cooler capability, a 1.1GHz stock clock on the 7970 seems entirely reasonable. But that probably would have created an artificial shortage of cores that met spec. Knowing them, they are probably binning so that when the 7950 comes out they will have enough stock for it and for the next large batches of 7970, giving them more than enough time to build stock of slightly defective cores for midrange and some low-end cards, and also some pristine cores for the dual-GPU variants to combat whatever Kepler is bringing.


----------



## Crap Daddy (Jan 19, 2012)

If we are in the rumor zone I would suggest another read:

http://semiaccurate.com/2012/01/19/nvidia-kepler-vs-amd-gcn-has-a-clear-winner/

The notorious NVidia hater Charlie Demerjian either has gone mad or was hacked. Or he is right.


----------



## gorg_graggel (Jan 19, 2012)

i think the current situation is quite amusing... 

they keep telling people that the gk104 is their performance-class part,
though it's presumed to be within the same performance/power envelope as amd's current offering... which is considered high-end.
rumors say that by q2 the high-end gk110 part will be out, which is gonna be even faster, because that's supposed to be the actual maxed-out kepler with a higher tdp...

it seems to me to be basically the same strategy they used with gf100 and its refresh gf110, just a bit optimized in terms of naming schemes to make people believe their new gen is so good that its cut-down part even beats the competition's high-end part. 

what i think is that they knew they were gonna be late to the party and had to come up with that "story". if they'd not been late it would not be gk104 and gk110, but gk100 and gk110... or even just gk110 named gk100 and no refresh at all...
not that they necessarily changed their plans along the road, but they had learned their lesson from the little disaster the first fermi chip was and how much of a redemption the fermi refresh was afterwards. so they adapted and optimized that process into a more controlled strategy, because it turned out to be very successful...

well anyways, considering tahiti seems to have quite a bit of reserve in its design, amd could also release something along the lines of gk110's tdp and, guess what, close the gap again (or even surpass it). actually there have been rumors that this might just happen, so... 
i think they might not have stopped optimizing the design further after they released tahiti... 

amd shouldn't have booted most of the bulldozer marketing team; they should have swapped them with tahiti's. 
in my perception tahiti's pre-release info seemed kinda modest compared to what nvidia is doing, or what the bulldozer marketing tried to pull off with the "FX" brand (and failed). 
being modest helps you avoid being "bulldozed" (pun intended) by customers disappointed over high expectations, but it makes more people hold off on buying your product initially (although it's very good), because the competition claims theirs will be better, so more people wait for it... high risk, high reward... kinda...

nvidia's marketing seems to know better how enthusiasts' brains work, as lots of people seem to fall for it... 
i know what you did there you dirty, little rascals! 


of course, i have pulled all this out of my arse and don't claim any of it to be true...but i think i might be somewhat thinking in the right direction here...
it's all just a few "elaborated" (actually not) guesses (yeah, more of that!) 

myself being a cheapskate/value-type buyer and underdog lover, amd's strategy caters more to me. so, while i think the 7970 deserves to be priced higher than the former performance leader, i won't ever buy a card at its current price point again (like i did back in the day... sweet gf2 ultra... set me back about 600€/1100 deutsche mark)...
so i kinda hope the new nvidia cards are a bit faster than the amd ones, so those can drop to a price i'm willing to pay (about 350€)...

well, that post branched out more than intended, but well...there you go...


----------



## AsRock (Jan 19, 2012)

Fourstaff said:


> Ah, the predicted "but wait, we have something too" response from Nvidia. I wonder how good Kepler is going to be, given the massive power consumption.



Yes, gotta love that: "we have something better, but wait, it's not ready yet." And if it's not done by Nvidia, it's done by a fan of Nvidia, and the same stuff happens on both sides.

It's just to damage AMD's sales by getting people to hold on to their money a while longer, which, the way I see it, doesn't really matter, as AMD doesn't have enough 7900 cards to supply yet.


----------



## Steevo (Jan 19, 2012)

This round I am willing to pay for performance, but not until I know for sure it's what I want. I might get a green card in my rig. But I love to overclock red stuff.


----------



## Benetanegia (Jan 19, 2012)

Crap Daddy said:


> If we are in the rumor zone I would suggest another read:
> 
> http://semiaccurate.com/2012/01/19/nvidia-kepler-vs-amd-gcn-has-a-clear-winner/
> 
> The notorious NVidia hater Charlie Demerjian either has gone mad or was hacked. Or he is right.



 2012... world... boom


----------



## OneCool (Jan 19, 2012)

Smaller is faster


----------



## overclocking101 (Jan 19, 2012)

It would not surprise me at all if this is true. It would make sense for the first card to be a 560 replacement, so that Nvidia has time to make the higher-end cards better; it's very common. And if the performance is what they say it's going to be, I'll be on the green team once again.


----------



## EarthDog (Jan 19, 2012)

Benetanegia said:


> 2012... world... boom


hahahahaha, I take back everything I said about that lying-ass tool.


----------



## blibba (Jan 19, 2012)

A 256-bit bus means cheap to make.

This card, like the GF104, is designed for price wars. They'll make a high-end behemoth too, don't worry.


----------



## MxPhenom 216 (Jan 19, 2012)

Can't wait to fry my bacon and eggs for breakfast on this card. Sorry Nvidia, but I honestly don't see this card beating the HD7970


----------



## Kaynar (Jan 19, 2012)

treboRR said:


> C'mon man! ATI's 7970 has twice the transistor count (2640M vs 4310M transistors, 6970 vs 7970) but only 20% or a little more performance vs the last gen of cards, so think twice! ATI failed to squeeze the performance out of those transistors. So let's hope Nvidia will not fail with its architecture.



Remember they are using a new architecture on this card, so you can't compare the transistor count as a simple % increase... I'm sure their next cards (i.e. the HD 8000 series) will be a small revision of the HD 7000 and will have another performance bump.

I have an HD 7970; it scores P8380 in the 3DMark 11 standard benchmark, which I believe is 15% better than the GTX 580, and the GTX 580 is another 10% or so better than the HD 6970. I won't be surprised if nVidia's new cards are 5-10% faster than the HD 7970 and cost $100 more.
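Chaining those relative-performance estimates gives the overall gap; a quick sketch using the poster's own figures (estimates from one benchmark, not measurements):

```python
# Compound relative performance: HD 7970 ≈ 1.15x GTX 580, GTX 580 ≈ 1.10x HD 6970.

def chain(*ratios: float) -> float:
    """Multiply a series of relative performance ratios together."""
    total = 1.0
    for r in ratios:
        total *= r
    return total

print(f"HD 7970 vs HD 6970: {chain(1.15, 1.10):.3f}x")  # ~1.265x
```

Ratios compound multiplicatively, so the 15% and 10% steps stack to roughly 26.5% rather than the 25% you would get by simply adding them.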


----------



## blibba (Jan 19, 2012)

nvidiaintelftw said:


> Can't wait to fry my bacon and eggs for breakfast on this card. Sorry Nvidia, but I honestly don't see this card beating the HD7970



I don't think it'll match it for multi-monitor support or idle power. But on out-and-out performance, I don't think Nvidia has the balls to release anything that doesn't beat it right now.


----------



## Casecutter (Jan 19, 2012)

Read it again, it just says "NVIDIA is _gunning_ for the performance crown from AMD Radeon HD 7900 series with this chip".
If it sounds too good to be true... I see this as smoke. 

The GK104 will provide a few "cherry picked" chips that AIBs will again turn into uber-OC "For the Win" units that might encroach into 7970 territory, but the lion's share of offerings is hardly going to hold to that.  

The fight will be against the 7950 at its $400 MSRP; Nvidia will have more OC'd cards that will best a 7950, but probably for more than $400, while minimally equipped reference-clock cards will be matched against a 7950.  But as usual it will come down to price/equipment, and this time almost every AMD AIB will come straight out of the gate with nice coolers and OCs while staying within an MSRP of <$400.


----------



## Kaynar (Jan 19, 2012)

Casecutter said:


> The fight will be against the 7950 at its $400 MSRP; Nvidia will have more OC'd cards that will best a 7950, but probably for more than $400, while minimally equipped reference-clock cards will be matched against a 7950.  But as usual it will come down to price/equipment, and this time almost every AMD AIB will come straight out of the gate with nice coolers and OCs while staying within an MSRP of <$400.



The true fail of AMD is that their best single GPU of the new generation is just 15-20% faster than nVidia's previous-gen cards, which means the HD 7950 will perform the same as the GTX 580 for the same price, and then nVidia will have their new cards 2 months later, which will step all over AMD... yet I chose to buy an HD 7970 for some reason...


----------



## MxPhenom 216 (Jan 19, 2012)

Kaynar said:


> The true fail of AMD is that their best single GPU of the new generation is just 15-20% faster than nVidia's previous-gen cards, which means the HD 7950 will perform the same as the GTX 580 for the same price, and then nVidia will have their new cards 2 months later, which will step all over AMD... yet I chose to buy an HD 7970 for some reason...



I don't see a fail at all with what AMD has released. What do you have out right now, game-wise, that will even use the card's potential? It won't be until a year or two from now, when the next-gen consoles are released, that we'll be closer to seeing what these new cards can really do. Right now we are stuck with console ports at 120fps.


----------



## blibba (Jan 19, 2012)

nvidiaintelftw said:


> I don't see a fail at all with what AMD has released. What do you have out right now, game-wise, that will even use the card's potential? It won't be until a year or two from now, when the next-gen consoles are released, that we'll be closer to seeing what these new cards can really do. Right now we are stuck with console ports at 120fps.



Right, so we might as well play them using the least possible amount of power? I think that's how your answer needs to end for it to be a compelling argument.


----------



## Casecutter (Jan 19, 2012)

Kaynar said:


> The true fail of AMD is that their best single GPU of the new generation is just 15-20% faster than nVidia's previous-gen cards, which means the HD 7950 will perform the same as the GTX 580 for the same price, and then nVidia will have their new cards 2 months later, which will step all over AMD... yet I chose to buy an HD 7970 for some reason...


A little more explanation here... It's not all doom and gloom... I see it as trying to persevere, persisting in working with TSMC on their cutting edge. (In that, it cuts both ways.)
http://www.techpowerup.com/forums/showthread.php?p=2521343#post2521343


----------



## EarthDog (Jan 19, 2012)

The 7950 should beat the 580... the 6950 wasn't close to 20% slower than the 6970.


----------



## St.Alia-Of-The-Knife (Jan 19, 2012)

entropy13 said:


> ATI's already dead. Killed by...AMD.



It's not the end of AMD yet, but we can see it from here


----------



## trt740 (Jan 19, 2012)

Nah, the GTX 580 and 7900 series are not that far apart, and since they have been around a while, the jump from the GTX 580 to these new performance levels seems very doable.  The 7900 series are monster cards, but truly, do you guys think the GTX 580 is a bad deal at around $400.00, and do you really believe they cannot improve on a design that is nearly 2 years old by 20-40 percent?


----------



## xenocide (Jan 19, 2012)

trt740 said:


> but truly, do you guys think the GTX 580 is a bad deal at around $400.00, and do you really believe they cannot improve on a design that is nearly 2 years old by 20-40 percent?



If it were $400 it would be a great deal, but it currently is around $500, with the 7970 in low supply running around $560-580.


----------



## trt740 (Jan 19, 2012)

xenocide said:


> If it were $400 it would be a great deal, but it currently is around $500, with the 7970 in low supply running around $560-580.



Okay, never mind, they upped the price; they were down to $429.00, but not anymore.  The Galaxy 58NLH5HS3PXZ GeForce GTX 580 (Fermi) 1536MB is still a great deal because of the $70.00 cooler, and there is a rebate.  The 7900 card is better if you can live with the noise, but that's the best I have.  Also, finding a 7970 that hasn't had the price jacked up is like finding Bigfoot.  Tiger Direct should be ashamed.


----------



## NC37 (Jan 19, 2012)

EastCoasthandle said:


> Don't get caught up in the marketing.  The 560 Ti (GF104), a 40nm part, has a TDP of just 170W.  What they are suggesting for the GK104 is a TDP of 225W for a 28nm part!  Something is not right with that, to say the least.  At best, it implies they are trying to overclock a 560 Ti replacement into a mid/high part.



The GF104 is the 460. The 114 is the 560.


----------



## Casecutter (Jan 19, 2012)

trt740 said:


> No check again they are as low as  429.00-539.00 and the 7970s are 500.00 to 599.00.


Checking the Egg, the best is $440 - AR $50 for an "ucky" ECS, or a PNY for $478 shipped (right)... there are a few Asus, EVGA or Gigabyte cards for $470-480 after rebates ($20-30), so that's the range for quality offerings.  If the Egg had 7970 stock, you'd need to add 15-20% to those final prices (and that has no rebates).
Summary:
@ 2560x: 15% > GTX 580
@ 1920x: 10% > GTX 580   
(though with titles like Skyrim, STALKER: COP, A&P and Crysis the 7970 shows its legs)  

Now, in about 2 weeks, when more of the AIB customs hit the market, we might get a better feel for the GTX 580 market; so far, with supply this limited, there's no reason for Nvidia to run scared, as I hypothesize.


----------



## EastCoasthandle (Jan 19, 2012)

NC37 said:


> The GF104 is the 460. The 114 is the 560.


It's still based on the GF104, which is what I'm referencing.  Remember, the transistor count didn't change from the GF104 to the GF114.


----------



## Beertintedgoggles (Jan 20, 2012)

Showing up late and not adding anything to the argument, but I always hate when people say you can't compare two cards because one of them is 2-year-old tech... blah, blah, blah.

Right now the fastest single GPU offering you can buy from nVidia = 580
Right now the fastest single GPU offering you can buy from ATI = 7970

Seems pretty damn comparable to me, of course you need to be able to find the 7970 to buy it.


----------



## Fluffmeister (Jan 20, 2012)

And that comparison is absolutely fine; it's just when you start going into details like 40nm vs 28nm, transistor counts and the general age of the architecture that things look less rosy for AMD.

Can nV topple the 7970? You'd have to be pretty naive to think otherwise.


----------



## Beertintedgoggles (Jan 20, 2012)

Might be right and of course then you're comparing nV's current gen vs. AMD's which has been out for months already as well.  I don't care if they are made out of magic fairy dust, the fastest currently out versus the fastest currently out is where the competition should lie.


----------



## ViperXTR (Jan 20, 2012)

Kinda reminds me of the G94 9600 GT, the first 9-series card that came out, a midrange part (though not from a new architecture).


----------



## Fluffmeister (Jan 20, 2012)

Beertintedgoggles said:


> Might be right and of course then you're comparing nV's current gen vs. AMD's which has been out for months already as well.  I don't care if they are made out of magic fairy dust, the fastest currently out versus the fastest currently out is where the competition should lie.



Of course, and equally I couldn't give a toss that AMD's card is out now and nV's is out 6 months down the line. SI on 28nm vs Kepler on 28nm floats my boat more than getting excited over it currently beating that ageing rock star, the GTX 580.


----------



## morphy (Jan 20, 2012)

What gets me excited isn't the stock speeds; it's the overclocking headroom we're seeing from a high-end card, regardless of current or previous gen.  

Maybe Nvidia will have the same kind of OC'ing headroom in their top-end cards, but I doubt it. If anything, Nvidia won't be content with just regaining the performance crown; they want to obliterate the competition, so expect maxed-out clocks.

That said, it almost seems as if AMD is holding back the 7970 on purpose. Expect them to have something up their sleeves too.


----------



## Nihilus (Jan 20, 2012)

*Big Green*

Well, I'm sure Nvidia will have the fastest flagship card again this coming generation.  Of course they will charge whatever they want, and performance/watt will suck compared to ATi.  Same pattern as always.  Hopefully the GTX 660 will be more reasonable/efficient again and keep HD 79xx prices down.  

A 20% performance improvement over the GTX 580 is excellent considering the power used.  Imagine if ATi made a card that used the same power draw as the GTX 580!

Also, nobody avoids a card because it has a low performance/transistor-count ratio.


----------



## Over_Lord (Jan 20, 2012)

This looks like a rushed out part.


----------



## adulaamin (Jan 20, 2012)

I hope it beats the 7970 by a significant margin and they sell it at the same price...


----------



## valio (Jan 20, 2012)

I think the TDP is 225 only because of the two six-pin connectors on the card, which doesn't mean anything, considering the HD 6870 has two of them too and doesn't need 225W at all. This is a new architecture, and we don't know if it will have the same perf/watt or better, considering the slides Nvidia showed some time ago about Tesla, Fermi, Kepler and Maxwell. I'm an Nvidia fan, but what I can say is that AMD has been better in power consumption since the 5xxx series, and it's about time Nvidia solved that problem somehow. I would never choose a GTS 450 over an ATI 5750 because of their power requirements, but I also know they would give me the same performance, and Nvidia has PhysX, which may give me some good effects in games I like and would cost me no more than 2-3 euros/dollars extra every year (in power consumption), so I could also choose the Nvidia one. I believe it's good for us that they try to improve everything about these cards, but someone who wants the best performance and spends 200-500 $ to obtain it will not be interested in a 20-watt difference unless he plays games day and night without interruption (idle differences are much smaller). It all depends on the point of view. I only hope for competition to help lower prices.


----------



## Chappy (Jan 20, 2012)

256-bit? Is this nVidia's mid-range card competing with AMD's high-end? NOT GOOD! That means a thousand-dollar price tag for nVidia's flagship GPU...


----------



## semantics (Jan 20, 2012)

Chappy said:


> 256-bit? Is this nVidia's mid-range card competing with AMD's high-end? NOT GOOD! That means a thousand-dollar price tag for nVidia's flagship GPU...


Maybe they use special GDDR5 memory, probably polished with the tears of fanboys.


----------



## KooKKiK (Jan 20, 2012)

Only 256-bit with GDDR5 memory... couldn't that be a bandwidth limit ???


----------



## gorg_graggel (Jan 20, 2012)

valio said:


> I believe it's good for us that they try to improve everything on these cards, but someone who wants the best performance and spends *$400-500* to get it won't care about a 20 W difference unless he plays games day and night without interruption (idle differences are even smaller). It all depends on the point of view. I only hope competition helps lower prices



fixed...

i think your estimated range was too wide...

this is from a Germany-based perspective, as energy prices are pretty high here...


----------



## Ikaruga (Jan 20, 2012)

I seriously wonder what that picture has to do with anything of this. If that's from a Kepler tech demo, I'm dissapoint:/


----------



## Benetanegia (Jan 20, 2012)

KooKKiK said:


> Only 256 bit and GDDR5 memory could be a bandwidth limit ???



Not necessarily, no. There's a lot of room in memory clocks. In the previous gen Nvidia used ~1000 MHz GDDR5 clocks, while AMD is using 1375 MHz GDDR5. That's a potential 40% improvement right there, and performance is not linearly related to memory bandwidth. A 40% increase in BW could potentially suffice for up to an 80% performance increase before becoming too much of a bottleneck.



Ikaruga said:


> I seriously wonder what that picture has to do with anything of this. If that's from a Kepler tech demo, I'm dissapoint:/



It's the Stonegiant DX11 benchmark, released years (?) ago.


----------



## KooKKiK (Jan 20, 2012)

Benetanegia said:


> Not necessarily, no. There's a lot of room in memory clocks. In the previous gen Nvidia used ~1000 MHz GDDR5 clocks, while AMD is using 1375 MHz GDDR5. That's a potential 40% improvement right there, and performance is not linearly related to memory bandwidth. A 40% increase in BW could potentially suffice for up to an 80% performance increase before becoming too much of a bottleneck.
> 
> 
> 
> It's the Stonegiant DX11 benchmark, released years (?) ago.



I think there's not much headroom in GDDR5 speed, since AMD's Tahiti uses the same memory clock as the previous gen but increases the bus width from 256 to 384 bits.

As for what you mentioned, Nvidia's previous gen used 320- and 384-bit bus widths, not 256-bit like this. That means you'd need to increase the memory clock to somewhere around 1600 - 1800 MHz to compensate for the bandwidth.

1600 - 1800 MHz GDDR5... i mean, WooooooW, that must be some super special GDDR5


----------



## Benetanegia (Jan 20, 2012)

KooKKiK said:


> I think there's not much headroom in GDDR5 speed, since AMD's Tahiti uses the same memory clock as the previous gen but increases the bus width from 256 to 384 bits.
> 
> As for what you mentioned, Nvidia's previous gen used 320- and 384-bit bus widths, not 256-bit like this. That means you'd need to increase the memory clock to somewhere around 1600 - 1800 MHz to compensate for the bandwidth.
> 
> 1600 - 1800 MHz GDDR5... i mean, WooooooW, that must be some super special GDDR5



Yes, with the same GDDR5 AMD went from 256 bits to 384 bits to obtain a 50% increase in memory bandwidth. Nvidia can get almost the same increase just by using the same memory that AMD has been using for two generations now. Simple.

Nvidia used 384 bits on their *high-end chip*; GK104 is NOT high-end. High-end nowadays means GPGPU, and GPGPU requires more bandwidth; that's why GF100/110 had a 384-bit bus, and the same goes for Tahiti. High-end==GPGPU also means you need to leave headroom; it means you cannot make compromises; it means going overkill sometimes. Mid-range means you can make compromises, you can cut corners.

Besides, the GTX560 Ti used a 256-bit bus and 1000 MHz memory, like I said. To match HD7970 performance they need about 50% more performance than the GTX560. They don't need 1600-1800 MHz GDDR5, that's absurd. They don't even need the 40% that 1375 MHz GDDR5 would bring, because GPU perf is not linearly related to memory bandwidth.


----------



## KooKKiK (Jan 20, 2012)

Benetanegia said:


> Yes, with the same GDDR5 AMD went from 256 bits to 384 bits to obtain a 50% increase in memory bandwidth. Nvidia can get almost the same increase just by using the same memory that AMD has been using for two generations now. Simple.
> 
> Nvidia used 384 bits on their *high-end chip*; GK104 is NOT high-end. High-end nowadays means GPGPU, and GPGPU requires more bandwidth; that's why GF100/110 had a 384-bit bus, and the same goes for Tahiti. High-end==GPGPU also means you need to leave headroom; it means you cannot make compromises; it means going overkill sometimes. Mid-range means you can make compromises, you can cut corners.
> 
> Besides, the GTX560 Ti used a 256-bit bus and 1000 MHz memory, like I said. To match HD7970 performance they need about 50% more performance than the GTX560. They don't need 1600-1800 MHz GDDR5, that's absurd. They don't even need the 40% that 1375 MHz GDDR5 would bring, because GPU perf is not linearly related to memory bandwidth.



i know that GPU performance is not linearly related to memory bandwidth.

But, in many cases, insufficient bandwidth can cause a severe reduction in graphics performance. ( ex. HD5670 GDDR3 vs HD5670 GDDR5 )


So you're gonna tell me that HD6970-level bandwidth is enough for HD7970 performance.

Where's the proof ???


----------



## Jonap_1st (Jan 20, 2012)

well then, only time will tell..


----------



## Selene (Jan 20, 2012)

reverze said:


> good news for those few people who still buy Nvidia


lol yea a few~!


----------



## Benetanegia (Jan 20, 2012)

KooKKiK said:


> i know that GPU performance is not linearly related to memory bandwidth.
> 
> But, in many cases, insufficient bandwidth can cause a severe reduction in graphics performance. ( ex. HD5670 GDDR3 vs HD5670 GDDR5 )
> 
> ...



There's no direct proof of that, obviously; however, there are hundreds of examples from other cards that demonstrate that memory bandwidth is not a heavy limiting factor.

First of all you have to understand that the HD7970 did NOT require all the bandwidth it has. It does need more than the HD6970, especially for compute, but it does not strictly need as much as it has. AMD did not have any option other than going 384 bits, because GDDR5 speeds higher than 1400 MHz are not very doable and are very, very expensive anyway. So their only option was a wider bus.

Now:

Evidence #1
The 192-bit GTX460 has 86 GB/s of BW.
The 256-bit GTX460 has 115 GB/s; that's 33% more BW, but the performance difference is not much bigger than 5%.

Another example, GTX 480 vs GTX 570, evidence #2:

GTX 480 has 177 GB/s
GTX 570 has 152 GB/s - and it is slightly faster, despite the 480 having 16% more memory bandwidth.

So is HD7970-level performance possible with HD6970-level bandwidth? Absolutely.

PS: The HD5670 example you posted, GDDR5 vs GDDR3, is about *half the bandwidth*, which is not going to be the case with GK104 at all (if it really is 256-bit anyway). We would be talking about a 33% reduction in bus width but a 40% increase in clocks, for a net bandwidth loss of about 10% compared to the GTX580, a card that itself is probably NOT limited by its memory bandwidth anyway.
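That back-of-the-envelope math is easy to check. A quick sketch (assuming standard GDDR5, which transfers 4 bits per pin per command clock; the 256-bit / 1375 MHz Kepler configuration is of course hypothetical at this point):

```python
# GDDR5 bandwidth: (bus width in bytes) x (command clock) x 4 transfers/clock
def gddr5_bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits / 8 * clock_mhz * 4 / 1000  # GB/s

gtx580 = gddr5_bandwidth_gbs(384, 1002)    # stock GTX 580, ~192 GB/s
kepler = gddr5_bandwidth_gbs(256, 1375)    # hypothetical 256-bit GK104 with AMD-speed memory

print(f"GTX 580:        {gtx580:.1f} GB/s")
print(f"256-bit @ 1375: {kepler:.1f} GB/s")
print(f"deficit: {(1 - kepler / gtx580) * 100:.0f}%")  # ~9% less than the GTX 580
```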


----------



## Ikaruga (Jan 20, 2012)

Benetanegia said:


> It's the Stonegiant DX11 benchmark, released years (?) ago.



Yes I know, that's why I was wondering why would they demo their new tech with that.


----------



## Benetanegia (Jan 20, 2012)

Ikaruga said:


> Yes I know, that's why I was wondering why would they demo their new tech with that.



I think it's just Bta posting a random image because there's no picture of Kepler yet.


----------



## overclocking101 (Jan 20, 2012)

So many nvidia haters! If ati haters went on and on about how something can't be true, we would get infractions for "flamebaiting" etc. (I know, I've had it happen). It just makes little sense to me. If you don't believe it, oh well, so what, who cares??? It's a damn graphics card, not a political debate, for christ's sake.


----------



## Red_Machine (Jan 20, 2012)

It's like Microsoft haters, nobody cares about them.


----------



## KooKKiK (Jan 20, 2012)

Benetanegia said:


> There's no direct proof of that, obviously; however, there are hundreds of examples from other cards that demonstrate that memory bandwidth is not a heavy limiting factor.
> 
> First of all you have to understand that the HD7970 did NOT require all the bandwidth it has. It does need more than the HD6970, especially for compute, but it does not strictly need as much as it has. AMD did not have any option other than going 384 bits, because GDDR5 speeds higher than 1400 MHz are not very doable and are very, very expensive anyway. So their only option was a wider bus.
> 
> ...



You have NO proof, but I have mine.

3DMark 11 score of my GTX580@850 with stock memory BW:

http://3dmark.com/3dm11/2588707

GTX580@850 with HD6970-level BW ( 1835 memory clock ):

http://3dmark.com/3dm11/2588751

nuff said ??? 


ps. i know that in order to bring a GTX580 to HD7970 level in 3dm11, i'd have to push my 580 to almost 1000 on the core clock, but 850 is enough to prove the point.


----------



## gorg_graggel (Jan 20, 2012)

overclocking101 said:


> so many nvidia haters! if ati haters went on and on about how something cant be true we would get infractions for "flaimbaiting" etc (i know I've had it happen). just makes little sense to me, if you dont believe it oh well so what who cares??? its a damn graphics card not a political debate for christs sake



lol, are you serious?

this is one of the most civil discussions on this topic i have seen in a long time...

people are actually discussing and speculating without any name calling or anything...

and yes, it's a damn graphics card, which is being discussed on a tech enthusiast website...what are we supposed to do? talk about donuts?

you sir, are the one who is trying to cause some stir...so either contribute, or get lost...


----------



## Benetanegia (Jan 20, 2012)

KooKKiK said:


> You have NO proof, but I have mine.
> 
> 3DMark 11 score of my GTX580@850 with stock memory BW:
> 
> ...



lol. That's no proof of anything, because you don't have Kepler. So an overclocked GTX580 (10% OC) with a 10% underclock on the memory does 3% slower in 3Dmark 11 than without underclock. Wow!! That so totally proves your point, man... No.

Besides the fact that 3% is thin air, we are not talking about making a card like yours be as fast as HD7970 and what memory bandwidth it needs for that. Things don't work like that. AMD/Nvidia spend months designing and balancing out their architectures and chips to get the most out of them and tweaking internal latencies and such. You taking your card and absolutely destroying that balance with a 10% core overclock and 10% memory underclock means nothing. But please, by all means try again. 

EDIT: At least you proved that AMD and Nvidia do their job and don't just randomly choose the specs of cards, but then again, looking at how the only difference is 3%, maybe you proved the opposite. I just can't decide what you proved yet. In general nothing, other than a GTX580 at 850 MHz...

And to finish. You artificially created a 20% deficit in memory bandwidth and the most you obtained was 3% less performance. Bravo, because like I said earlier Nvidia could create a card with only a 10% deficit, so 1.5% slower? Aww man, horrible bottleneck. AWWWWW!

/sarcasm


----------



## KooKKiK (Jan 20, 2012)

Benetanegia said:


> lol. That's no proof of anything, because you don't have Kepler. So an overclocked GTX580 (10% OC) with a 10% underclock on the memory does 3% slower in 3Dmark 11 than without underclock. Wow!! That so totally proves your point, man... No.
> 
> Besides the fact that 3% is thin air, we are not talking about making a card like yours be as fast as HD7970 and what memory bandwidth it needs for that. Things don't work like that. AMD/Nvidia spend months designing and balancing out their architectures and chips to get the most out of them and tweaking internal latencies and such. You taking your card and absolutely destroying that balance with a 10% core overclock and 10% memory underclock means nothing. But please, by all means try again.
> 
> ...



oh... c'mon, stop all this BS.


my GTX580 is not even close to an HD7970, but it still has a bottleneck.

imagine Kepler or an HD7970 at 6970 BW; it couldn't be any faster than mine, and that's not only 3% for sure.


at first, you told me that high-end gpus have excessive BW, and that it's for gpu computing purposes.



> First of all you have to understand that the HD7970 did NOT require all the bandwidth it has. It does need more than the HD6970, especially for compute, but it does not strictly need as much as it has. AMD did not have any option other than going 384 bits, because GDDR5 speeds higher than 1400 MHz are not very doable and are very, very expensive anyway. So their only option was a wider bus.



then you changed your argument and told me Kepler doesn't manage memory bandwidth the same way as Fermi and SI.



> Besides the fact that 3% is thin air, we are not talking about making a card like yours be as fast as HD7970 and what memory bandwidth it needs for that. Things don't work like that. AMD/Nvidia spend months designing and balancing out their architectures and chips to get the most out of them and tweaking internal latencies and such. You taking your card and absolutely destroying that balance with a 10% core overclock and 10% memory underclock means nothing. But please, by all means try again.



what kind of unreliable person are you ??? 


Try proving something ( at least find me some reference that doesn't come from your own mouth )

*OR stop BSing around here !!!*


----------



## Benetanegia (Jan 20, 2012)

KooKKiK said:


> bla bla bla



Bla, bla, bla. There's a 3% difference between both of your scores, and I'm sure you even went as far as doing many runs and choosing the ones that showed the biggest difference. Don't worry, everyone does that when desperately trying to prove something. Too bad you didn't check what the real difference was. Lame.

And I don't have to prove anything, since I never actually claimed anything. I said that a bottleneck is not warranted, that there's a high chance a bottleneck won't occur, and provided REAL evidence of previous cards NOT being bottlenecked. The one who says there's going to be a bottleneck is you, and the only proof you could provide is a lameass comparison with a 3% difference that could be derived from the margin of error in 3DMark's scoring system or a cat farting down the street. You are not right. Get over it.

EDIT: bah, I decided to be nice and teach you one or two things. Here: http://realworldtech.com/page.cfm?ArticleID=RWT042611035931&p=2



> In most of the cases we analyzed, *2X higher memory bandwidth yielded ~30% better 3DMark Vantage GPU performance.* A good estimate is that performance scales with the cube root of memory bandwidth, as long the memory/computation balance is roughly intact.





> The Radeon HD 3870 and 4670 were the pair we mentioned on the earlier page. The 3870 has *2.13X* the memory bandwidth of the latter, which translates into the *36%* better performance





> In a similar vein, the Radeon 4870 and 4850 achieve *14%* and *27%* higher 3DMark scores over their bandwidth starved cousins



Note: both have 2x or 100% more bandwidth that their "starved cousins". 



> The last example pair is the 335M and 4200M, which show somewhat less benefit from bandwidth. The 335M has nearly *triple the bandwidth* of the 4200M, identical shader throughput, and about *40% higher performance*.
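Taking the article's cube-root rule of thumb at face value, the expected payoff from extra bandwidth is easy to estimate (a rough model only, not a claim about any specific card):

```python
# Rule of thumb from the RWT article: performance scales roughly with the
# cube root of memory bandwidth, as long as the compute/memory balance
# stays roughly intact.
def perf_gain_from_bw(bw_ratio):
    return bw_ratio ** (1 / 3) - 1  # fractional performance gain

print(f"2.00x BW -> ~{perf_gain_from_bw(2.00) * 100:.0f}% faster")  # ~26%; article measured ~30%
print(f"2.13x BW -> ~{perf_gain_from_bw(2.13) * 100:.0f}% faster")  # article measured 36% (HD3870 vs HD4670)
print(f"3.00x BW -> ~{perf_gain_from_bw(3.00) * 100:.0f}% faster")  # article measured ~40% (335M vs 4200M)
```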


----------



## phanbuey (Jan 20, 2012)

KooKKiK said:


> You have NO proof but i have my proof.
> 
> 3dm11 score of my GTX580@850 and stock BW
> 
> ...



Off topic:
Looks like your proc is choking your 580 like crazy - my 570 at 800 MHz gets a higher P score, with a graphics score within 2%.


----------



## ZakkWylde (Jan 21, 2012)

*Some sketchy news about Nvidia's Kepler*

This was on my facebook feed; thought others here might like a look. Some baseless claims of Nvidia dominance.

http://www.maximumpc.com/article/news/longtime_nvidia_critics_says_kepler_clear_winner_against_amds_tahiti_architecture


----------



## OOZMAN (Jan 21, 2012)

Just some silly rumours with no evidence man.


----------



## ZakkWylde (Jan 21, 2012)

My thoughts exactly


----------



## ViperXTR (Jan 21, 2012)

evidenz plz


----------



## Damn_Smooth (Jan 21, 2012)

I hope that they're right. Bring on a price war.


----------



## KooKKiK (Jan 21, 2012)

Benetanegia said:


> Bla, bla, bla. There's a 3% difference between both of your scores, and I'm sure you even went as far as doing many runs and choosing the ones that showed the biggest difference. Don't worry, everyone does that when desperately trying to prove something. Too bad you didn't check what the real difference was. Lame.
> 
> And I don't have to prove anything, since I never actually claimed anything. I said that a bottleneck is not warranted, that there's a high chance a bottleneck won't occur, and provided REAL evidence of previous cards NOT being bottlenecked. The one who says there's going to be a bottleneck is you, and the only proof you could provide is a lameass comparison with a 3% difference that could be derived from the margin of error in 3DMark's scoring system or a cat farting down the street. You are not right. Get over it.
> 
> ...



i didn't see anything in the article that proves your argument.

maybe you should "try again" 


oh, and you said you didn't claim anything ???

then what is this ??? 



> So is HD7970 kind of performance posible with HD6970 kind of bandwidth? Absolutely.




If i had Kepler IN HANDS and benched it right now, i'm sure you'd just make an excuse like "it's only an engineering sample" anyway.


----------



## Benetanegia (Jan 21, 2012)

KooKKiK said:


> i didn't see anything in the article that proves your argument.
> 
> maybe you should "try again"
> 
> ...



You don't see anything in that article that proves my point? Hahahaha. Nice try, but stop trolling.

My point: Kepler might not be memory bandwidth limited, just like countless previous cards that AMD and Nvidia surprised us with, which had much less bandwidth than their predecessors. <-- (stating possibilities/probabilities, without asserting how things are going to be, only how they may be == no claim)
Proof: the article, 8800GTX vs 9800GTX, GTX480 vs GTX570, several cards in the article, and many, many other cards before and after.

Your claim: Kepler *will* be memory bandwidth limited. (stating what will be == claim)
Proof: NONE.
What you think is "proof": your GTX580, which is NOT Kepler by any means or stretch of the imagination, suffers a 3% penalty when you create an artificial 15-20% gap between stock/balanced GPU clocks and memory clocks. That's it: every 20% less memory BW degrades performance by 3% on the GTX580, which is not GK104.

I'm still awaiting your proof. The burden of proof is on your side, as it always has been, and you have ZERO proof so far. Of course you won't have any proof until Kepler is released, but you'll figure it out. 

On the positive side, you are a good troll. Mamma troll is probably proud of you.


----------



## KooKKiK (Jan 21, 2012)

oh boy, mr. slippery 


i proved it the same way as your first argument ( high-end gpus have excessive bandwidth and that's for computing, and even 6970 BW is enough for 7970 performance, bla... bla... )

and then you changed it ( maybe Kepler won't handle bandwidth the same way as Fermi/7970 ) 

WTF !!! 


your article says straight out that when everything else is about the same, a difference in bandwidth mainly affects the overall performance.

( see the 3870 vs 4670 and 335M vs 4200M comparisons )

and 570 vs 480 is not the same case, coz the GTX480 is a partially shader-disabled chip, but NOT on the memory side.

and that difference is only about 10 - 15% ( 9800 vs 8800 too ), NOT as much as when you compare 6970 BW to 7970 BW.


anyway, i'm finished with this; let people see and judge for themselves who's right and who's wrong.


----------



## Benetanegia (Jan 21, 2012)

KooKKiK said:


> i proved it the same way as your first argument ( high-end gpus have excessive bandwidth and that's for computing, and even 6970 BW is enough for 7970 performance, bla... bla... )



You proved nothing. A 3% change from a 20% relative memory change is not something to even take into account. 3% is NOT a bottleneck. I never said BW does not affect performance at all; I said it does not affect it *significantly*. Learn to read and notice the subtle differences.



> and then you changed it ( maybe Kepler won't handle bandwidth the same way as Fermi/7970 )



I didn't change anything. It's all part of the same point. Memory bandwidth does not work the way you think AT ALL. BW is not a wall that the GPU hits and stops at. BW bottlenecking is an efficiency curve, where low values affect performance a lot and higher and higher values have diminishing returns.

And of course different architectures/chips react differently to BW. Thinking it's any different than that is stupid. Example, right in the article I posted: there are 5 cards with 51.2 GB/s of memory bandwidth, and all of them have very different performance:

By brands (more similarities in architecture)

GTS 160 M - 3374
9800M GTS - 3700 (+9%)
9800M GTX - 4123 (+22%)

HD4670 - 2552
HD5830 - 4243 (+66%)



> your article says straight out that when *everything else is about the same*, a difference in bandwidth mainly affects the overall performance.



When everything is the same... when everything else is the same... of course the only remaining factor (memory bandwidth) affects performance. Right. But as the article also points out, the effect is nowhere close to linear. In fact, in many cases it's completely minimal.



> So when everything is the same...



Too bad it's never, ever the same between 2 different chips, even if they have the same or similar specs, as the article shows. And Kepler will definitely be different from Fermi; while still maintaining a lot of similarities in the overall architecture, the chips are going to be very, very different, and as such, yes, it is possible, no, probable, that Kepler won't be bottlenecked by 256 bits. And we don't even know if it has a 256-bit bus anyway; that's why I never claimed anything as fact and always carefully chose my words. It is probable that Kepler won't be bottlenecked, and yes, why not, it is also possible that it will be; but talking probabilities, IF it really is 256-bit, it's far more probable that it won't be affected much, OR IT WOULDN'T BE RELEASED with 256 bits!! Or do you think they randomly choose specs?? pff

EDIT: http://translate.google.es/translat...deon-hd-7970/20/#abschnitt_384_bit_in_spielen

HD7970 with HD6970 bandwidth: 15% slower than a stock HD7970, and still 20% faster than an HD6970. And this is with horrendously high latencies (which is why I said you can't just underclock for comparison; it's not 100% accurate). 1800 MHz memory can certainly run with much lower latencies than 2700 MHz memory.
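For what it's worth, the cube-root rule from the article quoted earlier in the thread lands close to that Computerbase result (a back-of-the-envelope sketch using the stock bandwidth figures of 176 GB/s for the HD6970 and 264 GB/s for the HD7970):

```python
# Predict the slowdown of an HD7970 throttled to HD6970 bandwidth,
# using the cube-root scaling rule of thumb.
hd6970_bw, hd7970_bw = 176.0, 264.0  # GB/s, stock figures

predicted_slowdown = 1 - (hd6970_bw / hd7970_bw) ** (1 / 3)
print(f"predicted: ~{predicted_slowdown * 100:.0f}% slower")  # ~13%, vs ~15% measured
```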


----------



## Casecutter (Jan 23, 2012)

ZakkWylde said:


> This was on my facebook feed; thought others here might like a look. Some baseless claims of Nvidia dominance.
> 
> http://www.maximumpc.com/article/news/longtime_nvidia_critics_says_kepler_clear_winner_against_amds_tahiti_architecture


From the Maximum PC post...
"even Nvidia's mid-range cards will give AMD's high-end GPUs a _run_ for their money"
"claiming Nvidia's mid-range cards will have the _moxie to challenge _AMD's higher end GPUs"

Those I can go along with, but this might be a little strong: "Nvidia is going to '_win this round on just about every metric_' with its Kepler architecture, which will _trump_ AMD's Tahiti 'handily'."

Read what I think we'll see... back when the catchphrase was "NVIDIA is _gunning_ for the performance crown from AMD Radeon HD 7900 series with this chip":
http://www.techpowerup.com/forums/showthread.php?p=2521506#post2521506


----------



## crazyeyesreaper (Jan 23, 2012)

All those articles are just reposts, with edits, of the SemiAccurate article written by Charlie. I'd wait for real numbers before worrying about anything.


----------



## phanbuey (Jan 23, 2012)

wow... the Charlie I had read before literally would start frothing at the keyboard at the mere mention of nvidia.

One of the comments suggested that he was fishing for a content thief... probably true lol.


----------

