
AMD Radeon R9 290X 4 GB

Trust me, mate... that's why they call me Elder, or "opa," on Kaskus :laugh:



TDP is a value constrained directly by the VRM design :)

Let me elaborate...


Titan and GTX 780 only had six Foxconn-made chokes with a single-channel driver + MOSFET each, albeit R22-rated at 35A apiece under OC (operating conditions) of 90°C with 10-15% tolerance. That translates to roughly 200W deliverable; add the 75W of PCIe slot power and you have a fair 275W.

While R290X...


Had six Coiltronics-made chokes with a dual-channel driver + MOSFET each, R15-rated at 50A apiece under OC (operating conditions) of 110°C with 5-10% tolerance. That translates to 300W deliverable; add the 75W of PCIe slot power and you have 375W :)
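As a rough back-of-the-envelope sketch of where those wattages come from (assuming a core voltage around 1.0 V; the per-phase current figures are the ones quoted in this discussion, and actual Vcore varies with load and clocks):

```python
# Rough per-phase VRM output estimate. The 1.0 V core voltage and the
# per-phase current ratings come from the discussion above; they are
# assumptions for illustration, not measured values.
def vrm_output_watts(phases: int, amps_per_phase: float, vcore: float = 1.0) -> float:
    """Total deliverable power = phases * current per phase * core voltage."""
    return phases * amps_per_phase * vcore

titan_780 = vrm_output_watts(6, 35)  # 6 phases at 35 A -> about 200 W
r9_290x = vrm_output_watts(6, 50)    # 6 phases at 50 A -> about 300 W
print(titan_780, r9_290x)
```

The claimed 200W vs. 300W figures fall out of this simple phases-times-current product; the real numbers depend on the actual core voltage and derating.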



It's just the nature of the internet :laugh:
You can always mind them, though; basically they have no valid grounds to make such statements. It's hilarious to see anyone debating an enthusiast card when he himself has never touched, tried, tested, or even owned one :laugh:
Just like debating that the Viper is ridiculously inefficient, the Beemer is made of utterly crap materials, and a big-displacement Mercedes can't run faster than yo mama riding a wheelchair, while he himself only owns a Prius :laugh:

Dude, I just checked the spec sheet for the coils and they're rated for 70A at 125°C. Just search "Coiltronics 1007R3" and the first Cooper Bussmann PDF contains the specs. Also, those MOSFETs exist in only two variants that I know of: one rated for 70A, and the ones the Vapor-X 7970/50/R9 280X uses, which are rated 60A. So the VRM has nothing to do with TDP; TDP is the expected power draw based on in-lab tests.

Now, if you look at the VRM closely, you can see it is in fact a 5+1+1 design capable of pushing 350A on the core voltage line without degradation (that happens only if you exceed the source-drain voltage or source-drain current spec), and it is capable of pushing another 40A on the VTT rail. So basically this VRM is one of the best designs on the market in terms of raw power output. It should also have low noise because it is a 1-PWM-to-1-phase design; what it does lack is efficiency, as the components are running close to the maximum of their spec.
 
The discussions of board power and VRM output are missing the question of the OP. The OP (and I) wonder why W1zzard uses conflicting numbers. W1zzard says that the same configuration of power cables can supply different amounts of power (i.e. 375W on an AMD card versus 300W on an NVidia card for 6+8pin PCIe power connectors). This has nothing to do with how much power the VRM can output, just how much power can be input to the VRM.

In these reviews W1zzard always quotes the amount of power that can be input without violating the PCIe specification. So the question is: since both cards conform to the same PCIe power specifications, why can AMD draw 75W more than NVidia without violating the specification?
 
Good review as always, but I'm curious about something.

From 290X and 280X review :


From 270X review :


From 780 and TITAN review :


From 760 review :


Is there any difference between AMD's and NVIDIA's power connector configuration and specification? Or does the difference come from the PCI-E 3.0 specification?

I just fail at math.

PCIe slot = 75W
6-pin = 75W
8-pin = 150W
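Those per-connector limits are the numbers the review power tables add up. A minimal sketch of the arithmetic (the connector labels are just illustrative names):

```python
# Spec limit per power input, in watts, as listed above.
CONNECTOR_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_limit(extra_connectors):
    """PCIe slot budget plus the listed external connectors."""
    return CONNECTOR_W["slot"] + sum(CONNECTOR_W[c] for c in extra_connectors)

print(board_power_limit(["6-pin", "8-pin"]))  # 75 + 75 + 150 = 300 W
print(board_power_limit(["6-pin", "6-pin"]))  # 75 + 75 + 75  = 225 W
```

By this arithmetic a 6+8-pin card tops out at 300W per spec, which is exactly why the 375W figure in some reviews raises the question in the first place.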
 
The OP (and I) wonder why W1zzard uses conflicting numbers. […] So the question is that since both cards conform to the same PCIe power specifications, why can AMD draw 75W more than NVidia without violating the specification?

I can't find where he states the TDP of the card, but it only goes out of spec by 15W, which I don't think matters much. Also, the only difference between a 6-pin and an 8-pin is that the 8-pin has two extra ground wires, so it really doesn't matter that the card pulls around 1.25A more than permitted on a 6+8-pin configuration. The limit exists to make sure your power cables don't melt or catch fire, which they won't if the draw is only beyond limits for a second or two, though it may trigger OCP on some PSUs.
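To put that overdraw in perspective, here is a small sketch (assuming 12 V rails and three live 12 V conductors per plug; the conductor count is an assumption about typical cable builds, not something taken from a spec sheet here):

```python
# Extra current per 12 V wire implied by drawing 375 W on a configuration
# rated for 300 W (6-pin + 8-pin). Three live 12 V conductors per plug is
# an assumption about the cable build.
V_RAIL = 12.0
LIVE_WIRES = 3 + 3                 # 6-pin + 8-pin 12 V conductors
excess_w = 375 - 300               # watts beyond the connector spec
extra_a_per_wire = excess_w / V_RAIL / LIVE_WIRES
print(round(extra_a_per_wire, 2))  # ~1.04 A extra per wire
```

That is roughly the "around 1.25A more" order of magnitude mentioned above: a small per-wire overdraw, well below the point where cables heat dangerously, though sustained excess can still trip OCP.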
 
So now people are saying, "Well, I could spend an extra $50-100 on a GTX 780 and it will blow away an R290X if I get one that's non-reference." Well, *LIGHTBULB*, what about non-reference R290X's? Like a Lightning from MSI, or something from Gigabyte or ASUS. C'mon... I've never seen so many NVidia-biased responses in a video card review in a long time... this is silly business.

+1
This is exactly what I've been saying all along; only people choose to ignore it.
 
Oh, you showed him this picture. My plan was to humiliate the Titan/780 later on for their poor VRM design. In short, flashy cover aside, the board and chip of the Titan/780 are far inferior to the 290X's counterparts. No surprise the 290X has been breaking records under LN2; it is truly a beast.

I didn't say the Titan/GTX 780 had a bad VRM; their design suits their chip. Needless to say, you're correct... aside from that, let's finish this never-ending bashing, shall we? :)

Makes sense; I understand now, thank you.
But I'm still curious why W1zzard didn't write about how this difference in VRM design can cause a different maximum power draw despite both cards using exactly the same power connector configuration.
When I first saw this I thought it was a typo, but after reading all the R9 reviews (280X, 270X, and now 290X) I assume there must be something going on.

From the TPU picture I see that the 290X is using R23, not R15 (R15 is the 7970).
Is there any difference between R23 and R15?
All I can find is this datasheet : http://www.cooperindustries.com/con...-datasheets/Bus_Elx_DS_4341_FP1007_Series.pdf

You might PM W1zzard about that :laugh:

It's R15 (1007R3-R15) on the main phases; the R23 you saw is on a split plane used for other loads, such as the memory banks and memory controller. Basically there's no difference between R15 and R23, except that R23 is more sensitive (with delta A) and feeds voltage more accurately, but it is also capable of an "instant spike / passthrough" if the chip needs it.

Dude I just checked the spec sheet for the coils and their rated for 70A at 125c […] what it does lack is efficiency as the components are running close to the maximum of their spec.

About the spec sheet: there's no guarantee that translates to raw power. Notice they're still dual channel (dual low-RDS(on) FETs in a single package plus one bridge between them). I don't know how they're controlled, but obviously a board maker wouldn't let its chokes run at their maximum spec all the time. I've read from jonny that coil makers like Renesas, Foxconn Magic, and Sanyo (toroidal, composite, ferrite, duralumin coils) test a specific part for only a small amount of time, such as during a sudden spike, so the spec sheet serves only as guidance, not as their OC (operational condition).
 
+1
This is exactly what i've been saying all along, only people choose to ignore it.

We're not ignoring it because we're "NVidia zealots"; you're just not seeing the full picture because you assume that NVidia is doing nothing.

First, I want to point out something W1zzard said:

so far, everybody who I talked to, said that AMD doesn't allowed custom designs for 290X. This will probably change soon, maybe AMD just wants to sell more of their ref boards asap

There's no proof that vendors don't have custom designs ready today. The evidence indicates that AMD is not letting vendors release custom designs even if they do have them. When that restriction will lift is up for question, but as of now AMD has released no time frame. So to say when these custom designs will be coming is pure speculation.

What is known is that in three weeks NVidia will release the GTX 780 Ti; there's no debating that time frame. It's also hard to argue that the 780Ti won't change NVidia's pricing structure of their lineup driving down the price of the GTX 780.

Here's a more realistic scenario for those championing custom cooled R9 290X's

When custom cooled R9 290X's come about, there will have been a price cut for custom GTX 780's. We will be in the same competitive situation all over again. You will be able to get either the R9 290X or the GTX 780 for the same amount of money. The comparison will be whether you want the more powerful R9 290X and are willing to put up with the extra heat and noise or if you want the slower GTX 780 with much less heat and noise.

The R9 290X seems to be difficult to cool, so there might even be a situation where vendors want to spend extra money on the coolers of their R9 290X's to make them faster and they end up more expensive than the custom GTX 780's, which will make the situation even more confusing.

What I see personally happening is a segmentation of the R9 290X market. There will be R9 290X's with about the same cooling performance as the reference cooler, and they will be priced to compete with GTX 780's. Then there will be R9 290X's with extravagant heatsinks that can extract extra performance, and they will be priced to compete with the GTX 780Ti. This is sort of like what AMD did with the 7970 vs. the 7970 GHz edition, since you could get GHz edition performance from the regular 7970 just by improving its cooler and applying an overclock.
 
We're not ignoring it because we're "NVidia zealots"; you're just not seeing the full picture because you assume that NVidia is doing nothing. […]

You nailed it; exactly what I wanted to say as well. Why is it that people who are breathing logic and sense into this discussion are suddenly labeled "NVidia zealots", while the opposing side resorts to name-calling and personal insults?
 
We're not ignoring it because we're "NVidia zealots"; you're just not seeing the full picture because you assume that NVidia is doing nothing. […]

There will soon be another card in the mix, and it could be the most interesting: the 290. At a presumed $450 price point and 780-level performance, it could cause some real pain for NVidia's lineup.
 
Can you even read what you're posting? R290X won 6/8 of the single card benchmarks you posted....blah verbal blah diarrhea blah....
More to the point, I'm reading what others are posting. My point was aimed at the hyperbole - MASSACRE - really?
Depends on your terms of reference, I suppose. My definition would be a substantial leap over the previous architecture. The 9700 Pro over the GF4 Ti? Yes. The 8800 GTX over the X1950 XTX? Yes. The 290X over the GTX 780* by a few fps per game? Not really. It would be a different matter if one card were limited to gameplay without AA while the other could use FSAA.

The fact that there isn't that much real world gameplay difference between the cards is a leading factor in why this thread is so long.

* If a single digit performance lead qualifies as a MASSACRE, I'm quite surprised that a 20% performance lead for the GTX 780 over AMD's single GPU champ didn't warrant an even more hyperbolic response from you or your sidekick.
 
Question:

I plan on getting 2x 290X for my 120 Hz 1440p panel, but I'm not sure whether there will be modded versions of the 290X in a decent time frame. I plan on H2O for both GPUs, so: buy now, or wait for non-reference boards to hit the shelves?
 
We're not ignoring it because we're "NVidia zealots"; you're just not seeing the full picture because you assume that NVidia is doing nothing.

I'm not assuming that; it is obvious NVidia has an answer. It's just that the answer is not going to change things from what they are now, meaning the 290X will still be the faster one.

It's my opinion, but I think it's clearly obvious that non-reference versions of the 290X are going to be fast enough to compete with anything NVidia gets out of the gates.

If I'm wrong about that, well, I'm not perfect, but I'm not going to start criticizing the shit out of NVidia for not being able to get past the 290X.

That still leaves one question to answer: R9 290 performance, which I guess will be equal to the 780.

Let AMD have their time of glory, guys. NVidia had it; it's now AMD's turn to be king of the hill, and they're lowering prices at the same time. How can that be bad??? lol.....

Here's a more realistic scenario for those championing custom cooled R9 290X's […]

That's also pure speculation.
 
There will be soon another card in the mix and it could be the most interesting. The 290. At a presumably 450$ price point and 780 performance it could cause some real pain for NVidia's lineup.

I agree to a point, but AMD is filling in the slots in NVidia's lineup. I doubt the GTX 770 will be able to compete with the R9 290, but then again it doesn't have to for $50 less. I'm skeptical that the R9 290's performance will be that close to the GTX 780 at stock speeds. I think it will be lower performance than the GTX 780 in stock form because AMD will limit its power to 225W (2 x 6-pin PCIe) in order to appeal to a broader market.

However, the R9 290 will be a very good value for overclockers, because the Hawaii chip is obviously power-limited. The R9 290 will presumably have the same cooler and board as the R9 290X but a lower heat output; therefore, if you just crank the R9 290's power limit up to that of the R9 290X, it should reach about the same performance.
 
You were the one who postulated a before-the-Titan-was-released situation. You can granularize your logic after the fact to suit your needs as much as you want, but it really just means no one will want to talk to you, because you constantly rescope the debate. Enjoy talking to yourself.

You looking for some cookies?
 
It's my opinion but i think it's clearly obvious than non reference versions of 290x are going to be fast enough to compete with anything nvidia gets out of the gates.

I don't disagree with you on that. What I disagree with is the idea that the non-reference versions of the R9 290X that compete with the GTX 780Ti will be any less expensive than the GTX 780Ti. If you want greater performance than the current R9 290X, you need a 375W card. That is strictly custom-card territory (EVGA Classified, Galaxy HOF, Gigabyte SOC, etc.), and those cards carry premiums over reference cards.

Let amd have their time of glory guys, nvidia had it, it's now amd turn to be king of the hill, and lowering the prices at the same time, how can that be bad??? lol.....

Lowering prices is great, but just because AMD's R9 290X forced NVidia into a position to lower prices doesn't mean that I'm now obligated to buy a R9 290X to thank AMD.

I have no manufacturer preference assuming features are the same. Furthermore, I don't believe in this whole "glory" or "halo card" thing, and I don't think many people in this forum do either. I buy the product that fits my needs no matter who the manufacturer is. This "glory" is all marketing and nothing else; just because someone has the best card in the world doesn't mean that every product in their entire lineup is good (and more importantly, well priced).
 
More to the point, I LUUUV Nvidia coz it costs moar -- it must be ghud, just like da Apples...brb, gotta knock one out over dat Titan...waah waaah I'm a condescending zealot in denial, please help meh...

My definition would be a substantial leap over the previous architecture. 9700 Pro over the GF4 Ti ? Yes!, 8800GTX over X1950XTX? Yes. 290X over GTX 780* by a few fps per game? Not really. It would be a different matter if I could use some logic here...

The fact that this isn't an Nvidia card review is a leading factor in why I pooped in this thread for so long.

* If a single digit performance lead qualifies as a MASSACRE, I'm quite surprised that a 20% performance lead for the GTX 780 over AMD's single GPU champ over a year later has my green-coloured Jen-Hsun boxers in a twist.

Nice oversight/comprehension fail yet again.


Nvidia GF4 to Radeon 9000 = different generations of GPUs + API change (DX7/8 to DX9)

ATI X1900 to 8000 = different generation of GPUs yet again + API change again (DX9 to DX10)

This gen (the R200 series) is a direct response to NVidia's 700 series re-brand, which launched first, almost a year and a half after its "competition," the 7970, with a mediocre 20%-30% improvement and absolutely retarded pricing. It is not the game-changing start of a new generation like the GCN-based 7970 was (which, by the way, cost only a few dozen to a hundred dollars more than the old GTX 580 it replaced, and was much faster): no new architecture, not even a die shrink, and still no new API.

It's bad enough that you're making stupid, nonsensical posts, blindly defending your favourite brand without understanding what you've posted, but for you to then be condescending to the people explaining why you're wrong is moronic, to say the least.
 
I don't really care about the heat and the power the new card uses; I just want performance for the money.
I live in Denmark, and here the new R9-290X can be bought for 3999 DKK, which is about 740 dollars incl. VAT and shipping. That's not really that bad. The GTX Titan costs 7194 DKK / 1331 dollars, and the GTX 780 is at 4917 DKK / 910 dollars.

Does it get hot? Yeah. Does it make a lot of noise? Yeah. But again, who cares? I don't, because I'm just waiting for an aftermarket cooler from Arctic Cooling and some memory/MOSFET cooling from Enzotech in pure copper; I'll put that on the card and I'm a happy camper again :).
If that's not enough, I have a complete water-cooling set just lying around, and all I have to invest in is a full block from EK. Then I have no problems with heat or noise. So who cares about the warranty? If it breaks I will buy a new one; how hard is that?

I have had 2x 5850 in CrossFire since they came out, and they can still hold their own in the games I play (1920x1200) on a 26" Iiyama screen, and I haven't really seen anything until now that could convince me to upgrade.

I'm going to try out Eyefinity with the new card on 3 screens, so the resolution will be high, and that's where this card is going to shine and show all its power. My old cards will go into my husband's computer and we will both be fine. He can play his small games while I take on BF4 (can't wait) and AC4 Black Flag and so on, and I will be happy gaming for the next 2 years.

I'm sorry if I haven't spelled everything in correct English, but I do my best.
 
This gen (R200 series) is a direct response to Nvidia 700 series re-brand which launched first, almost a year and a half after its "competition" that is the 7970 (with a mediocre 20%-30% improvement and absolutely retarded pricing), not a game-changing start of a new generation (GCN 7970, which by the way, only was a few dozen to a hundred dollars more than the old GTX 580 it was replacing when it came out -- and was much faster) architecture/not even a die shrink, and with no new API still.

I'm not agreeing with the OP, but your memory of the GTX 580/7970 comparison (at launch) is incorrect:

GTX 580: $500
HD 7970: $550
Price increase: 10%
7970 Performance Advantage (TPU Link): 10% at 1920x1200.

If you go by price/performance, the 7970 did not improve on the GTX 580. It performed better and was priced higher by an equal percentage. It was not a great deal, and it certainly did not "change the game".
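A one-liner makes the price/performance point concrete (the 100 and 110 index values are just the relative-performance percentages quoted above, with the GTX 580 as the baseline):

```python
def perf_per_dollar(perf_index: float, price: float) -> float:
    """Relative performance index divided by launch price."""
    return perf_index / price

# GTX 580 baseline vs. HD 7970 at launch, per the figures above:
# +10% performance at +10% price leaves perf/$ unchanged.
print(perf_per_dollar(100, 500))  # GTX 580
print(perf_per_dollar(110, 550))  # HD 7970 -- same value per dollar
```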

I think everyone needs to realize that whether it is AMD or NVidia, both companies are opportunistic when launching new cards that are faster than anything their competitor can offer. There is no "good" or "bad" company. When AMD clearly had the high end with the 7970, it didn't price the card to be especially competitive with NVidia; similarly, when NVidia launched the 780, it had the high end and did not price the card to be competitive with AMD. The R9 290X is not in this category; it couldn't cleanly beat Titan, so AMD priced it aggressively instead.
 
I have no manufacturer preference assuming features are the same. Furthermore, I don't believe in this whole "glory" or "halo card" thing, and I don't think many people in this forum do either. I buy the product that fits my needs no matter who the manufacturer is. This "glory" is all marketing and nothing else; just because someone has the best card in the world doesn't mean that every product in their entire lineup is good (and more importantly, well priced).

Well, I've had more GeForces than Radeons, but what I hate NVidia for is that they seem to be the greedier company when it comes to pricing. Come on, $1000, really? $650?
When AMD had the chance to do that, they priced at $550. But hey, that is not the point of the thread, and I'm not putting a revolver to your head and saying "buy a 290X". I don't like the reference card either, but I'm not going to criticize the shit out of it like it's some piece of garbage that couldn't have been out 9 months earlier..... Everyone has been in that position, be it NVidia or AMD...
 
Because the Titan is not in direct competition with the 290X. With a portion of Titan users buying the card for company/workstation use, the Titan can be seen as a "best of both worlds" card, between a gaming card and a professional card (which costs significantly more than gaming cards).

Another factor is that the Titan, and to that extent the GTX 780, is much better in terms of noise, temperature, and efficiency. From a technical point of view, it is much more difficult to design a product that does well on all criteria than one focused on a single criterion. In this case, NVidia had to design a graphics card that kept noise, temperature, and power consumption well under control while trying to maximize performance; do you understand that this is a lot harder to achieve than brute-force performance? I would imagine the R&D cost was much higher too, which is reflected in the premium. To give an example: I'm an architectural designer. If I were asked to design a very cheap building or a very efficient building, it would be relatively easy, but if I were asked to design a building that is cheap, elegant, and energy efficient all at once, I would probably charge a hell of a lot more to come up with the design. Do you get my drift?

I also stated that the 290X has just released, while the Titan/780 has been out for half a year, and NVidia will be adjusting their pricing very soon. It's too early to be calling "massacres" and "obliterations" at this point.

What bugs me (or scares me) more than anything at this point is that, with the release of the 290X, many of the NVidia naysayers are suddenly out of the forest creating this imaginary Titan/780 hybrid: a card with the price tag of the Titan but the performance of the 780. Alongside it they create another imaginary card with the performance of the 290X in uber mode and the noise/temps of quiet mode, then selectively pit these two imaginary cards against each other to further their agenda. Don't get me wrong, I'm not taking sides here, but I would like to get the facts straight and see some consistency in their arguments.

Really? Why would the top of the line single GPU card of AMD not be competing against the top of the line single GPU of nVidia if it trades blows evenly with it?

I fail to see the logic in that, dude!

Your second point, however, does make a lot of sense (at least the part I highlighted): gaming cards are WAY cheaper than professional cards. Even so, this does NOT negate my statement above.
 
Nvidia GF4 to Radeon 9000 = different generations of GPUs + API change (DX7/8 to DX9)
Makes no difference to the market. The GF4 Ti was Nvidia's line of cards when the 9700 Pro debuted. You run what you brung.
ATI X1900 to 8000 = different generation of GPUs yet again + API change again (DX9 to DX10)
Same argument... and DirectX 10? Yeah, that made all the difference :shadedshu. Number of DirectX 10 games at the 8800 GTX launch: TWO (Dungeons & Dragons Online :rolleyes: and FSX).
This gen (R200 series) is a direct response to Nvidia 700 series re-brand
Ah! Didn't you just say that differences between GPUs are negated by the DirectX version? I seem to recall that one of these cards is DX11.2 compliant and the other DX11.0. Let me guess: your argument negates performance revisions within a DX version.

It's bad enough that you're making stupid, nonsensical posts.
Hey, I've just read your devotion to the logical fallacy. We all have our cross to bear, so don't go full-emo just yet.
 
I'm not agreeing with the OP, but your memory of the GTX 580/7970 comparison (at launch) is incorrect […] 7970 Performance Advantage (TPU Link): 10% at 1920x1200. […] It was not a great deal, and it certainly did not "change the game".

The 7970 at launch had only a 10% lead due to driver problems, but now the gap is at least 40%. AMD cards usually take time to mature; they are beasts when they reach their prime.
 
7970 at launch only had 10% lead due to the driver problem, but now the gap is at least 40%. AMD cards usually take time to mature, and they are beasts when they reach their prime.

According to W1zzard's graphs, and assuming I'm not screwing up my math, the difference went from 8.6% to 20.9%: a huge increase, yes, but not 40%.
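For reference, this is how such a gap percentage falls out of two relative-performance numbers. The index values below are hypothetical, purely to show the arithmetic:

```python
def lead_percent(a: float, b: float) -> float:
    """Relative lead of card a over card b, in percent."""
    return (a / b - 1.0) * 100.0

# Hypothetical performance indices illustrating a lead growing as drivers
# mature: the same baseline, a rising index for the other card.
print(round(lead_percent(108.6, 100.0), 1))  # 8.6
print(round(lead_percent(120.9, 100.0), 1))  # 20.9
```

The point of the correction above is that a lead going from 8.6% to 20.9% is a big relative improvement, but it is not the same thing as a 40% lead.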
 