
AMD Radeon R9 290X 4 GB

This cracked me up. :laugh:


Dammit that's funny!

Nice bit of CGI work there with the graphics card, too. Notice how they inverted it, so it's going left to right instead.

NVIDIA's GK110 graphics processor was first introduced as a Tesla-only product to power demanding GPU compute applications. NVIDIA has now released it as a GeForce GPU too. It uses 7.1 billion transistors on a die size that we measured to be 561 mm².

AMD's Hawaii graphics processor uses the GCN shader architecture. It is produced on a 28 nm process at TSMC Taiwan, with 4.31 billion transistors on a 438 mm² die.

A 28% smaller die with the same performance (±3%) but 40 W more power: that's the thermal side of the efficiency equation. There is only so much heat copper can dissipate, and because the die is smaller the heat output is more concentrated, so a higher fan speed is required. The shaders are more efficiently utilized in the 290 than in the Titan. I don't know how else to explain it to you. I'm sure if they wanted to create a vapor chamber, alter the cooler, and raise the price to $750 they would have, but they know who is buying the card: enthusiasts with liquid cooling, or overclockers who would rather get a cheaper card and spend the extra $100 on a custom cooler.


If you actually look at the cooler, it's the exact same design as NVIDIA uses, except the Titan has a vapor chamber.


So would you rather buy the cheaper card and be able to customize it, or pay more for the same card as everyone else?

Well, the GPU actually has 6.2 billion transistors, not 4.31 billion, according to the review. That's fewer than GK110 on the same 28 nm process, yet it uses more power. I don't get it. Is the onboard power regulator that inefficient, perhaps? I don't know.

Also, there's no excuse for putting a crap cooler on there. The GTX 770, a lower end card than the 290x, uses the exact same vapour chamber cooler as the Titan, yet doesn't cost the earth.

In short, it looks like AMD designed a GPU with bags of performance (I'm thinking of wizzy's comments about scaling) and then gimped it. Why the hell they'd do this I don't know. They'd make more money if they did it right. Did the beancounters strike, perhaps?

Check out the start of this O3D review and you'll see what I mean about the cooler.

[yt]-lZ3Z6Niir4[/yt]
 
Well, the GPU actually has 6.2 billion transistors, not 4.31 billion, according to the review. That's fewer than GK110 on the same 28 nm process, yet it uses more power. I don't get it. Is the onboard power regulator that inefficient, perhaps? I don't know.
Might be a case of transistor density. Hawaii at 6.2bn transistors and 438 mm² is 14.16 million/mm² against GK110's 7.08bn and 561 mm² (12.62 million/mm²), and that comparison might actually be worse after factoring out the lower-power uncore parts of the chips (I/O, memory controllers, etc.), given the difference in bus sizes and GDDR5 controllers.
Also, there's no excuse for putting a crap cooler on there.
None whatsoever. AMD would have had a slam dunk on their hands if not for reservations over the power dissipation from review sites. As it is they've basically engineered a fault into a product that had no downside. The GPU looks power hungry, and you're still limited to a 2-slot design for OEM contracts, but I'm pretty certain a cooling design house could have come up with a more elegant and effective option. Even if it raised the price by $20-30, it still would have paid for itself in more positive reviews.
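The density arithmetic above is easy to check; a minimal sketch, using the transistor counts and die areas quoted in this thread (figures from the posts, not independently verified):

```python
# Transistor counts and die areas as quoted in this thread.
hawaii = (6.2e9, 438)    # Hawaii: transistors, die area in mm^2
gk110 = (7.08e9, 561)    # GK110: transistors, die area in mm^2

def density_mtx_per_mm2(transistors, area_mm2):
    """Transistor density in millions of transistors per mm^2."""
    return transistors / area_mm2 / 1e6

print(round(density_mtx_per_mm2(*hawaii), 2))  # 14.16
print(round(density_mtx_per_mm2(*gk110), 2))   # 12.62
```

So Hawaii packs roughly 12% more transistors per mm², which fits the hotter, harder-to-cool die.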
 
Might be a case of transistor density. Hawaii at 6.2bn transistors and 438 mm² is 14.16 million/mm² against GK110's 7.08bn and 561 mm² (12.62 million/mm²), and that comparison might actually be worse after factoring out the lower-power uncore parts of the chips (I/O, memory controllers, etc.), given the difference in bus sizes and GDDR5 controllers.

None whatsoever. AMD would have had a slam dunk on their hands if not for reservations over the power dissipation from review sites. As it is they've basically engineered a fault into a product that had no downside. The GPU looks power hungry, and you're still limited to a 2-slot design for OEM contracts, but I'm pretty certain a cooling design house could have come up with a more elegant and effective option. Even if it raised the price by $20-30, it still would have paid for itself in more positive reviews.

Consider that custom AIB-cooled graphics cards on reference boards usually cost only $10 or $20 more, so AMD/ATI really has no excuse for continuing with these crappy coolers.
 
Consider that custom AIB-cooled graphics cards on reference boards usually cost only $10 or $20 more, so AMD/ATI really has no excuse for continuing with these crappy coolers.

How about the crappy VRM circuit of Titan/780 ;)
 
How about the crappy VRM circuit of Titan/780 ;)


Completely unrelated argument you bring up.

Titan and 780 power delivery circuitry is perfectly fine at stock and for moderate overclocks.

We are talking about a stock card that is not reaching its own stock clocks.
 
If I had a ref 290x I would remove the blower casing and mount fans all over the heatsink; maybe that would help out?
 
If I had a ref 290x I would remove the blower casing and mount fans all over the heatsink; maybe that would help out?

If I had a ref 290x, I would remove the blower casing and mount a waterblock over that beyotch and love life :D
 
If I had a ref 290x, I would remove the blower casing and mount a waterblock over that beyotch and love life :D

Well it seems EK only has a few 290x back plates in stock & they are all out of 290x water blocks. :(
 
Well it seems EK only has a few 290x back plates in stock & they are all out of 290x water blocks. :(

That makes me haz a sad :( for now. I'll have to wait before purchasing for $ reason anyway :o
 
Completely unrelated argument you bring up.

Titan and 780 power delivery circuitry is perfectly fine at stock and for moderate overclocks.

We are talking about a stock card that is not reaching its own stock clocks.

Might want to read up on how the clocks work.

AMD specifies a ceiling of up to 1 GHz.

[Image attachment: specs2.jpg]
 
On the chart above, under the memory type / data rate section, it appears the 280x & 270x do up to 6 and 5.6 Gbps compared to the 290x, which is up to 5 Gbps?
 
NVidia is not going to reduce the price on Titan. Titan is a compute card that AMD can't match in DP floating point performance, and the people who can use compute capabilities are the only people who should have been buying it. It was a fluke that Titan ended up being a high end gaming card as well. That compute niche is restored with high end gaming cards like the R9 290X and will further be reinforced with the GTX 780Ti. There still is no DP compute card that can compete with Titan for the price.

Although someone probably already called you out on this: that's 100% BS. Please be silent, little boy, you know nothing of compute. Not only did the 7970 come incredibly close to the Titan's double precision at a 925 MHz core, its single precision was higher; even the 7950 was nearly as powerful at both precisions. It's already known that the R9 290X broke the 5 TFLOPS barrier; I guess you don't grasp how this almost certainly equates to the double precision performance... Please stop wasting all your time on forums pretending you have the slightest clue about GPGPU implementations or architectures. When implemented in real instances, GCN already showed greater real-world floating-point performance than Titan... Completely out of your depth here, kid.

Oh, let's not forget that AMD's OpenCL performance is vastly superior and also provides the fastest unbiased raycasting solution...
 
^ I'm still wondering why people believe that Titan is a compute card :rofl:
Face the truth: only Quadro/Tesla cards benefit from their specific drivers, not the GeForce. Titan has better compute power than the 680, but it will never reach the level of its brothers in the Quadro/Tesla lineup. In fact, when it comes to raw compute power, no single-GPU card can beat the 290x now.

On the chart above, under the memory type / data rate section, it appears the 280x & 270x do up to 6 and 5.6 Gbps compared to the 290x, which is up to 5 Gbps?

It is the data rate of the VRAM. The 290x's default memory clock is 1250 MHz, which translates to 5 Gbps effective. The memory bus of the 290x, however, is 512-bit instead of the 384-bit in the 280x, so the memory bandwidth of the 290x is 320 GB/s.
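The arithmetic behind those bandwidth figures is straightforward; a small sketch, assuming the clocks discussed above (GDDR5 is quad-pumped, so the effective data rate is the memory clock times four):

```python
def gddr5_bandwidth_gb_s(mem_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s for a GDDR5 interface."""
    # GDDR5 is quad-pumped: effective data rate (Gbps) = memory clock x 4.
    data_rate_gbps = mem_clock_mhz * 4 / 1000
    # Bandwidth = per-pin data rate x bus width, divided by 8 bits per byte.
    return data_rate_gbps * bus_width_bits / 8

print(gddr5_bandwidth_gb_s(1250, 512))  # 290x: 320.0 GB/s
print(gddr5_bandwidth_gb_s(1500, 384))  # 280x: 288.0 GB/s
```

So the wider 512-bit bus more than makes up for the slower per-pin rate.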
 
Completely unrelated argument you bring up.
But it does serve to break up a reasoned discussion by rebooting the Green vs Red trolling :rolleyes:
NVidia is not going to reduce the price on Titan. Titan is a compute card that AMD can't match in DP floating point performance, and the people who can use compute capabilities are the only people who should have been buying it. It was a fluke that Titan ended up being a high end gaming card as well. That compute niche is restored with high end gaming cards like the R9 290X and will further be reinforced with the GTX 780Ti. There still is no DP compute card that can compete with Titan for the price.

Although someone probably already called you out on this: that's 100% BS. Please be silent, little boy, you know nothing of compute. Not only did the 7970 come incredibly close to the Titan's double precision at a 925 MHz core, its single precision was higher
The 7970 isn't the R9-290X.
It's already known that the R9 290X broke the 5 TFLOPS barrier
Single precision. I believe the statement you're ranting over specifically mentions double precision
I guess you don't grasp how this almost certainly equates to the double precision performance... Please stop wasting all your time on forums pretending you have the slightest clue about GPGPU implementations or architectures.
I'm guessing the OP knows a great deal more than you do: quelle surprise. The R9-290X has its FP64 rate capped at 1:8 of single precision, probably due to the power demand of FP64 calculation.
[Image attachment: tpu.jpg]


The R9-290X has a lower FP64 value than the HD 7970 (0.7 TFlops vs 1.18 for the 7970)
Maybe you can lay off the insults. They don't further the discussion, and aren't particularly relevant given the actual facts.
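The 0.7 TFLOPS figure follows directly from the 1:8 cap; a quick check, assuming the 290X's 2816 shaders at the 1 GHz boost ceiling and the usual peak-rate convention of 2 FLOPs (one FMA) per shader per clock:

```python
def peak_tflops(shaders, clock_ghz, rate=1.0):
    """Peak throughput: 2 FLOPs (one FMA) per shader per clock,
    scaled by the precision rate (1.0 for FP32, 1/8 for the 290X's FP64)."""
    return shaders * clock_ghz * 2 * rate / 1000

print(round(peak_tflops(2816, 1.0), 2))         # FP32: 5.63 TFLOPS
print(round(peak_tflops(2816, 1.0, 1 / 8), 2))  # FP64: 0.7 TFLOPS
```

That reproduces both the "over 5 TFLOPS" single-precision claim and the 0.7 TFLOPS double-precision figure in the chart.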
 
But it does serve to break up a reasoned discussion by rebooting the Green vs Red trolling :rolleyes:

The 7970 isn't the R9-290X.

Single precision. I believe the statement you're ranting over specifically mentions double precision

I'm guessing the OP knows a great deal more than you do: quelle surprise. The R9-290X has its FP64 rate capped at 1:8 of single precision, probably due to the power demand of FP64 calculation.
http://img.techpowerup.org/131027/tpu.jpg

The R9-290X has a lower FP64 value than the HD 7970 (0.7 TFlops vs 1.18 for the 7970)
Maybe you can lay off the insults. They don't further the discussion, and aren't particularly relevant given the actual facts.

I'm impressed that you kept it together, and more impressed that you backed it up with statistical facts instead of just politely shrugging him off. I think this is a good example of why I've stayed here so long.
 
Here is another warning to not insult others and to be civil.
Infractions handed out.
 
But it does serve to break up a reasoned discussion by rebooting the Green vs Red trolling :rolleyes:

The 7970 isn't the R9-290X.

Single precision. I believe the statement you're ranting over specifically mentions double precision

I'm guessing the OP knows a great deal more than you do: quelle surprise. The R9-290X has its FP64 rate capped at 1:8 of single precision, probably due to the power demand of FP64 calculation.
http://img.techpowerup.org/131027/tpu.jpg

The R9-290X has a lower FP64 value than the HD 7970 (0.7 TFlops vs 1.18 for the 7970)
Maybe you can lay off the insults. They don't further the discussion, and aren't particularly relevant given the actual facts.

It's probably a BIOS lock that could be cracked. FP64 is very demanding and could very well push the card past the thermal capacity of most coolers.

I want to see what happens with water or extreme cooling.
 
It's probably a BIOS lock that could be cracked. FP64 is very demanding and could very well push the card past the thermal capacity of most coolers.
The FirePro version features a 1:4 FP64 rate by all accounts, so it could be possible from that angle, although the FirePro will clock lower to accommodate the overhead.
FWIW, GPU input power on these reference cards is limited to 208 watts by PowerTune ( 300W total with VRAM and efficiency taken into account).
Couldn't find an English-language site with the PowerTune parameters offhand, but here's PCGH's:

PowerTune behaves like this, in this order (the first point is mandatory):
1. Do not supply the GPU with more than 208 watts input.

2. Try to hit 1 GHz, but
3. do not go over 95 °C, and
4. do not go over 40/55 percent PWM duty (quiet/uber mode).

5. If acoustic and thermal limits are hit, reduce clock/voltage until you hit 727 MHz.
6. If acoustic and thermal limits are still hit, ignore the max PWM duty.
7. If nothing helps, shut down when reaching 100 °C for more than 1 second.
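That priority order can be sketched as a decision function; a loose illustration of the rules as PCGH describes them, not AMD's actual firmware logic:

```python
def powertune_step(power_w, temp_c, pwm_pct, clock_mhz, secs_at_100c,
                   uber_mode=False):
    """One decision step following the priority list above (illustrative only)."""
    pwm_cap = 55 if uber_mode else 40           # rule 4: quiet/uber fan duty cap
    if secs_at_100c > 1:
        return "shutdown"                       # rule 7: last resort
    if power_w > 208:
        return "throttle"                       # rule 1: hard input-power limit
    if temp_c > 95 or pwm_pct > pwm_cap:        # rules 3-4 violated
        if clock_mhz > 727:
            return "reduce clock/voltage"       # rule 5: back off toward 727 MHz
        return "ignore pwm cap"                 # rule 6: let the fan pass the cap
    if clock_mhz < 1000:
        return "boost"                          # rule 2: chase the 1 GHz target
    return "hold"

print(powertune_step(200, 94, 38, 940, 0))      # within all limits -> "boost"
```

This also shows why reviewers saw throttling: once the 95 °C and fan-duty limits are both hit, the only moves left are dropping clocks or getting loud.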
 
This thread is turning into epicness and may yet sit in the TPU hall of fame. The similarity between this and the old 2900 XT versus 8800 GTS comparison is uncanny. I think many of you need to stop and pause for a minute, because some seem to have an inability to argue or disagree with any form of maturity. Try to get some perspective on this: it's a GPU, not a life-support system. If I were a guest coming here to get some opinions on a new GPU offering, I would at the very least be bemused at some of the reaction, and at the very worst think I had accidentally stumbled into a kindergarten squabble!

In reality... only time will tell with the 290X. In its present reference form you could argue it is far from perfection; let's check back in 3 months and see how opinion is divided then.
 
This thread is turning into epicness and may yet sit in the TPU hall of fame. The similarity between this and the old 2900 XT versus 8800 GTS comparison is uncanny. I think many of you need to stop and pause for a minute, because some seem to have an inability to argue or disagree with any form of maturity. Try to get some perspective on this: it's a GPU, not a life-support system. If I were a guest coming here to get some opinions on a new GPU offering, I would at the very least be bemused at some of the reaction, and at the very worst think I had accidentally stumbled into a kindergarten squabble!

In reality... only time will tell with the 290X. In its present reference form you could argue it is far from perfection; let's check back in 3 months and see how opinion is divided then.

A well reasoned post. No, that's not right. :shadedshu

For some odd reason, comparisons with new CPUs and GPUs always lead to situations like this. I suspect at least part of the reason is denialism by people defending the underdog, when really it's the underdog because it's simply not as good. Accept it, don't buy the inferior item and move on. Simple. I did that with my gear and I'm very happy with it as a result.
 
Accept it, don't buy the inferior item and move on. Simple. I did that with my gear and I'm very happy with it as a result.

I would hardly call either product inferior...

As soon as everyone realizes that the GTX 780, Titan, and R9 290X are not bad GPUs, the better off everyone will be. The real question is how much are you willing to invest for that experience, not which card is better than the other.
 
I would hardly call either product inferior...

As soon as everyone realizes that the GTX 780, Titan, and R9 290X are not bad GPUs, the better off everyone will be. The real question is how much are you willing to invest for that experience, not which card is better than the other.

The only real letdown with the 290x seems to be that awful cooler; even CrossFire works properly now. Did you check out that overclock3d video I posted above? Tom Logan is absolutely gutted about it, going on ad nauseam, and he's much more expert with these things than I am, plus he has actually tested it. He really wanted it to be an NVIDIA killer.

It's an expensive product and should work properly out of the box. Expecting someone to mod the cooler to fix AMD's shortcomings is completely unreasonable. Logan ended up recommending the MSI GTX 780 Twin Frozr OC Gaming with the custom cooler, because it ran quieter, faster and overclocked like a banshee, while being only £30 more expensive (his words).

I don't know why AMD designs an inherently good product (that GPU really does have a lot to give) along with a decent board, but then hamstrings it like this. It's very frustrating. I want to see AMD kick NVIDIA in the nuts with an all-round excellent product and sell it at a good price. Then we'd get proper competition, better prices, and better products from both sides.

[Image attachment: MSIGTX780.jpg]
 
Price/performance vs Titan? I'm impressed.
Price/performance vs a couple of used 680s (or 670s) in SLI? Not impressed at all.

I said some time ago that Kepler's power/voltage architecture and its boost system make it an extremely friendly and easy product to manage as well as tweak.

Combined with the ~98% scalability of SLI, among other amenities, there's just no good reason to fork over even more money for a new single card that's slower and offers no compensating bonuses.

I'm glad that AMD has come to market with something competitive, but they took too long to produce something that's going to be trumped in half a year.
 