Thursday, January 29th 2015
AMD Cashes in On GTX 970 Drama, Cuts R9 290X Price
AMD decided to cash in on the GeForce GTX 970 memory controversy, with a bold move and a cheap (albeit accurate) shot. The company is having its add-in board (AIB) partners lower the pricing of its Radeon R9 290X graphics card, which offers performance comparable to the GTX 970, down to as low as US $299.
And then there's a gentle reminder from AMD to graphics card buyers with $300-ish in their pockets. With AMD, "4 GB means 4 GB." AMD also emphasizes that the R9 290 and R9 290X can fill their 4 GB video memory to the last bit, and feature a 512-bit wide memory interface, which churns up 320 GB/s of memory bandwidth at reference clocks, something the GTX 970 can't achieve, even with its fancy texture compression mojo.
181 Comments on AMD Cashes in On GTX 970 Drama, Cuts R9 290X Price
One card costs $300; converted into my country's currency that is:
$300 = 25,560.06 RSD
25,560.06 RSD × 2 = 51,120.12 RSD
and this is how much one card actually costs in my country:
www.winwin.rs/racunari-i-komponente/racunarske-komponente/graficke-kartice/graficka-kartica-amd-radeon-r9-290x-gigabyte-oc-4gb-ddr5-dvi-hdmi-dp-512bit-gv-r929xoc-4gd-1189519.html
BTW, the price in red is the price if you order it online,
and the green one is the price if you buy it from the shop.
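For what it's worth, the conversion arithmetic above in a few lines; the ~85.2 RSD-per-USD rate is simply what those figures imply and will obviously drift over time:

# Rough USD-to-RSD conversion for one and two cards, using the rate implied
# by the figures above (25,560.06 RSD for $300, i.e. ~85.2 RSD per USD).
# The rate is illustrative only; check a current quote before buying.
USD_TO_RSD = 25560.06 / 300

def usd_to_rsd(usd: float) -> float:
    return usd * USD_TO_RSD

for cards in (1, 2):
    total = usd_to_rsd(300 * cards)
    print(f"{cards} card(s): {total:,.2f} RSD")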
As far as wattage goes, $22 a year still seems like a lot to claim for the power difference between cards. I have three R9 290X cards and game at 4K fairly often when I'm not working, including BF4, LoL (though that only uses one card), and others, and my electric bill has not noticeably gone up compared to my last couple of builds. Then again, I have run dual HD 6990s, two GTX 580s, two GTX 460s, two GTX 295s (dual-PCB variants), and so on, so maybe I am a bad reference point.

Still, while one card can draw significantly more power than another under load, those loads fluctuate constantly, and compared to most other household appliances (even lighting) the numbers are not as large as they sound. There are also a lot more OEM machines with NVidia inside than AMD, which is where many of those figures come from, especially in the mobile market, and when a machine is returned for a problem, even a GPU fault, those cases often never show up in the polls once the manufacturer finds the bad GPU.
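As a rough sanity check on that figure, here is the annual-cost arithmetic; the 100 W delta, five hours of gaming a day, and $0.12/kWh are illustrative assumptions rather than numbers from the thread, but they happen to land close to the $22 mentioned above:

# Annual electricity cost of a power-draw difference between two cards.
# All inputs below are assumptions for illustration only.
watt_delta = 100          # extra watts drawn under load by the hungrier card
hours_per_day = 5         # assumed daily gaming time at that load
price_per_kwh = 0.12      # assumed electricity rate in USD per kWh

kwh_per_year = watt_delta / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, about ${cost_per_year:.2f}/year")  # ~183 kWh, ~$21.90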
GTX 970 has 13 SMMs with a computational power of 3494 GFlop/s (75.76% of GTX 980), with a theoretical memory bandwidth of 196 GB/s vs. 224 GB/s (87.5%). When a GPU accesses a single tiny block of memory, it will never read it at 224 GB/s. A single block will be placed in one of the eight memory chips, each of which is accessible at 28 GB/s (on its own 32-bit bus). Say you have a hypothetical GPU with 32 SMMs and a 512-bit memory bus (total): loading the same single block of memory from one SMM will still be just as slow as on GTX 970. The reason is that the GPU allocates a single block on one memory controller. If you think about it for a moment, you'll realize why every single SMM can't load from memory at 224 GB/s; that would make the GPU extremely complex and defeat the purpose of a GPU processing different data as efficiently as possible.
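For reference, a minimal sketch of where the 28, 196 and 224 GB/s figures come from; the 7 Gbps and 5 Gbps effective GDDR5 data rates are the reference specs of these cards, stated here as an assumption:

# Aggregate GDDR5 bandwidth = effective data rate per pin x bus width / 8.
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gbs(7.0, 32))    # 28  GB/s - one GDDR5 chip on its own 32-bit channel
print(bandwidth_gbs(7.0, 224))   # 196 GB/s - seven chips, the GTX 970's fast 3.5 GB partition
print(bandwidth_gbs(7.0, 256))   # 224 GB/s - eight chips, the GTX 980's full bus
print(bandwidth_gbs(5.0, 512))   # 320 GB/s - the R9 290X figure quoted in the article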
Looking at GTX 970 and GTX 980, simple maths shows that the GTX 970 has more memory bandwidth per GFlop (0.056 vs. 0.049 bytes per FLOP), meaning that in a gaming setting, where each frame time is limited, each SMM has more memory bandwidth at its disposal than on a GTX 980. Even though each GPU can store 4 GB in memory, no game will ever come close to using all of that in a single frame rendering; typical load is generally around 1.5 GB per frame or lower. So provided that the GPU/driver is smart enough to store data appropriately across the two memory pools, the last slow 512 MB can be completely transparent to the end user. Heck, the same could be achieved even if the GTX 970 had only 3 GB of fast memory. Given that GTX 970 has more memory bandwidth per SMM than GTX 980, GTX 970 is still less bottlenecked by the memory bus in a gaming setting. This is why the slowdown for GTX 970 vs. GTX 980 is negligible, and the slowdown we can see has more to do with fewer SMMs than with the memory bus.
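A minimal sketch of that bandwidth-per-FLOP arithmetic; the GTX 980's GFLOP/s value is not stated in the post and is back-computed here from the 75.76% ratio quoted above:

# Bandwidth available per unit of compute, using the figures quoted above.
# 4612 GFLOP/s for the GTX 980 is back-computed as 3494 / 0.7576.
cards = {
    "GTX 970": (3494, 196),   # GFLOP/s, GB/s (fast 3.5 GB partition)
    "GTX 980": (4612, 224),
}
for name, (gflops, gbs) in cards.items():
    # (GB/s) / (GFLOP/s) reduces to bytes per FLOP
    print(f"{name}: {gbs / gflops:.3f} bytes of bandwidth per FLOP")
# GTX 970: 0.056, GTX 980: 0.049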
When using the GPU for computing (CUDA or OpenCL), the consequences of the slower memory might be a bit different, in the specific situations where a program accesses memory randomly across all of the GPU's memory.
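For anyone curious, a hypothetical probe along those lines, loosely in the spirit of the chunk-by-chunk benchmarks circulating at the time. This is only a sketch: it assumes a CUDA-capable card with PyCUDA installed, and device-to-device copy throughput is just a rough proxy for how a kernel would see the slower segment.

# Allocate the card chunk by chunk, then time copies into each chunk to see
# whether throughput drops for the chunks that land in the upper region.
import time
import pycuda.autoinit            # creates a CUDA context on the first GPU
import pycuda.driver as drv

CHUNK = 128 * 1024 * 1024         # 128 MiB per allocation

chunks = []
try:
    while True:                   # grab memory until allocation fails
        chunks.append(drv.mem_alloc(CHUNK))
except drv.MemoryError:
    pass

for i, dst in enumerate(chunks[1:], start=1):
    drv.Context.synchronize()
    start = time.time()
    for _ in range(10):
        drv.memcpy_dtod(dst, chunks[0], CHUNK)
    drv.Context.synchronize()
    elapsed = time.time() - start
    print(f"chunk {i:2d}: ~{10 * CHUNK / elapsed / 1e9:.0f} GB/s")

for buf in chunks:
    buf.free()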
But for gaming, the GTX 970 remains just as good a choice as it was two weeks ago. If it weren't for the specific compute situations, probably no one would ever have noticed the slowdown, and any minor issues resulting from this memory configuration can likely be fixed in software.
ie: Have you seen a Steam Hardware Survey... ever?!
Anyhow, returns usually fall under two categories - failure or customer dissatisfaction. Whichever the reason, the numbers are pertinent to those who might purchase.
As an aside, those numbers are for an older timeframe. I actually aggregated the latest numbers from Hardware France in a post a week ago (I won't try to C&P them here).
Of the machines I've been called on to diagnose and repair, most faults are from user error - assuming five minutes of gaming is sufficient testing for a maxed-out overclock. The biggest card issues were with HD 4870s and 4890s, followed by GTX 280s running too high a clock, and those single-slot 8800 GTs with the cheapo 7-blade blower fan.
Anyways, I have had two go wrong on me over the last 25 years. One was a 2900 XT, but that was my fault, as I forgot to plug in the fan cooling the RAM chips on the top side, which were overclocked as high as I could get them.
The other was an XFX 4890: after the cooler was put back on, it would make an odd sound, so it was sent back to XFX and they replaced it with a newer-series card and shipped it directly to its new user.
Good stuff. Definitely higher return rates for AMD cards.
Although it would not stop me from buying one, I think it's more like hard drives: with some brands you have good luck, with others not so much. I have had my fair share of Seagate and WD drives; Seagate has always given me issues, whereas someone else may have had bad luck with WDs.
There are no known manufacturing errors in either camp, and any responsibility should fall on the AIB partners anyway.
I have had two cards fail on me in my life and both were NVidia (a 9800 GT and a 9800 GX2), but that does not say much, as I have bought mostly NVidia from the beginning, so it's not a good comparison. Any card, unless there is a specific design flaw, can fail at a similar rate, and the only thing that really matters when comparing is abnormally high rates.