Tuesday, December 2nd 2014
Choose R9 290 Series for its 512-bit Memory Bus: AMD
In one of its first interviews since the GeForce GTX 900 series launch, AMD maintained that its Radeon R9 290 series products are still competitive. Speaking with TweakTown, Corporate Vice President of Global Channel Sales Roy Taylor said that gamers should choose the Radeon R9 290X "with its 512-bit memory bus" at its current price of US $370. He stated that the current low pricing on the R9 290 series is due to "ongoing promotions within the channel," and that AMD didn't make an official price adjustment on its end. Taylor dodged questions on when AMD plans to launch its next high-end graphics products, whether they'll measure up to the GTX 900 series, and whether AMD is working with DICE on "Battlefield 5." You can find the full interview at the source link below.
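For context, the bandwidth advantage Taylor is leaning on falls out of simple arithmetic: peak memory bandwidth is bus width times effective memory clock. A minimal sketch in Python, assuming each card's reference memory clocks (5 Gbps GDDR5 on the R9 290X, 7 Gbps on the GTX 980):

  # Peak theoretical bandwidth = bus width (bits) / 8 * effective clock (GT/s)
  def bandwidth_gbs(bus_width_bits, effective_clock_gtps):
      return bus_width_bits / 8 * effective_clock_gtps

  print(bandwidth_gbs(512, 5.0))  # R9 290X: 320.0 GB/s
  print(bandwidth_gbs(256, 7.0))  # GTX 980: 224.0 GB/s

The wider bus gives Hawaii a raw bandwidth lead even at lower memory clocks, which is the one headline spec the R9 290X still wins on paper.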
Source:
TweakTown
107 Comments on Choose R9 290 Series for its 512-bit Memory Bus: AMD
All in all, I think we can all agree that the GTX 970/980 are ahead of the curve because they're new technology. AMD is behind because they haven't released anything new for quite some time. I will change my stance if they release something new, but until then, I just see an aging lineup next to a cutting-edge one offered by NVIDIA.
Vanilla flavour GTX 980 = <160 watts.
And lol @ AMD with its 512-bit bus that matters to the 0.01% of people that rock 4K or 3x 4K monitors... Oy. What a marketing machine they are. Preying on the ignorance of the consumer (ok, both have done this, to be fair).
I'm quite disappointed with MSI and AMD video cards; the drivers for the 290 series are bad, and I had hardware problems too.
I migrated back to NVIDIA (EVGA GTX 980 SC) and now I'm quite happy with my gaming experience again.
It'll be interesting to see what they think when NVIDIA legitimately have zero answer for AMD's next cards for more than 12 months.
And your comparison is asinine, as it's not even comparing the same damn game. In order for the comparison to be empirical and have ANY value, they need to be tested across the exact same thing. ;) An answer? They one-up themselves every time something new comes out, and occasionally each answers with a mid-gen bump (think 7970 GHz Edition or 780 Ti, etc.). That debate, to me, is hilarious because both sides can be right; it just depends on what the poster thinks was released 'first' and what was the 'response'...
Also, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.
This is the first time in many years that there will be a big inter-generational leap in performance, and the first time the other firm won't be able to catch up for a long time.
As usual, AMD marketing is fun to watch, but I think this one is still okay. It's better than "you guys should hold off on buying the 900 series because we are the future of gaming and our 285 is faster than the GTX 760."
AMD's current GPUs remain competitive (at their current price/perf), and only green-team warriors say otherwise.
The reason is that 4096sp generally won't be used to its full extent in core gameplay, closer to ~3800 (just as you saw with the 280X vs. GK104, or 7950/280 vs. 7970/280X scaling at half scale), and when you figure whatever that number is divided by the 2560 effective units in GM204, plus the fact it can do 1500MHz BECAUSE of having such secondary cache... that ain't good. Btw, this is why big Maxwell is essentially 3840 units ([128sp+32sfu]*24), the same way GK104 was essentially 1792 ([192sp+32sfu]*8)... because the optimal count for 32/64 ROPs is right around there. Slightly higher in GK104's case (and hence why the 280X was slightly faster per clock), but that was a fairly small chip that could expect decent yields. Slightly lower in big Maxwell's case, but I'd be willing to bet most parts sold will be under that threshold (which is still less than one shader module).
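To make that unit math concrete, here's a minimal sketch of the counts the poster is comparing (the per-SM breakdowns are the poster's own estimates, not official specs):

  # "Effective" unit counts per the poster's arithmetic.
  big_maxwell = (128 + 32) * 24   # GM200: 24 SMs of 128sp + 32sfu = 3840
  gk104       = (192 + 32) * 8    # GK104: 8 SMXs of 192sp + 32sfu = 1792
  gm204       = (128 + 32) * 16   # GM204: 16 SMs = 2560
  fiji_usable = 3800              # poster's estimate of real 4096sp utilisation
  print(big_maxwell, gk104, gm204)      # 3840 1792 2560
  print(round(fiji_usable / gm204, 2))  # ~1.48x GM204's effective count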
What's unfortunate is that while excessive compute and high bandwidth are good for certain things (like TressFX etc.), it's still generally a better play to have fewer units than what the ROPs can handle in most core gaming situations, as it's more power/die/bandwidth efficient (again, see GK104 vs. the 280X), and if need be, scale the core clock so all units (texture, ROPs, etc.) perform at an optimal ratio. If we essentially get a 2x 280X just because AMD has the bandwidth to do so (and clockspeeds won't allow a more efficient core config with higher clocks to saturate it, similar to their more recent bins that generally do ~1100MHz), they are kind of missing the big picture in an effort to pull out all the stops and create something slightly faster through brute force... It'll be Tahiti vs. GK104 all over again, on a literally slightly larger scale.
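A rough way to see that trade-off, with hypothetical wide-and-slow vs. narrow-and-fast configs (the shader counts and clocks here are illustrative assumptions, not measured specs):

  # Raising clocks scales ALL units (shaders, ROPs, texture units),
  # while adding shaders scales only ALU throughput.
  def throughput(shaders, rops, clock_mhz):
      gflops = shaders * 2 * clock_mhz / 1000  # 2 FLOPs/shader/clock (FMA)
      gpix   = rops * clock_mhz / 1000         # pixel fillrate, GPix/s
      return gflops, gpix

  print(throughput(2816, 64, 1000))  # wide, slow:   (5632.0, 64.0)
  print(throughput(2048, 64, 1400))  # narrow, fast: (5734.4, 89.6)

Similar ALU throughput either way, but the higher-clocked config gets ~40% more fillrate along with it, which is the efficiency argument in a nutshell.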
All they are doing is moving the goalposts with CUs and bandwidth, more or less as they have since R600, when a fundamental efficiency change is sorely needed. I'm talking about things like when they went to VLIW4 instead of VLIW5 (when the average call was 3.44sp), the move to 4x16 with a better scheduler, or to a lesser extent what they did with compression in the 285. Even if the bandwidth problem is solved for another generation (and even that's arguable, when more than 4GB is quickly going to become normal and HBM won't see that for a year or more, not to mention GM200 will literally be out of their league if on the same process), the fundamental issue is the lack of architectural evolution to cope with outside factors (bandwidth, process node capabilities) not lining up with what they currently have. Some of that is probably tied to and hamstrung by their ecosystem (HSA, APUs, etc.), but I still think it primarily comes down to a lack of resources in their engineering department over the last few years.
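To put a number on that VLIW change (a minimal sketch using the poster's ~3.44 average-issue figure):

  # With ~3.44 slots filled per issue on average, narrower lanes
  # waste less hardware: utilisation = avg_used / lane_width.
  avg_used = 3.44
  for width in (5, 4):
      print(f"VLIW{width}: {avg_used / width:.0%} utilised")
  # VLIW5: 69% utilised
  # VLIW4: 86% utilised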
I truly think Roy knows all this (that they currently are in a bad place and the immediate future doesn't appear fabulous either), but his job is his job, and I respect that.
You would need a helluva overclocked CPU to reach those speeds, as that is not remotely a GPU-limited res.
Run 3DMark 01 though. ;)
Modern games, of course, wouldn't go that fast.
@Recus Both of those pictures are pretty cool. :)
Anyone who says NVIDIA has no 20nm GPU and that it'll be a disaster if AMD goes with it, with zero proof to back the statement: you, sir, are a complete AMD tool as well.
NVIDIA have no 20nm. It's a fact. We don't know if AMD do. Personally I think it's unlikely, but it may transpire.
As I said, NVIDIA was planning on the shrink for Maxwell, but TSMC not being ready delayed it, essentially forcing them to stick with 28nm for this release. Not resting on their laurels, I think they did a pretty damn good job of improving IPC, memory efficiency, and power consumption on that same process to bring out the well-received 900 series. With that in mind, I think NVIDIA will be in a position to catch up sooner rather than later IF AMD brings a game changer to the table.
Remember, NVIDIA has also brought a die shrink to the same platform before (the GTX 260 went from 65nm to 55nm, IIRC). Who's to say they don't have the 20nm plans still on the shelf, ready to go?
www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-may-skip-20nm-process-technology-jump-straight-to-16nm/
The 512-bit bus is nice and all, but obviously memory bandwidth hasn't really been an issue for a long time now. The superior performance of the 970 and 980 in most cases makes them the better buy. But your comparison fails; you're comparing things of very different natures.
If AMD doesn't launch a new GPU by February 2015 that really brings an improvement in these two requirements, then I'll throw my two R9 270X cards in the trash and buy an ASUS Strix GTX 980. I'm Brazilian, and here NVIDIA cards are more expensive than AMD's; the 980 Strix costs about US $1,200. But I have no choice if I want to improve my system without having to change the power supply; I don't want to have to buy a 1200W power supply for the new R9 300 series cards.