Tuesday, December 2nd 2014

Choose R9 290 Series for its 512-bit Memory Bus: AMD

In one of its first interviews since the GeForce GTX 900 series launch, AMD maintained that its Radeon R9 290 series products are still competitive. Speaking with TweakTown, Corporate Vice President of Global Channel Sales Roy Taylor said that gamers should choose the Radeon R9 290X "with its 512-bit memory bus" at its current price of US $370. He stated that the current low pricing on the R9 290 series is due to "ongoing promotions within the channel," and that AMD didn't make an official price adjustment on its end. Taylor dodged questions on when AMD plans to launch its next high-end graphics products, on whether they'll measure up to the GTX 900 series, and on whether AMD is working with DICE on "Battlefield 5." You can find the full interview at the source link below.
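For context on the headline claim, the bus-width argument reduces to simple arithmetic. A quick sketch using the cards' public spec-sheet numbers (these figures come from the specifications, not from the interview itself):

```python
# Theoretical peak memory bandwidth = bus width (bits) / 8 * per-pin data rate.
# Spec-sheet numbers for both cards; not from the interview.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

r9_290x = peak_bandwidth_gb_s(512, 5.0)  # 512-bit bus, 5 Gbps GDDR5
gtx_980 = peak_bandwidth_gb_s(256, 7.0)  # 256-bit bus, 7 Gbps GDDR5
print(f"R9 290X: {r9_290x:.0f} GB/s, GTX 980: {gtx_980:.0f} GB/s")
# R9 290X: 320 GB/s, GTX 980: 224 GB/s
```

Raw bandwidth alone doesn't decide performance, of course, but it is the one spec where the 290X clearly wins on paper, which is presumably why Taylor leads with it.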
Source: TweakTown

107 Comments on Choose R9 290 Series for its 512-bit Memory Bus: AMD

#51
Aquinus
Resident Wat-man
SasquiI'm no fan of NVidia, but looking at some of the TPU reviews by W1zzard, the 970 and 980 really shine at idle: something like 5 W for a 970 vs. 40 W for a 290. They shine in power consumption overall, and NVIDIA did the pricing right.

Circling back to the Titan vs. 290x, who was winning then? Duh...
Another reason why I'm considering a 970. AMD's multi-monitor idle consumption is garbage in comparison until you get down to the R7 cards. Considering I'm writing code most of the time on my machine, saving 50 watts or so over what I have now would be tangible over time, since my tower is on for about 14-16 hours a day and the GPUs sit unloaded roughly 90% of the time.
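Back-of-the-envelope, that saving works out roughly as follows (the 15 h/day midpoint and the $0.12/kWh electricity rate are assumptions, not figures from this thread):

```python
# Yearly cost of ~50 W of extra idle draw; hours and rate are assumptions.
watts_saved = 50
hours_per_day = 15        # midpoint of the 14-16 h/day estimate above
usd_per_kwh = 0.12        # assumed electricity rate

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * usd_per_kwh:.2f}/year")
# 274 kWh/year, ~$32.85/year
```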

All in all, I think we can all agree that the GTX 970/980 are ahead of the curve because they're new technology. AMD is behind because they haven't released anything new for quite some time. I will change my stance if they release something new, but until then I just see an aging lineup next to the cutting-edge one offered by NVIDIA.
#52
the54thvoid
Super Intoxicated Moderator
f2bnp
Very selective.

[power consumption chart]

Vanilla flavour GTX 980 = <160 watts.
#53
EarthDog
KarymidoNWhen AMD learns to optimize energy consumption and makes a decent stock cooling system, then Nvidia will have competition. Today AMD has stratospheric energy consumption, poorly optimized drivers, and ridiculously high temperatures (I have a CrossFireX setup).
Give them time to get their next gen out and see what happens. This is a new arch from NVIDIA, while AMD's has been out for quite some time now. ;)

And lol @ AMD with its 512-bit bus that matters to the 0.01% of people who rock 4K or 3x 4K monitors... Oy. What a marketing machine they are, preying on the ignorance of the consumer (OK, both have done this, to be fair).
#54
VictorLG
SteveS45Personally, I don't think AMD is the only camp with a driver issue. Both camps, in my opinion, are equally meh.
I have two systems: one with an R9 280X and an HD 7970 in CrossFire, and one with a new MSI Gold (Bronze) edition GTX 970.

The GTX 970 has been having a lot of problems over DisplayPort, with screen tearing after coming back from sleep. Google "GTX 900 DisplayPort tearing/black screen"; a lot of people have the same problem. And sometimes switching from Nvidia Surround to normal triple monitor, or vice versa, causes a BSOD on Windows 7.

On the HD 7970, I wouldn't say AMD has better or flawless drivers; we all know they don't. But I don't see the Nvidia drivers being superior in any way.

So I think that driver- and feature-wise, both camps are equally meh.
Your problems with the 970 are MSI's fault, not Nvidia's. They even stated that there will be a new BIOS release to fix some of the problems, especially the fan-rotation and output ones.

I'm quite disappointed with MSI and with AMD video cards; drivers for the 290 series are bad, and I had hardware problems too.

I migrated back to Nvidia (EVGA GTX 980 SC) and now I'm quite happy with my gaming experience again.
#55
f2bnp
the54thvoidVery selective.

[power consumption chart]

Vanilla flavour GTX 980 = <160 watts.
You call me very selective, yet you then showcase a chart with averages from AnandTech.

#56
midnightoil
Lots of people are screaming that AMD is done for and irrecoverably far behind based on the relative performance of the 290/290X vs. 970/980 (neither of which is true).

It'll be interesting to see what they think when NVIDIA legitimately has zero answer to AMD's next cards for more than 12 months.
#57
EarthDog
f2bnpYou call me very selective, yet you then showcase a chart with averages from AnandTech.
That's from here... not AnandTech... ;)

And your comparison is asinine, as it's not even comparing the same damn game. For the comparison to be empirical and have ANY value, they need to be tested on the same exact thing. ;)
midnightoilLots of people are screaming that AMD is done for and irrecoverably far behind based on the relative performance of the 290/290X vs. 970/980 (neither of which is true).

It'll be interesting to see what they think when NVIDIA legitimately has zero answer to AMD's next cards for more than 12 months.
An answer? They one-up each other every time something new comes out, occasionally answering with a mid-gen bump (think 7970 GHz Edition or 780 Ti, etc.). That debate, to me, is hilarious because both sides can be right; it just depends on what the poster thinks was released 'first' and what was the 'response'...
#58
midnightoil
EarthDogThat's from here... not AnandTech... ;)

And your comparison is asinine, as it's not even comparing the same damn game. For the comparison to be empirical and have ANY value, they need to be tested on the same exact thing. ;)

An answer? They one-up each other every time something new comes out, occasionally answering with a mid-gen bump (think 7970 GHz Edition or 780 Ti, etc.). That debate, to me, is hilarious because both sides can be right; it just depends on what the poster thinks was released 'first' and what was the 'response'...
That can't happen this time. The big marketing spiel for new cards is high resolutions, namely 4K and 2560x1440. At high res in particular, the HBM cards will blow GDDR cards out of the water. NVIDIA backed the wrong horse and had to ditch their stacked-memory plans. They've been redesigning their future architectures to use the AMD-designed HBM... for some time in 2016.

Also, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.

This is the first time in many years that there will be a big inter-generational leap in performance, and the first time the other firm won't be able to catch up for a long time.
#59
EarthDog
2560x1440/1600 doesn't really need HBM. 4K, OK. But hell, 256-bit cards plow through 2560x1440 with plenty of AA (assuming the card has the VRAM capacity to support it). Not to mention the efficiency improvements of Maxwell's memory architecture, which offer a fair amount more effective bandwidth thanks to its updated memory compression.
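As a rough sketch of the VRAM side of that caveat, here is the render-target math alone (color plus depth with MSAA; textures, geometry, and driver overhead are ignored, so real usage runs far higher):

```python
# Size of the color + depth render targets with MSAA, in MiB.
# A lower bound only: textures, geometry, and overhead are ignored.
def render_targets_mib(width: int, height: int, msaa: int,
                       bytes_color: int = 4, bytes_depth: int = 4) -> float:
    samples = width * height * msaa
    return samples * (bytes_color + bytes_depth) / 2**20

print(f"2560x1440 @ 4x MSAA: {render_targets_mib(2560, 1440, 4):.1f} MiB")  # 112.5 MiB
print(f"3840x2160 @ 4x MSAA: {render_targets_mib(3840, 2160, 4):.1f} MiB")  # 253.1 MiB
```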
midnightoilAlso, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.
Wouldn't TSMC's schedule and NVIDIA's launch timing have something to do with it? I recall TSMC having delays moving to their 20nm node, essentially forcing NVIDIA to design Maxwell on 28nm instead of 20nm. The 980 and 970, much like the 670/680, are not the 'full' core implementations, so I would imagine full Maxwell chips are still to come. While those may be more of an incremental improvement, that still leaves AMD with what I imagine to be a 15-20% performance gap to close. While that isn't impossible, they need to bring their big-boy pants to the table with their new generation. That said, here's hoping we see that. :)
midnightoilThis is the first time in many years that there will be a big inter-generational leap in performance, and the first time the other firm won't be able to catch up for a long time.
Only time will tell, but I haven't seen much to make me believe that will happen... then again, I hope so for the sake of competition and innovation. :)
#60
renz496
midnightoilLots of people are screaming that AMD is done for and irrecoverably far behind based on the relative performance of the 290/290X vs. 970/980 (neither of which is true).

It'll be interesting to see what they think when NVIDIA legitimately has zero answer to AMD's next cards for more than 12 months.
Wow. Do you have facts to back up that statement, or just your delusional assumption?

As usual, AMD marketing is fun to watch, but I think this one is still okay. It's better than "you guys should hold off buying the 900 series because we are the future of gaming and our 285 is faster than the GTX 760."
#61
qubit
Overclocked quantum bit
Forget all these hires benchmarks for a moment, I wanna see one at 1024x768 just for giggles. :p I want to see framerates at 500-1000fps in some old game to demonstrate just how far graphics performance has come.
#62
Recus
So 24 wheels is better than 4, right? Right?

[two comparison images]
#63
SIGSEGV
midnightoilAlso, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.
I doubt AMD will use a 20nm node on their next-gen GPU. In my opinion, AMD will jump to Samsung's 14nm FinFET in Q2 or Q3 2015 instead of TSMC's 16nm FinFET.

AMD's current GPUs remain competitive (at their current price/performance), and only green-team warriors say otherwise.
#64
alwayssts
EukashiWhen HBM technology is loaded into Radeon, the memory bandwidth problem is cleared up. There is no need to increase the secondary cache as Maxwell does.
Except that if the R9 390X is indeed 4096 SPs, 512 GB/s (which would be 4x 1 GB HBM stacks operating at 128 GB/s each) would really only be good up to around 1120 MHz (if it scales like Hawaii), or around 1200 MHz if using the compression tech we saw in the R9 285. With or without factoring in scaling (96-97%), that doesn't touch big Maxwell (at probably a fairly similar die size, if not slightly smaller for Fiji on the same process)... and you can bet your butt we'll see a '770'-like GM204 (or a really weak-sauce butchered big-Maxwell SKU) if its stock clock is 1 GHz. While this method for bandwidth would work for a 28nm or even 20nm part using their current arch, compared to what is possible on 16nm it's not nearly enough if they want to actually compete.
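For reference, the 512 GB/s figure follows from the published first-generation HBM numbers (a 1024-bit interface per stack at roughly 1 Gbps per pin; nothing here is confirmed for the actual card):

```python
# First-gen HBM: each stack has a 1024-bit interface at ~1 Gbps per pin.
per_stack_gb_s = 1024 / 8 * 1.0   # 128 GB/s per stack
stacks = 4                        # the rumored 4x 1 GB configuration
print(f"{stacks} x {per_stack_gb_s:.0f} GB/s = {stacks * per_stack_gb_s:.0f} GB/s")
# 4 x 128 GB/s = 512 GB/s
```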

The reason is that 4096 SPs generally won't be used to their full extent in core gameplay, closer to ~3800 (just as you saw with the 280X vs. GK104, or with 7950/280 vs. 7970/280X scaling at half scale), and when you figure whatever that number is divided by the 2560 effective units in GM204, plus the fact that it can do 1500 MHz BECAUSE of having such a secondary cache... that ain't good. BTW, this is why big Maxwell is essentially 3840 units ([128 SPs + 32 SFUs] x 24), the same way GK104 was essentially 1792 ([192 + 32] x 8)... because the optimal count for 32/64 ROPs is right around there. Slightly higher in GK104's case (hence why the 280X was slightly faster per clock), but that was a fairly small chip that could expect decent yields. Slightly lower in big Maxwell's case, but I'd be willing to bet most parts sold will be under that threshold (which is still less than one shader module).
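Written out, the unit arithmetic above checks out:

```python
# Effective unit counts from the post: (shader processors + SFUs) per block,
# multiplied by the number of blocks.
big_maxwell = (128 + 32) * 24   # 24 Maxwell SMMs -> 3840
gk104 = (192 + 32) * 8          # 8 Kepler SMXs  -> 1792
print(big_maxwell, gk104)       # 3840 1792
```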

What's unfortunate is that while excessive compute and high bandwidth are good for certain things (like TressFX, etc.), it's still a better play to generally have fewer units than what the ROPs can handle in most core gaming situations, as it's more power/die/bandwidth efficient (again, see GK104 vs. 280X), and if need be, scale the core clock so all units (texture, ROPs, etc.) perform at an optimal ratio. If we essentially get a 2x 280X just because AMD has the bandwidth to do so (and clockspeeds won't allow a more efficient core config with a higher clock to saturate it, similar to their more recent bins that generally do ~1100 MHz), they are kind of missing the big picture in an effort to pull out all the stops and create something slightly faster through brute force... It'll be Tahiti vs. GK104 all over again, on a literally slightly larger scale.

All they are doing is moving the goalposts with CUs and bandwidth, more or less as they have since R600, when a fundamental efficiency change is sorely needed. I'm talking about something like when they went to VLIW4 instead of VLIW5 (when the average call was 3.44 SPs), the move to 4x16 with a better scheduler, or to a lesser extent what they did with compression in the 285. Even if the bandwidth problem is solved for another generation (and even that's arguable, when larger-than-4GB framebuffers are quickly going to become normal and HBM won't see that for a year or more, not to mention GM200 will literally be out of their league if on the same process), the fundamental issue is the lack of architectural evolution to cope with outside factors (bandwidth, process-node capabilities) not lining up with what they currently have. Some of that is probably tied to and hamstrung by their ecosystem (HSA, APUs, etc.), but I still think it primarily comes down to a lack of resources in their engineering department over the last few years.

I truly think Roy knows all this (that they are currently in a bad place and that the immediate future doesn't appear fabulous either), but his job is his job, and I respect that.
#65
EarthDog
qubitForget all these hires benchmarks for a moment, I wanna see one at 1024x768 just for giggles. :p I want to see framerates at 500-1000fps in some old game to demonstrate just how far graphics performance has come.
hires?

You would need a helluva overclocked CPU to reach those speeds, as that is not remotely a GPU-limited resolution.

Run 3dmk 01 though. ;)
#66
qubit
Overclocked quantum bit
Oh, just try the original Unreal Tournament from 1999 on modern high-end hardware. It really does reach framerates like that, and it's so fast that the game's speed actually varies erratically and looks quite ridiculous. :laugh:

Modern games of course wouldn't go that fast.

@Recus Both of those pictures are pretty cool. :)
#67
arbiter
Anyone who says NVIDIA has no answer to AMD's GPUs for 12 months, with zero proof to back the statement: you, sir, are a complete AMD tool.

Anyone who says NVIDIA has no 20nm GPU and that it will be a disaster if AMD goes with it, likewise with zero proof to back the statement: you, sir, are also a complete AMD tool.
#68
EarthDog
qubitOh, just try the original Unreal Tournament from 1999 on modern high-end hardware. It really does reach framerates like that, and it's so fast that the game's speed actually varies erratically and looks quite ridiculous. :laugh:

Modern games of course wouldn't go that fast.

@Recus Both of those pictures are pretty cool. :)
Ahh, it takes a 15-year-old game that the iGPU could run that fast to make that point. Gotcha.
#69
midnightoil
arbiterAnyone who says NVIDIA has no answer to AMD's GPUs for 12 months, with zero proof to back the statement: you, sir, are a complete AMD tool.

Anyone who says NVIDIA has no 20nm GPU and that it will be a disaster if AMD goes with it, likewise with zero proof to back the statement: you, sir, are also a complete AMD tool.
They don't. It's a fact. Everyone knew they were a bit behind AMD with stacked memory anyway, but in 2013, when they cancelled HMC entirely and decided to shift to the AMD-designed, Hynix-backed HBM, we knew for sure that unless AMD delayed their HBM products enormously, NVIDIA wouldn't be able to compete for a while. HMC Volta was canned and replaced with HBM Pascal, which is tentatively scheduled for H2 '16.

NVIDIA have no 20nm. It's a fact. We don't know if AMD do. Personally I think it's unlikely, but it may transpire.
#70
EarthDog
midnightoilNVIDIA have no 20nm. It's a fact.
You say it's a fact, but how do you know it's a fact? You haven't supported that assertion with any links.

As I said, NVIDIA was planning the shrink for Maxwell, but TSMC not being ready delayed it, essentially forcing them to stick with 28nm for this release. Not resting on their laurels, I think they did a pretty damn good job of improving IPC, memory efficiency, and power consumption on that same process to bring out the well-received 900 series. With that in mind, I think NVIDIA will be in a position to catch up sooner rather than later IF AMD brings a game changer to the table.

Remember, NVIDIA has also brought a die shrink to the same product line before (the GTX 260 went from 65nm to 55nm, IIRC). Who's to say they don't have 20nm plans still sitting on the shelf, ready to go?
#71
Slizzo
f2bnp
How about comparisons for GPUs running at stock speeds? That's what both AMD and NVIDIA spec; it's hardly their fault if the board partners run the GPUs out of spec.
#72
64K
If AMD goes to the 20nm process with their GPUs, then I don't see how Nvidia can compete by staying on the 28nm process, but maybe Maxwell is efficient enough that they can. I have heard the rumors too that Nvidia is going to wait until next year to go to the 16nm process, but how the hell will TSMC be ready for that when they couldn't get the 20nm process down? I don't know. There are some crazy rumors flying around. Here's one of them:

www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-may-skip-20nm-process-technology-jump-straight-to-16nm/
#73
EarthDog
64Kbut maybe Maxwell is efficient enough that they can.
As I posted above, look what they did with it already on the 980... several percent faster than the 780 Ti, with a narrower memory bus and fewer CUDA cores, while using almost 33% less power (~100 W less) than a 780 Ti. Perhaps they've already wrung the rag dry, though...?
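Using the published board TDPs (250 W for the 780 Ti, 165 W for the 980; spec figures, not measured draw from a review):

```python
# Spec TDPs, not measured power draw.
tdp_780_ti, tdp_980 = 250, 165
delta = tdp_780_ti - tdp_980
print(f"{delta} W less, a {delta / tdp_780_ti:.0%} reduction")
# 85 W less, a 34% reduction
```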
#74
yogurt_21
RecusSo 24 wheels is better than 4, right? Right?

[two comparison images]
If you're transporting something larger than a thumb drive, then yes, yes it is.

The 512-bit bus is nice and all, but memory bandwidth obviously hasn't really been an issue for a long time now. The superior performance of the 970 and 980 in most cases makes them the better buy. But your comparison fails: you're comparing things of very different natures.
#75
KarymidoN
EarthDogGive them time to get their next gen out and see what happens. This is a new arch from NVIDIA, while AMD's has been out for quite some time now. ;)

And lol @ AMD with its 512-bit bus that matters to the 0.01% of people who rock 4K or 3x 4K monitors... Oy. What a marketing machine they are, preying on the ignorance of the consumer (OK, both have done this, to be fair).
It was the same with the R9 2XX series: I expected them to become more energy efficient, and they were not; I hoped they would become less hot and noisy, and they were not... AMD focuses on a competitive price and reasonably good performance; the problem is that the reference coolers are horrible and the custom models are not attractive.
If AMD doesn't launch a new GPU by February 2015 that really improves on these two points, then I'll throw my two R9 270Xs in the trash and buy an ASUS Strix GTX 980. I am Brazilian, and here the NVIDIA cards are more expensive than AMD's; the 980 Strix costs about US $1,200, but I have no choice if I want to improve my system without having to change the power supply. I don't want to have to buy a 1200 W power supply for new R9 300 series cards.