Tuesday, January 13th 2015

Next AMD Flagship Single-GPU Card to Feature HBM

AMD's next flagship single-GPU graphics card, codenamed "Fiji," could feature High-Bandwidth Memory (HBM). The technology uses stacked DRAM to increase memory bandwidth while reducing the pin count the GPU needs to achieve that bandwidth, potentially shrinking die size and TDP. Despite this, "Fiji" could have a TDP hovering around the 300W mark, because AMD will cram in all the pixel-crunching muscle it can, spending the efficiency gained from other components, such as memory. AMD is expected to launch new GPUs in 2015 despite slow progress from foundry partner TSMC in introducing newer silicon fabs, as the company's lineup is fast losing competitiveness to NVIDIA's GeForce "Maxwell" family.
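For a rough sense of why stacked DRAM raises bandwidth while cutting pin count, a back-of-the-envelope calculation helps. The GDDR5 figures below match a 290X-class card; the HBM figures (four 1024-bit stacks at ~1 GT/s) are illustrative assumptions about first-gen HBM, not confirmed "Fiji" specs:

```python
# Peak theoretical memory bandwidth: (bus width in bits / 8 bits per byte) * per-pin data rate
def bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Return peak bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_width_bits / 8 * data_rate_gtps

# GDDR5 on a 290X-class card: 512-bit bus at 5 GT/s effective
gddr5 = bandwidth_gb_s(512, 5.0)      # 320.0 GB/s

# Hypothetical first-gen HBM: 4 stacks x 1024-bit at ~1 GT/s --
# a far wider bus at a much lower clock, so fewer fast, power-hungry pins
hbm = bandwidth_gb_s(4 * 1024, 1.0)   # 512.0 GB/s

print(f"GDDR5: {gddr5:.0f} GB/s, HBM: {hbm:.0f} GB/s")
```

The wide-but-slow interface is also what would let the memory subsystem run at lower voltage and clock, which is where the die-size and TDP savings mentioned above would come from.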
Source: The TechReport

119 Comments on Next AMD Flagship Single-GPU Card to Feature HBM

#1
GhostRyder
I believe HBM is all but confirmed at this point (I guess things could always change), but as far as this GPU goes, it's going to be interesting to see what choices they make and how much power it truly draws. Personally, on the high end of cards I'm kind of mixed about efficiency: while I want more of it, it is not my primary concern. If they do go that route, I hope this top-tier card is seriously powerful and beats their previous generation by a significant margin.

The naming rumors are the curious part for me, because most leaks seem to point to calling the next "top" card the R9 380X, which would mean a 390X is in the shadows somewhere, and if so, where do these specs fall? Mostly I'm just waiting for official confirmation, but I am getting more and more intrigued by what I hear.
Posted on Reply
#2
Steevo
They did it with GDDR5, pulled a rabbit out of a hat for the most part, using new and unproven hardware.
Posted on Reply
#3
happita
With all the problems foundry partners have had shrinking nodes down to 20nm, I just had to jump to my R9 290 from my previous 5850. The high end isn't where power consumption should be a worry; it's the mid/high, mid, and mid/lower segments that should be the priority in getting that wattage number lower. If newly appointed CEO Lisa Su turns this company around, and the launch of the next wave of cards AND CPUs/APUs is even a little bit successful, AMD will be in a much better position than it's currently in.
Posted on Reply
#4
LocutusH
These AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength in real-world games after release...
Posted on Reply
#5
Ferrum Master
They are betting the bank = maximum risk.

I guess they are betting on the process technology no matter what; it will all pan out when FinFETs arrive and the power question goes away.

It may be an Intel-like Tick...
Posted on Reply
#6
Petey Plane
happitaWith all the problems foundry partners have had shrinking nodes down to 20nm, I just had to jump to my R9 290 from my previous 5850. The high end isn't where power consumption should be a worry; it's the mid/high, mid, and mid/lower segments that should be the priority in getting that wattage number lower. If newly appointed CEO Lisa Su turns this company around, and the launch of the next wave of cards AND CPUs/APUs is even a little bit successful, AMD will be in a much better position than it's currently in.
The high end also needs to worry about power consumption, unless you want a single GPU pulling 600+ watts and needing a 280mm rad with 4 fans just to keep it cool. Efficiency is just as important in the high-end, if not more so, because greater efficiency means more performance per watt.
Posted on Reply
#7
happita
Petey PlaneThe high end also needs to worry about power consumption, unless you want a single GPU pulling 600+ watts and needing a 280mm rad with 4 fans just to keep it cool. Efficiency is just as important in the high-end, if not more so, because greater efficiency means more performance per watt.
I'm not saying it shouldn't be. I'm saying that no true enthusiast will look at power consumption as the main reason whether or not to buy a high-end card. EVERYONE likes lower power consumption, but when a company's offerings pretty much all draw more watts than their competitor's, it makes people wonder how efficient the design really is. I'm not knocking AMD, I'm just being realistic. At the same time, AMD is not really competing with Nvidia's Maxwell cards ATM; they're just lowering prices on their R9 series in the meantime.
Posted on Reply
#8
the54thvoid
Super Intoxicated Moderator
Power draw itself is relative. If this card can perform 50% faster than the 290X while consuming only 10% more power, that's okay. If it can just about drive a single 4K panel, it's a win.
The win/lose scenario kicks in with NV's part. But again, the mythical, much-touted GM200 is also suggested to be nowhere near as efficient as GM204. We'll all win if the new AMD card comes out on steroids.
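As a rough check on that scenario, the perf/watt gain is just the ratio of the two increases (the percentages are the hypothetical figures from the comment above, not measurements):

```python
# Hypothetical scenario: +50% performance for +10% power vs. a 290X.
perf_ratio = 1.50    # 50% faster
power_ratio = 1.10   # 10% more board power

# Performance per watt scales as the ratio of the two factors
perf_per_watt = perf_ratio / power_ratio
print(f"Perf/watt improvement: {(perf_per_watt - 1) * 100:.0f}%")  # about 36%
```

So even a card that draws more total power can still be a meaningful efficiency improvement, which is the point being made.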
Posted on Reply
#9
GreiverBlade
LocutusHThese AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength in real-world games after release...
I can't see that... my 290 still goes strong, and my friends with a 770/780Ti and a 970 are surprised that my $190 second-hand card (under water... though) is nearly toe to toe with the 780Ti and 970 they respectively own; the 770 is out of its league. (I also had a 770 before, alongside a 7950.)
Posted on Reply
#10
erocker
*
300w? Cool, not bad.
New tech? Awesome
If price and performance are in line, my PCI-E slot is ready. :D
Posted on Reply
#11
Mathragh
So there are two guys mentioned, both from AMD and working on stuff.

One page mentions the 380X and the other 300W. Why then do people all of a sudden conclude they're both working on the 380X? Wouldn't it make a lot more sense if the second guy worked on the 390X?
Posted on Reply
#12
Steevo
LocutusHThese AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength in real-world games after release...
Whut?

AMD/ATI has delivered some pretty good products, like the 9xxx, X800, 4xxx, 5xxx, and 7xxx series, and the R9 is still great at 4K output. The change from VLIW to GCN was just a few years ago; the 7xxx series was their first success on it, and they still sell those chips as a very competitive offering today, which speaks volumes about how good the architecture was and is.

Their most fatal flaw is, and remains, promising software advantages about two years before they are actually available, if they ever become available. That, and their CPU cache latency issues.
Posted on Reply
#13
Ferrum Master
Petey Planebecause greater efficiency means more performance per watt.
Guys keep in mind that R9 290 uses first gen 28nm HPM process and 980 uses HPP

They are like apples and oranges really.
Posted on Reply
#14
GreiverBlade
Ferrum MasterGuys keep in mind that R9 290 uses first gen 28nm HPM process and 980 uses HPP

They are like apples and oranges really.
So that means AMD is holding its own on the performance side but losing on the consumption side with an older, nearly obsolete process? OK, the 980 is a really good card, but not that far from a well-OC'ed 290X, and quite a bit pricier.
happitathey're just lowering prices on their R9 series in the meantime.
Well, that's a good idea, though, since where I am you can find a 290 for the price of a 770 and a 290X at the price of a 970... barring pure fanboy statements, it's wrong to say AMD can't compete (even if, as I wrote a bit above, the 970/980 are of course good products).
Posted on Reply
#15
Ferrum Master
GreiverBladeSo that means AMD is holding its own on the performance side but losing on the consumption side with an older, nearly obsolete process? OK, the 980 is a really good card, but not that far from a well-OC'ed 290X, and quite a bit pricier.
Well, because I guess nVidia paid the lion's share to TSMC to be its lovely puppy. It has always been like that, actually... The R9 290 also has approximately 20% more transistors on board, which heats it up, but still closes the performance gap at that heat cost. But hey... ATI was a Canadian company... a heater during cold winters... two in one, actually :D

And they must sell their R9 290s no matter what; unsold silicon is a bigger loss for them than silicon sold at a bargain. I bet they calculated everything as well as they could.
Posted on Reply
#16
HumanSmoke
MathraghSo there are two guys mentioned, both from AMD and working on stuff.
One page mentions the 380X and the other 300W. Why then do people all of a sudden conclude they're both working on the 380X? Wouldn't it make a lot more sense if the second guy worked on the 390X?
My thoughts as well. 380X supposes a second-tier card, which doesn't gel with the initially high price of HBM. Another consideration: if the 380X is a 300W card, then the 390X is either way outside the PCI-SIG spec, or it is some way off in the future on a smaller process node (if the 380X is a 300W card on 20nm, then AMD has some serious problems with BoM).
Ferrum MasterGuys keep in mind that R9 290 uses first gen 28nm HPM process and 980 uses HPP
TSMC doesn't have an HPP process. GM204 uses 28HPC (high-performance mobile computing), since Maxwell is a mobile-centric architecture. The difference in efficiency is more a product of how Nvidia prioritizes perf/watt at the expense of double precision - so more a difference of opinion at the ALU level, AFAIK.
From TSMC's own literature: [chart listing TSMC's 28nm process options]
Posted on Reply
#17
Ferrum Master
HumanSmokeMy thoughts as well. 380X supposes a second-tier card, which doesn't gel with the initially high price of HBM. Another consideration: if the 380X is a 300W card, then the 390X is either way outside the PCI-SIG spec, or it is some way off in the future on a smaller process node (if the 380X is a 300W card on 20nm, then AMD has some serious problems with BoM).
Well, Boney, don't you think this number is just the theoretical engineering envelope (summing up the power connectors' maximum theoretical delivery current)? Do they actually have real mass-produced silicon from GloFo that shows real consumption numbers? I guess not... The best they have is still 28nm alpha silicon, or even earlier...

Thanks for correcting, but that graph is still kind of useless.
Posted on Reply
#18
HumanSmoke
Ferrum MasterWell, Boney, don't you think this number is just the theoretical engineering envelope (summing up the power connectors' maximum theoretical delivery current)?
If AMD is working with 300W board power - even for an ES - what does that portend for a higher-tier card in the same series? When has a top-tier card ever used less power than the second-tier card in the same model series?
Ferrum MasterDo they actually have real mass-produced silicon from GloFo that shows real consumption numbers? I guess not... The best they have is still 28nm alpha silicon, or even earlier...
AMD had at least a hot lot of silicon at least two months ago. By your reckoning, either AMD hasn't made any headway with silicon in the interim (indicating a metal-layer revision), or they are taking a leisurely approach to revising the silicon.
Ferrum MasterThanks for correcting, but still that graph is kind of useless.
The chart wasn't provided to supply information on the processes (that's what the individual product briefs are for), it was provided to show what 28nm processes TSMC provides.
Posted on Reply
#19
Ferrum Master
HumanSmokeIf AMD is working with 300W board power - even for an ES - what does that portend for a higher-tier card in the same series? When has a top-tier card ever used less power than the second-tier card in the same model series?

AMD had at least a hot lot of silicon at least two months ago. By your reckoning, either AMD hasn't made any headway with silicon in the interim (indicating a metal-layer revision), or they are taking a leisurely approach to revising the silicon.
I am just trying to understand why the quote appeared on LinkedIn. Nobody says she didn't work on such a project, but nobody said what tech node it was on; that could be the catch, and the blooper behind this news.

The second thing is that they are using GloFo now; we have no hard info on them or their silicon leakage at this stage. There may be many variables.

And as for the speculation about the 380X, it's funny that it doesn't have the R9 class in front of it, ain't it?
Posted on Reply
#20
snakefist
One thing that tends to be overlooked in most comments (and even articles).

The silicon is cheap. The development is costly.

When an architecture manages to endure a long time with relatively small improvements (as GCN has), cards made on it generate significant profit, even with a price reduction. Yesterday's flagship becomes mid-high, mid-range becomes entry, etc.

AMD offerings still generate profit, despite lowered price - and probably a good deal of it.

NVIDIA has done the same multiple times in the past - remember all the re-branding?

Not taking sides - the 970 and 980 are certainly excellent products, but they are enthusiast-level only; we have yet to see mid-range products (and eagerly await them, if I may add).

These 'mysterious' AMD cards (I also suppose there are likely two of them) are also eagerly awaited - HBM looks promising, but real-life tests should confirm to what extent.
Posted on Reply
#21
dj-electric
Here's my super-not-hardware-geeky answer to this storm of comments:

I don't care if it will feature HBM, LLP or WTF.
If I receive a card with low noise, high performance and a good price - I'm in.
Posted on Reply
#22
W1zzard
Dj-ElectriCIf I receive a card with low noise, high performance and a good price - I'm in.
GTX 970
Posted on Reply
#23
dj-electric
The GTX 970 is indeed a prime example.
All I need is for it to get shrunk and pack about twice the performance, and I'll make the leap :)
Posted on Reply
#24
Mathragh
Dj-ElectriCThe GTX 970 is indeed a prime example.
All I need is for it to get shrunk and pack about twice the performance, and I'll make the leap :)
Would also be optimal if we had more than one supplier of such a card.
Posted on Reply
#25
GreiverBlade
Ferrum MasterATI was a Canadian company... a heater during cold winters... two in one, actually :D

And they must sell their R9 290s no matter what; unsold silicon is a bigger loss for them than silicon sold at a bargain. I bet they calculated everything as well as they could.
I am Swiss and I live in the mountains... I fit the customer profile for AMD/ATI cards (even if 44° is the max for my little princess now... thanks, Kryografics). Thanks to that, I will be able to get a second 290 plus a PSU for less than jumping to Maxwell would cost; even ordering another block + backplate + an auxiliary 140x60/65mm rad would put the total a bit below a 980 :D And no way would a 970 be tempting over an already-owned 290 (I was tempted in the beginning, but on a second look it proved not to be a good idea, i.e. a side-grade).
Dj-ElectriCIf I receive a card with low noise, high performance and a good price - I'm in.
W1zzardGTX 970
Well... as I paid for my 290 second-hand (fortunately not used for too long), and taking into account the price of the loop/block/backplate, the total is still under the price of a 970 for me... so judging by the temps and the silence of operation (on a 240x60mm rad), my card fits that description :D But if you mean "straight out of the box," then yes, a 970 is fine. (Not blaming nvidia... blaming the greedy retailers/etailers around me :roll:)
Posted on Reply