Tuesday, August 25th 2015

AMD Radeon R9 Nano Core Configuration Detailed

AMD's upcoming mini-ITX friendly graphics card, the Radeon R9 Nano, which boasts a typical board power of just 175 W, is not the heavily stripped-down R9 Fury X it was expected to be. The card will feature the full complement of GCN compute units physically present on the "Fiji" silicon, and in terms of specifications is better endowed than even the R9 Fury. A specifications sheet of the R9 Nano has leaked to the web, revealing that the card will feature all 4,096 stream processors physically present on the chip, along with 256 TMUs and 64 ROPs. It will feature 4 GB of memory across the chip's 4096-bit HBM interface.

In terms of clock speeds, the R9 Nano isn't too far behind the R9 Fury X on paper - its core is clocked up to 1000 MHz, with its memory ticking at 500 MHz (512 GB/s). So how does it get down to a 175 W typical board power from the 275 W of the R9 Fury X? It's theorized that AMD could be using an aggressive power/temperature-based clock-speed throttle, with the resulting performance claimed to be 5-10% higher than that of the Radeon R9 290X while never breaching the power target. Korean tech blog DGLee posted pictures of an R9 Nano taken apart. Its PCB is smaller than even that of the R9 Fury X, and makes do with a slimmer 4+2 phase VRM compared to the 6+2 phase VRM found on the R9 Fury X.
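The 512 GB/s figure follows directly from the leaked memory spec, and the power-based throttle can be pictured as a simple boost loop that backs the clock off whenever board power crosses the target. The Python sketch below illustrates both; the next_clock helper and its numbers are purely illustrative assumptions, not AMD's actual PowerTune logic.

```python
# Back-of-the-envelope check of the leaked memory spec:
bus_width_bits = 4096      # Fiji's HBM interface width
mem_clock_hz   = 500e6     # 500 MHz memory clock
ddr_factor     = 2         # HBM transfers data on both clock edges

print(bus_width_bits * mem_clock_hz * ddr_factor / 8 / 1e9, "GB/s")  # -> 512.0 GB/s

# Toy model of a power-capped boost loop (hypothetical, not AMD's actual PowerTune implementation):
def next_clock(clock_mhz, board_power_w, power_target_w=175.0,
               ceiling_mhz=1000, floor_mhz=300, step_mhz=10):
    """Back the clock off when board power exceeds the target, otherwise boost toward the ceiling."""
    if board_power_w > power_target_w:
        return max(clock_mhz - step_mhz, floor_mhz)
    return min(clock_mhz + step_mhz, ceiling_mhz)
```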
Sources: VideoCardz, IYD.kr

101 Comments on AMD Radeon R9 Nano Core Configuration Detailed

#51
FordGT90Concept
"I go fast!1!11!1!"
It could be that Fury X gets the low ASIC quality chips and Nano gets the high ASIC quality chips.

We don't know the price yet. At this point, I wouldn't be surprised if the price is close to Fury X's. The Nano is effectively a Fury X for small form factor computers (Steam Machines come to mind).
Posted on Reply
#52
Assimilator
If any of you crying AMD fanboys can point me to a place where nVIDIA made claims about the GTX 970 that weren't borne out by independent reviewers... please, go ahead. I'll be waiting, probably until hell freezes over.

And please, stop trotting out the 3.5GB BS. The GTX 970 can address all 4 gigabytes of the graphics memory it has; that makes it a 4GB graphics card. If you claim anything else, you're either ignorant, stupid, or a combination of both.

Keep crying those fanboy tears though. They taste delicious.
Posted on Reply
#53
Anusha
I think it is about time I got rid of my Korean panel.

Pricing though...$500?
Posted on Reply
#54
a_ump
Where do people get these 30% overclocks? I mean, I've had 2x 8800 GTs, an 8800 GTX, an HD 4870, a GTS 250, an HD 5770, and now a GTX 560. The best overclock I've ever sustained stable 24/7 was on my HD 5770, where I got it from 800 MHz to 910 MHz core. So 13% is my best, and most of my OCs are around 7%-10% at best. Must have rotten luck.
Posted on Reply
#55
the54thvoid
Super Intoxicated Moderator
AssimilatorIf any of you crying AMD fanboys can point me to a place where nVIDIA made claims about the GTX 970 that weren't borne out by independent reviewers... please, go ahead. I'll be waiting, probably until hell freezes over.

And please, stop trotting out the 3.5GB BS. The GTX 970 can address all 4 gigabytes of the graphics memory it has; that makes it a 4GB graphics card. If you claim anything else, you're either ignorant, stupid, or a combination of both.

Keep crying those fanboy tears though. They taste delicious.
As a 980ti Kingpin owner (quite possibly the most Nvidia centric fanboy card you can buy), your comment about the 4gb is absolute crap. The 970 does work well but it does not address the full 4gb in a normal fashion. It only utilises 3.3gb-3.5gb and the remainder throttles performance when called upon.
It still performs well and I'll defend that, but Nvidia CLEARLY misled consumers about the 4gb, when the last 0.5gb can hinder performance, though as stated, very few scenarios get to that point.
Posted on Reply
#56
alucasa
In SFF builds, and I've built a lot of them, a GeForce x60 GTX is the limit I put on them. An SFF case's lack of airflow and small volume means I shouldn't put in anything more powerful than that.
Posted on Reply
#57
Mr McC
AssimilatorIf any of you crying AMD fanboys can point me to a place where nVIDIA made claims about the GTX 970 that weren't borne out by independent reviewers... please, go ahead. I'll be waiting, probably until hell freezes over.

And please, stop trotting out the 3.5GB BS. The GTX 970 can address all 4 gigabytes of the graphics memory it has; that makes it a 4GB graphics card. If you claim anything else, you're either ignorant, stupid, or a combination of both.

Keep crying those fanboy tears though. They taste delicious.
Assimilator, try to focus on the product at hand rather than derailing the thread by acting as an apologist for nVidia products that clearly entailed the same false advertising you claim you are unwilling to forgive in the case of AMD. The accusations of fanboyism you level against others are discredited by your refusal to apply the same yardstick in each case. Would you measure your own schlong in centimetres and everyone else's in inches, and compare without converting units, to support your belief that you are the new Ron Jeremy?
Posted on Reply
#58
GhostRyder
64KI had a GTX 970 before and it was a very nice card. I was an early adopter and got the card before the truth came out. I did think it strange that I got it for $360 when the 980 was $550 with only a little better performance. The previous generation was $400 for the 670 and $500 for the 680. I expected similar pricing.

But yeah, Nvidia definitely told some lies and AMD lies sometimes and Publishers lie sometimes. It's a bit of a shady hobby we're in.
It's just an unfortunate game we all play; it's part of being a computer hobbyist, as things are not always what they seem. We all get through it somehow though :p
AssimilatorIf any of you crying AMD fanboys can point me to a place where nVIDIA made claims about the GTX 970 that weren't borne out by independent reviewers... please, go ahead. I'll be waiting, probably until hell freezes over.

And please, stop trotting out the 3.5GB BS. The GTX 970 can address all 4 gigabytes of the graphics memory it has; that makes it a 4GB graphics card. If you claim anything else, you're either ignorant, stupid, or a combination of both.

Keep crying those fanboy tears though. They taste delicious.
One of the only clear fanboys here is you since you see fit to trash talk most AMD product news. If you want to talk about people being fanboys, look in a mirror.
the54thvoidAs a 980ti Kingpin owner (quite possibly the most Nvidia centric fanboy card you can buy), your comment about the 4gb is absolute crap. The 970 does work well but it does not address the full 4gb in a normal fashion. It only utilises 3.3gb-3.5gb and the remainder throttles performance when called upon.
It still performs well and I'll defend that but Nvidia CLEARLY mislead consumers about 4gb, when the last 0.5gb can hinder performance, though as stated, very few scenarios get to that point.
^This sums it up quite nicely!
a_umpWhere do people get these 30% overclocks? i mean i've had 2x 8800GT's, a 8800GTX, a HD 4870, GTS 250, HD 5770, and now a GTX 560. Best overclock i've ever sustained stable 24/7 was on my HD 5770 where i got it from 800mhz to 910mhz core. So 13% is my best and most my oc's are around 7%-10% at best. Must have rotten luck.
Depends on the cards; recent video cards can overclock a lot better (especially on NVIDIA's side) than in the past, where it was a lot more luck of the draw. I had dual GTX 980's which could only achieve about 980 MHz core clock max, while my friends' stopped at 925 MHz. Then I had a set of HD 6990's, both of which attained a 1000 MHz overclock on the cores, whereas another friend had one that could hit a little over 1000 MHz (I think 1025, though I cannot remember) and another that couldn't even break 940. Overclocking in any form only gets you gains up to a point anyway. If you really want overclocking cards, your best bet is to buy cards that are designed for it, as those are the ones that go through a higher binning cycle (especially if they start out with higher clocks), like the MSI Lightning series, Asus Matrix, or EVGA Classified.

I doubt the R9 Nano will have much overclocking headroom at all. I just want to see this cooler and the card in action so we can understand how it works, because shipping the full core seems a bit weird compared to just binning out some of the less-than-stellar chips of the bunch.
Posted on Reply
#59
lilhasselhoffer
AssimilatorIf any of you crying AMD fanboys can point me to a place where nVIDIA made claims about the GTX 970 that weren't borne out by independent reviewers... please, go ahead. I'll be waiting, probably until hell freezes over.

And please, stop trotting out the 3.5GB BS. The GTX 970 can address all 4 gigabytes of the graphics memory it has; that makes it a 4GB graphics card. If you claim anything else, you're either ignorant, stupid, or a combination of both.

Keep crying those fanboy tears though. They taste delicious.
You sir, are either a willful idiot or a fanboy of the highest caliber.

Nvidia itself basically said that they sold a 3 GB card, with a memory structure whose last bits were only designed such that "...GTX 970 is a 4GB card. However, the upper 512MB of the additional 1GB is segmented and has reduced bandwidth. This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment..." That article that quote comes from can be found here: www.gamespot.com/articles/nvidia-boss-responds-to-gtx-970-false-advertising-/1100-6425510/
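If anyone wants to see the segmentation instead of arguing about it, here's a minimal sketch (assuming PyCUDA is installed) of the kind of VRAM-stepping micro-benchmark people ran on the 970: keep allocating chunks until the card is full and time a device-to-device copy inside each one; copies landing in the slow upper 512 MB report noticeably lower throughput. The 128 MiB chunk size is an assumption, and exactly which allocations end up in the slow segment depends on the driver.

```python
import time
import pycuda.autoinit              # creates a CUDA context on the default GPU
import pycuda.driver as cuda

CHUNK = 128 * 1024 * 1024           # step through VRAM in 128 MiB chunks (assumed size)
chunks, allocated = [], 0

while True:
    try:
        buf = cuda.mem_alloc(CHUNK)
    except cuda.MemoryError:
        break                       # VRAM exhausted, stop
    chunks.append(buf)              # keep every allocation alive so the card fills up
    cuda.Context.synchronize()
    t0 = time.perf_counter()
    # copy the chunk's first half onto its second half, entirely in device memory
    cuda.memcpy_dtod(int(buf) + CHUNK // 2, int(buf), CHUNK // 2)
    cuda.Context.synchronize()
    elapsed = time.perf_counter() - t0
    allocated += CHUNK
    print(f"{allocated / 2**30:5.2f} GiB allocated: ~{CHUNK / 2 / elapsed / 1e9:.0f} GB/s copy rate")
```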

If you are to be truly honest, that's the Nvidia team peddling BS. You could bend over backwards and accept their logic, but if you do so then the Fury X is the absolute best card currently on the market. You just have to reduce your sample size to a few hand-picked titles at 4K, where the 980 Ti is beaten.


What we are arguing is to have a consistent standard for judgement. Right now, you've got two options. Either both companies peddle whatever BS will move cards, or both companies are 100% honest because they can find at least one instance where their claims are true.

Nvidia, AMD, and Intel all say whatever they need to to move hardware. This is why being an early adopter sucks so hard. If you don't wait for reviews, you'll always be disappointed. Stating that one manufacturer, or another, is uniformly better is stupid. Neither is better than the other, only their currently offered products are better or worse when measured to one another.


Edit:
If the 3.5 GB memory thing is still an impasse, maybe you should review an article from a year ago that is surprisingly still accurate today: streamcomputing.eu/blog/2014-08-05/7-things-nvidia-doesnt-want-know/.

That's right, people doing actual coding work with OpenCL and CUDA are calling Nvidia out on its BS. It kinda seems like the people actually using GPUs for stuff other than gaming recognize that AMD may not be doing well, but that it's because of their marketing and not actual performance. If AMD's marketing was half as slimy as Nvidia's, they'd be claiming the Fury X cured cancer, because it can be used for BOINC and the like.
Posted on Reply
#60
yogurt_21
a_umpWhere do people get these 30% overclocks? i mean i've had 2x 8800GT's, a 8800GTX, a HD 4870, GTS 250, HD 5770, and now a GTX 560. Best overclock i've ever sustained stable 24/7 was on my HD 5770 where i got it from 800mhz to 910mhz core. So 13% is my best and most my oc's are around 7%-10% at best. Must have rotten luck.
I took 2 GTX 480's from 700 core to 882 that's a 26% overclock on air. Had a 2900 XT go from 743 core to 985 core that's a 32.5% overclock(granted that was on water). My X1800XT went from 625 core to 780 24.8% (also on water) Even my old X700 pro went from 425 to 515, a 21% clock (granted that was pencil modded) and a Radeon 9000 that went from 200 to 250, a 25% clock and that didn't even hit pro levels as that was 275 core and would have been a 37.5% overclock.

Perhaps you're buying pre-clocked cards because 30% doesn't seem rare to me at all. Now 50% that's rare.

In fact the only cards that haven't at least hit 20% were my 9800 pro 256MB, an X1950XT, a 9600GT, and a GTX 295 FTW edition. Aside from the 9800 pro all the rest were preclocked by the manufacturer and you can't expect a 20% overclock on top of an existing overclock.
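For anyone double-checking those numbers, it's just the relative clock increase; a throwaway Python helper (hypothetical, obviously) does the math:

```python
def oc_percent(stock_mhz, oc_mhz):
    """Overclock headroom as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(round(oc_percent(700, 882), 1))   # GTX 480 example -> 26.0
print(round(oc_percent(743, 985), 1))   # 2900 XT example -> 32.6
```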
Posted on Reply
#61
tabascosauz
lilhasselhofferYou sir, are either a willful idiot or a fanboy of the highest caliber.

Nvidia itself basically said that they sold a 3 GB card, with a memory structure whose last bits were only designed such that "...GTX 970 is a 4GB card. However, the upper 512MB of the additional 1GB is segmented and has reduced bandwidth. This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment..." That article that quote comes from can be found here: www.gamespot.com/articles/nvidia-boss-responds-to-gtx-970-false-advertising-/1100-6425510/

If you are to be truly honest, that's the Nvidia team peddling BS. You could bend over backwards and accept their logic, but if you do so then the Fury Xis the absolute best card currently on the market. You just have to reduce your sample size to a few hand picked titles at 4K, where the 980ti is beaten.


What we are arguing is to have a consistent standard for judgement. Right now, you've got two options. Either both companies peddle whatever BS will move cards, or both companies are 100% honest because they can find at least one instance where their claims are true.

Nvidia, AMD, and Intel all say whatever they need to to move hardware. This is why being an early adopter sucks so hard. If you don't wait for reviews, you'll always be disappointed. Stating that one manufacturer, or another, is uniformly better is stupid. Neither is better than the other, only their currently offered products are better or worse when measured to one another.


Edit:
If the 3.5 GB memory thing is still an impasse, maybe you should review an article from a year ago, that is surprisingly still accurate today streamcomputing.eu/blog/2014-08-05/7-things-nvidia-doesnt-want-know/.

That's right, people doing actual coding work with OpenCL and CUDA are calling Nvidia on BS. Kinda seems like the people actually using GPUs for stuff other than gaming recognize that AMD may not be doing well, but it's because of their marketing and not actual performance. If the AMD marketing was half as slimy as Nvidia they'd be claiming Fury X cured cancer, because it can be used for BOINC and the like.
There is hardly any place for ethics in business, but Nvidia has a long history of taking it to new lows. Selling a 4GB card that isn't exactly 4GB of the expected GDDR5 is beside the point; the issue is that Nvidia vigorously defended the fact that technically you still get "4GB of VRAM" without acknowledging the issue. I don't, and a lot of people don't, give a singular shit about how Nvidia underestimated the effects of employing that kind of crossbar design to achieve that core config. "Oh, it was the best that could be done under the circumstances and we know that we should've been more transparent about it." They knew exactly what they were doing and tried to keep a low profile about it. How anyone can just accept this sad excuse of an explanation and still have the same amount of confidence in Nvidia products is simply beyond me.

This could go on and on but we could be here forever, making a list of all the times Nvidia has been dirty. I just hope the time comes for Nvidia to pay for its business practices; hell, Intel waded through a load of shit for one bit of controversy surrounding OEMs.

Whoever said earlier that the R9 390 is a bullshit product has left me scratching my head. In what way does it not live up to the product it's marketed as? It sure as hell gives the GTX 970 a run for its money, even without any mention of the VRAM. What, do you not have a proper PSU?

Also, someone doesn't seem to understand the importance of binning. Binning is not for OCing alone. In a diminutive card like the Nano, where every bit of the heatsink matters, you don't want some garbage quality chip that takes absurd amounts of voltage to hit the boost clocks you want. 30% overclocks are not that rare in the light of big Maxwell, but suggesting that Nano should've been built around that goal is insanely ridiculous.
Posted on Reply
#62
Sony Xperia S
yogurt_21I took 2 GTX 480's from 700 core to 882 that's a 26% overclock on air. Had a 2900 XT go from 743 core to 985 core that's a 32.5% overclock(granted that was on water). My X1800XT went from 625 core to 780 24.8% (also on water) Even my old X700 pro went from 425 to 515, a 21% clock (granted that was pencil modded) and a Radeon 9000 that went from 200 to 250, a 25% clock and that didn't even hit pro levels as that was 275 core and would have been a 37.5% overclock.

Perhaps you're buying pre-clocked cards because 30% doesn't seem rare to me at all. Now 50% that's rare.

In fact the only cards that haven't at least hit 20% were my 9800 pro 256MB, an X1950XT, a 9600GT, and a GTX 295 FTW edition. Aside from the 9800 pro all the rest were preclocked by the manufacturer and you can't expect a 20% overclock on top of an existing overclock.
I have also never experienced any noticeable overclocks on any of my video cards. Must be poor luck as well.

I don't know how you achieve that - must be something intentional, no? Like cherry-picking cards plus water?
Posted on Reply
#63
HisDivineOrder
So in theory this card should cost more than the Fury X because its chips are specially binned to hit a lower power, and it'll have a fully uncut chip...

I just don't think a Nano that costs the same as a 980 Ti is going to do very well because the performance, with the throttling, will probably hit more around the Fury non-X...

This product seems so niche it's silly. Why can't AMD just release a Fury X without the water cooler and with custom boards?
Posted on Reply
#64
newtekie1
Semi-Retired Folder
uuuaaaaaaMaybe it needed more room to breathe. Which cards did you have?
Sapphire 290X Tri-Xs. They were in my Z97 Extreme6, so the top card had a nice gap to breathe. It still hit 94°C and started to throttle after about half an hour of gaming.
Posted on Reply
#65
Sony Xperia S
HisDivineOrderSo in theory this card should cost more than the Fury X because it's specially binned chips to hit a lower power, it'll have a fully uncut chip...

I just don't think a Nano that costs the same as a 980 Ti is going to do very well because the performance, with the throttling, will probably hit more around the Fury non-X...

This product seems so niche it's silly. Why can't AMD just release a Fury X without the water cooler and with custom boards?
Let's make these things clear:

- The R9 Nano will be a faster product than the R9 290X, but not fast enough to threaten any Fury;
- The R9 Nano won't cost more than $630; actually we expect a price tag of around $450;
- It won't be niche - it should set the new standard, paving the way for new generations of small cards;
- The Fury X doesn't need custom boards - you won't achieve anything if you are seeking Guinness-record clock heights. It's best with water, so please let it stay with WATER!
Posted on Reply
#66
FordGT90Concept
"I go fast!1!11!1!"
HisDivineOrderI just don't think a Nano that costs the same as a 980 Ti is going to do very well because the performance, with the throttling, will probably hit more around the Fury non-X...

This product seems so niche it's silly. Why can't AMD just release a Fury X without the water cooler and with custom boards?
That's what concerns me. It'll have performance between 290X and 390X but I'm increasingly thinking that it will be priced above the 390X--maybe closer to Fury. If that's the case, might as well go with 390X. Nano would only be appealing for SFF builds.

The price should be announced tomorrow.
Posted on Reply
#67
EarthDog
btarunrI don't think with that VRM, its throttling can be relaxed enough to match Fury X performance.
Sony Xperia S- R9 Nano will be faster product than R9 290X but not that fast to threaten any Fury;
- R9 Nano won't cost more than 630$, actually we expect price tag in line of around 450$;
You'd be surprised, I would imagine...
Posted on Reply
#68
GhostRyder
FordGT90ConceptThat's what concerns me. It'll have performance between 290X and 390X but I'm increasingly thinking that it will be priced above the 390X--maybe closer to Fury. If that's the case, might as well go with 390X. Nano would only be appealing for SFF builds.

The price should be announced tomorrow.
I agree, with this announcement the pricing is a concern, along with (at least for me) the cooler on the card. It looks like it could handle it, but I am still curious to see it in action, since they are restricting this to reference designs only as far as I can tell (which leads me to believe it will be more than adequate).
Posted on Reply
#69
SonicZap
I'm fairly certain that the price will be high; they still have problems getting enough Fury GPUs out to sell, and another product using the same die won't help at all. They'll be grabbing money from the people who build small form factor systems; for most use cases the R9 390 and 390X will be the better choice. If something else happens, I'm going to be surprised.
Posted on Reply
#70
the54thvoid
Super Intoxicated Moderator
HisDivineOrderWhy can't AMD just release a Fury X without the water cooler and with custom boards?
You and the world want to know. To keep it a halo product? To ensure no QA problems with the chip? To ensure thermal design envelopes?
W1zzard has tested the voltage and found that it doesn't respond very well to overvolting, so perhaps custom cards are unnecessary. But it seems the Nano may take the opposite approach: limit voltage input to keep the TDP down tight, allowing the compact cooler.
I think Fiji in general is too immature to be tested properly. It's like when the initial Tahiti (7970) came out with conservative clocks. AMD may have been cagey to prevent chip problems but, with hindsight, released the 'GHz Editions' that rivalled the GTX 680.
Fiji came close to nailing it (but in PR terms, still so far away) so I think the next chip or a respin might make a huge difference.
We know DX12 will make a difference in AMD's favour so it'll be interesting next year with AMD's HBM experience and Nvidia getting their effort out.
As for Nano, we'll see tomorrow.
Posted on Reply
#71
lilhasselhoffer
Sony Xperia SLet's make these things clear:

- R9 Nano will be faster product than R9 290X but not that fast to threaten any Fury;
- R9 Nano won't cost more than 630$, actually we expect price tag in line of around 450$;
- It won't be niche - it should be the new standard or paving the way for new generations of small cards;
- Fury X doesn't need custom boards - you won't achieve anything if you are seeking for guinness record clock heights. It's the best with water and let it please stay with WATER !
I think we're on the same side, and you still make me want to reevaluate my thought process.
1) Faster than a product that is one generation old, and functionally a twice baked 7970. Kinda depressing really.
2) Price tag pulled straight from your backside. We don't expect anything, you have expectations. My only expectation is that the price tag will be north of what I would spend on the third time around for this process node. Of course, that's not a definitive number. You seem to be measuring only by "less than a fury," but "more than a 390." Seems like you've got a real winner there, with an almost $300 wide window.
3) Do you understand what niche means? The niche application for this is either an HTPC or another SFF computer, where money seems to be no object. Most people have a budget, which means the same performance could be had cheaper at the cost of space. Given that gamers generally have spacious cases, budgetary restrictions, the desire for raw performance, or some combination thereof, you've got a niche product. Being king of a niche isn't bad, but believing that reigning over a niche makes you a success is stupid. Blackberry ruled the niche of work smartphones, but died because their niche was too shallow.
4) Derp. Just plain derp. I'm sorry, but if you're spending that kind of money on a card, you should be able to cool it however you want. I'd be happy with Asus, MSI, Gigabyte, or another partner coming forward with a card costing 10% more ($63 on a $630 card is exactly that) that supported excellent overclocking on air but required three slots. I'd be happy with someone releasing an underclocked "efficiency" version of the card that was cheaper because they cut some corners. What makes some people angry is AMD putting their foot down, especially for a product that might not be fantastic (coil whine on the pumps?) for the huge price. Cards sell better when there are options. Options are predicated on being able to choose performance targets and design to them. Saying "Fury is meant to be under water" is like saying Fury isn't meant to run, so break its knees. It already has a perfectly fine wheelchair it can get around in.



Seriously, stop trying to talk for everyone. Every time you do it makes me angry, because you assume we agree with your points. If you think something, so be it. If you tell me I think something, you'd better be prepared to retract your points when they are demonstrably unfounded.
Posted on Reply
#72
Sony Xperia S
It is silly on your part to prefer air rather than water. You know that lower temperatures have positive effects - they lower power consumption and increase lifespan.

But these things are mysteries for you.

Probably you are typing just to argue with someone who has better points than you.
I got used to your points and honestly - I am sick of them and want something better.
Posted on Reply
#73
EarthDog
Sony Xperia SIt is silly from your side to prefer air rather than water. You know that lower temperatures have positive effects - they lower power consumption and increase life time.
While true (but by negligible amounts)... does it really matter? There are plenty of air-cooled cards that last through their useful life... in fact, nearly all of them do. So how is that a selling point? People will want to junk the Fury X in 3-4 years like any other card, be it water or air.

You also have to think a bit more. Even if it saves a couple of watts (which is being generous, actually) - the difference between, say, 1.2 V @ 85°C and 1.2 V @ 65°C, which is where water would take it - there really isn't much savings in power, is there ((NO))? With that said, wouldn't the pump, which in most cases uses more watts than a fan, negate those negligible gains in power savings ((YES))?
Posted on Reply
#74
newtekie1
Semi-Retired Folder
Sony Xperia SIt is silly from your side to prefer air rather than water. You know that lower temperatures have positive effects - they lower power consumption and increase life time.
Lower temps have a barely measurable effect on power consumption, and nothing that would matter. They also won't extend the useful life of a graphics card. If those are your reasons for using liquid cooling then you are doing it for the wrong reasons.

There are plenty of negatives to liquid cooling too, so it is far from silly to prefer air. I water cooled my systems for years, but went back to air to avoid the hassle. Now I run an AIO because they largely eliminate most of the hassle, but they still aren't as nice and easy as air cooling.
Posted on Reply
#75
uuuaaaaaa
newtekie1Sapphire 290X Tri-X's. They were in my Z97 Extreme6, so the top card had a nice gap to breath. It still hit 94°C and started to throttle after about half an hour of gaming.
Maybe the pasting job was not so good; Hawaii runs hot, but that should not happen with the Tri-X cooler...
Posted on Reply