Wednesday, June 17th 2015

AMD Radeon R9 Nano to Feature a Single PCIe Power Connector

AMD's Radeon R9 Nano is shaping up to be a more important card for AMD than even its flagship, the R9 Fury X. Some of the first pictures of the Fury X led us to believe that it could stay compact only because it's liquid-cooled. AMD disproved that notion by unveiling the Radeon R9 Nano, an extremely compact air-cooled graphics card with some stunning chops.

The Radeon R9 Nano is a feat similar to Intel's NUC - engineering a product that's surprisingly powerful for its size. The card is 6 inches long, two slots thick, and doesn't lug along an external radiator. AMD CEO Lisa Su, speaking at the company's E3 conference, stated that the R9 Nano will be faster than the Radeon R9 290X. That shouldn't surprise us, since it's a bigger chip; but it's the electrical specs that make this product exciting - a single 8-pin PCIe power input, with typical board power rated at 175 W (the Radeon R9 290X was rated at 275 W). The card itself is as compact as some of the "ITX-friendly" custom-design boards launched in recent times. It uses a vapor-chamber-based air-cooling solution with a single fan. The Radeon R9 Nano will launch later this summer, and could compete with the GeForce GTX 970 in both performance and price.
Source: VideoCardz

88 Comments on AMD Radeon R9 Nano to Feature a Single PCIe Power Connector

#51
mirakul
rruffPlease explain how fast vram can compensate for its small size when a game is calling for >2GB of data for one frame.
In most cases, a proper driver optimization could fix that problem, said Macri, AMD CTO.

“If you actually look at frame buffers and how efficient they are and how efficient the drivers are at managing capacities across the resolutions, you’ll find that there’s a lot that can be done. We do not see 4GB as a limitation that would cause performance bottlenecks. We just need to do a better job managing the capacities. We were getting free capacity, because with [GDDR5] in order to get more bandwidth we needed to make the memory system wider, so the capacities were increasing. As engineers, we always focus on where the bottleneck is. If you’re getting capacity, you don’t put as much effort into better utilising that capacity. 4GB is more than sufficient. We’ve had to go do a little bit of investment in order to better utilise the frame buffer, but we’re not really seeing a frame buffer capacity [problem]. You’ll be blown away by how much [capacity] is wasted.”

Read more: wccftech.com/amd-addresses-capacity-limitation-concern-hbm/
Posted on Reply
#52
btarunr
Editor & Senior Moderator
With Fiji, AMD too has a new lossless texture compression mojo, just like NVIDIA.
HumanSmokeThat's probably another one of those "facts" you found via your magic ass, right?
Nah, finance.google.com .
Posted on Reply
#53
HumanSmoke
btarunrNah, finance.google.com .
Awesome. Care to share a link?
GPU R&D costs tend to be a rare information commodity as a general rule. The last comprehensive costing for a single large GPU I've seen was the $475 million in R&D Nvidia spent getting the G80 to market.
Posted on Reply
#54
btarunr
Editor & Senior Moderator
HumanSmokeAwesome. Care to share a link.
GPU R&D costs tend to be a rare information commodity as a general rule. Last comprehensive costing for a single large GPU I've seen was the $475million in R&D Nvidia spent getting the G80 to market.
No. I meant to say that a company with 1/8th the monies could catch up with NVIDIA in less than a year.
Posted on Reply
#55
rruff
mirakulIn most cases, a proper driver optimization could fix that problem, said Macri, AMD CTO.
Thanks for the info! I was under the impression that the game controlled vram utilization, not the driver.
Posted on Reply
#56
rruff
btarunrWith Fiji, AMD too has a new lossless texture compression mojo, just like NVIDIA.
I thought that only improved bandwidth, not capacity? At least I don't recall it ever being mentioned as capacity enhancing. Seems like that would have come out during the 970 debacle.
Posted on Reply
#57
geon2k2
So the frame-buffer for FHD is what, 1920*1080*4 (32-bit) * 2 (2 frames: 1 displayed, 1 being worked on) = 16,588,800 bytes / 1024 / 1024 = 15.82 MB

OMG !!! How are we going to fit 16 MB into 4096 MB?

With 4K there will be an even bigger problem. We will need 16 MB * 4 = 64 MB. Disaster! We are DOOMED!

No this will never work, they should stop this inception before it destroys everything.

Oh well ... there is hope in this world though. We have the mighty iGPUs which can run FHD even with their tiny shared frame-buffers.
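For anyone who wants to check the arithmetic above, here it is as a few lines of Python (just a sketch; `framebuffer_mb` is an illustrative helper, not any real API):

```python
# Bytes for a double-buffered framebuffer at 4 bytes (32 bits) per pixel,
# converted to MB (1024 * 1024 bytes).
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=2):
    return width * height * bytes_per_pixel * buffers / (1024 * 1024)

print(round(framebuffer_mb(1920, 1080), 2))  # FHD: 15.82 MB
print(round(framebuffer_mb(3840, 2160), 2))  # 4K: 63.28 MB
```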
Posted on Reply
#58
AsRock
TPU addict
newtekie1Except HBM is the only reason this card can be this small, and the only reason it is capable of the performance.



No, that is an arrogant statement. The memory issue with the 970 doesn't much matter when it is kicking the crap out of everything AMD puts out, even with their 512-bit 4GB, and SLI 970s have been praised as the best bang for the buck for 4k for a good long while.

His statement was accurate, not arrogant. We are all getting excited over AMD finally doing something that nVidia did 9 months ago. And AMD has the advantage of HBM saving them an insane amount of PCB space.
But still, I found it funny, however it sounded, as nearly every other post is just unverified facts or the same crap going in circles.
Posted on Reply
#59
Ruru
S.T.A.R.S.
No need to think anymore about what GFX card I'll be getting for my new build when Skylake rolls out.. :rolleyes:
Posted on Reply
#60
HumanSmoke
btarunrNo. I meant to say that a company with 1/8th the monies could catch up with NVIDIA in less than a year.
It's a nice achievement, but let's not get carried away. Fiji looks to be a doubled-up Tonga minus the GDDR5 MCs and interface (the lack of HDMI 2.0 and FL 12_1 support, and the added DCC, tend to make Fiji look like it is reusing Tonga's logic blocks). It is also only a single GPU, to which I guess you can add Iceland at the bottom end of the market. The company still doesn't seem to have tackled the mainstream/performance segment with a GPU that can pull double duty as enthusiast mobile, and they are still fielding GPUs in the current/future lineup that lack TrueAudio and FreeSync support - AMD's principal broad-based marketing focus.
Posted on Reply
#61
Yorgos
ChaitanyaSince its AMD, I am going to be skeptical about all the claims until proven by a reliable 3rd party authority.
nVidia delivers what it claims... 4 GB.
Posted on Reply
#62
Brusfantomet
rruffI thought that only improved bandwidth, not capacity? At least I don't recall it ever being mentioned as capacity enhancing. Seems like that would have come out during the 970 debacle.
Unless they have some processor in the memory chips to decompress the textures there, the increase in effective bandwidth is matched by the same increase in effective RAM capacity.

Let's say the textures for a game take 2000 MB; transferring this over a bus at 200 GB/s takes 10 ms (2 GB / 200 GB/s = 0.01 s), and it takes 2000 MB of RAM. If the data is compressed by 25%, it now takes 1500 MB, meaning that over the same bus it takes 1.5 GB / 200 GB/s = 0.0075 s = 7.5 ms. And if you do not unpack the textures in memory, the 2000 MB of textures now only takes 1500 MB of RAM.
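The same trade-off as a quick Python sketch (`transfer_ms` is just an illustrative helper): a compression ratio that survives in VRAM cuts transfer time and capacity use by the same factor.

```python
# Time to move a payload over a bus, in milliseconds.
def transfer_ms(size_gb, bandwidth_gb_per_s):
    return size_gb * 1000 / bandwidth_gb_per_s

uncompressed_gb = 2.0                    # 2000 MB of textures
compressed_gb = uncompressed_gb * 0.75   # compressed by 25%

print(transfer_ms(uncompressed_gb, 200))  # 10.0 ms
print(transfer_ms(compressed_gb, 200))    # 7.5 ms
```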
Posted on Reply
#63
Assimilator
btarunrAnd it did that with 1/8th the budget.
Consumers don't care about budgets, they care about bang for buck and performance/watt. The latter is something AMD has been lacking for so long, hence why they only have 1/8th the budget to work with. Not to mention that nVIDIA already has working samples of Pascal, while AMD is probably going to pull another Hawaii and recycle Fury for the next 3 generations.
mirakulNano has 4 GB of HBM. 970 has 3.5 GB of GDDR5.
And you said "reach parity"? Oh well...
Fact: GTX 970 has 4GB GDDR5.
Fact: GTX 970 can address 4GB GDDR5.
Fact: you are a troll, and a poor one at that.
Posted on Reply
#64
mirakul
geon2k2So the frame-buffer for FHD is what 1920*1080*4 (32 bit) *2 (2 frames, 1 displayed, 1 working on)/ = 16588800/1024/1024 = 15.82 MB

OMG !!! How are we going to fit 16 MB into 4096 MB?

With 4k there will be even a bigger problem. We will need 16 MB * 4 = 64 MB. Disaster ! We are DOOMED !

No this will never work, they should stop this inception before it destroys everything.

Oh well ... there is hope in this world though. We have the mighty iGPUs which can run FHD even with their tiny shared frame-buffers.
It's not as simple as you think.

Some stupid games tend to store a lot of textures in VRAM (Sh* of Mordor, for example) and eventually ask for a very high amount of VRAM. The AA process adds more burden to the capacity as well.

However, this could be fixed with a proper driver, as said by Macri from AMD. He also stated that it's the higher bandwidth of HBM that makes the process possible.
AssimilatorConsumers don't care about budgets, they care about bang for buck and performance/watt. The latter is something AMD has been lacking for so long, hence why they only have 1/8th the budget to work with. Not to mention that nVIDIA already has working samples of Pascal, while AMD is probably going to pull another Hawaii and recycle Fury for the next 3 generations.



Fact: GTX 970 has 4GB GDDR5.
Fact: GTX 970 can address 4GB GDDR5.
Fact: you are a troll, and a poor one at that.
Fact: Pascal won't be available until HBM2
Fact: AMD spent that budget to give consumers new tech, not new Gimmworks sh*t.
Fact: If GTX970 could address more than 3.5GB with same bandwidth, nVidia CEO would be awarded the Nobel prize
Posted on Reply
#66
deemon
AssimilatorSo it's as fast as GTX 970 and uses the same amount of power. And it's about the size of the ITX GeForce 970. So... what's newsworthy, that AMD has "only" taken a year to reach parity with nVIDIA?
Also, the Nano seems to be a bit shorter than the shortest 970:
Nano: www.techpowerup.com/img/15-06-17/170d.jpg
ASUS 970: images.bit-tech.net/content_images/2015/01/asus-geforce-gtx-970-directcu-mini-review/970dcm-3b.jpg
And this alone is quite significant for ITX builds!
Can't wait now for 3rd-party performance and thermal testing results vs the 970 to see if it fully qualifies as an ITX card.
(And didn't AMD also support 10-bit and 12-bit colors (per channel) on consumer cards, whereas NVIDIA supports that only on Quadro/FirePro, and GTX cards are limited to 8-bit?)
Posted on Reply
#67
HTC
I'm waiting for a version of nano with no PCIe connector. If it happens, it will probably be next year: we'll see ...
Posted on Reply
#68
deemon
HTCI'm waiting for a version of nano with no PCIe connector. If it happens, it will probably be next year: we'll see ...
with what then? USB? TB?
Posted on Reply
#69
ZeDestructor
HTCI'm waiting for a version of nano with no PCIe connector. If it happens, it will probably be next year: we'll see ...
Never gonna happen. If there's more budget, they make a bigger, similarly power-hungry core with more performance. For an example of what I mean, compare the 750Ti (no PCIe power, 75W max) to the 960 (one 6pin, 150W max), the 970 (2 6pin or 1 8pin, 225W max) and 980 (the 960 is basically a 980 chopped in 2, also 2 6pin or 1 8pin, 225W max) to see the performance tiers at various power brackets.
Posted on Reply
#70
HTC
ZeDestructorNever gonna happen. If there's more budget, they make a bigger, similarly power-hungry core with more performance. For an example of what I mean, compare the 750Ti (no PCIe power, 75W max) to the 960 (one 6pin, 150W max), the 970 (2 6pin or 1 8pin, 225W max) and 980 (the 960 is basically a 980 chopped in 2, also 2 6pin or 1 8pin, 225W max) to see the performance tiers at various power brackets.
Who says I'm looking for performance?

I'm looking for a scaled-down version of this with also scaled-down levels of performance, while maintaining the HBM memory type. Dunno if HBM can come with only 2 GB of memory: 4 would be nice, but if it's only 2 it would still be OK for me.
Posted on Reply
#71
ZeDestructor
HTCWho say's i'm looking for performance?

I'm looking for a scaled down version of this with also scaled down levels of performance while maintaining the HBM memory type. Dunno if HBM can be with 2 GB of memory only: 4 would be nice but if it's only 2 it would still be OK for me.
You don't need HBM to feed a lower power GPU, so no, you're not getting a scaled down card for a while. You'll see DDR4/GDDR5 still for low power cards because it's sufficient.
Posted on Reply
#72
HTC
ZeDestructorYou don't need HBM to feed a lower power GPU, so no, you're not getting a scaled down card for a while. You'll see DDR4/GDDR5 still for low power cards because it's sufficient.
But since the power savings come from the HBM versions of these cards (Nano and Fury (X)), it stands to reason that a version of these with GDDR5 would be far more power hungry than a version with HBM.

It's also the HBM that makes it possible for the card to be this small: it may even be possible for a card with around the performance I'm looking for to be smaller still, I'm guessing.
Posted on Reply
#73
ZeDestructor
HTCBut since the power savings come from HBM versions of these cards (nano and fury(X)), it stands to reason that a version of these with GDDR5 would be far more power hungry then a version with HBM.

It's also the HBM that makes it possible for the card to be this small: it may be even possible for a card with around the performance i'm looking for to be even smaller, i'm guessing.
Yes and no. A certain amount is the engineering time involved, as well as the price of the RAM itself - HBM is expensive relative to DDR. The combination of these factors is why you won't see HBM in a 75W desktop card for a while.
Posted on Reply
#74
HTC
ZeDestructorYes and no.. A certain amount is the engineering time involved, as well as the price of the RAM itself - HBM is expensive relative to DDR. The result of the combination of these factors is why you won't see HBM in a 75W desktop card for a while.
Makes sense.

Still, it all comes down to how much is saved on "real estate" vs how much is spent on the more costly HBM RAM.
Posted on Reply
#75
Aquinus
Resident Wat-man
geon2k2So the frame-buffer for FHD is what 1920*1080*4 (32 bit) *2 (2 frames, 1 displayed, 1 working on)/ = 16588800/1024/1024 = 15.82 MB

OMG !!! How are we going to fit 16 MB into 4096 MB?

With 4k there will be even a bigger problem. We will need 16 MB * 4 = 64 MB. Disaster ! We are DOOMED !

No this will never work, they should stop this inception before it destroys everything.

Oh well ... there is hope in this world though. We have the mighty iGPUs which can run FHD even with their tiny shared frame-buffers.
Dude, this isn't the 2D days, when your frame buffer was a color lookup palette and 3D was drawing in 2D space. You have the output frame buffer for the frames to be displayed, but you also need the textures for those objects, the vertices for drawing the world, the references that attach textures to polygons (triangles), light sources and their data, and camera positioning. Needless to say, your oversimplification of the hardware and how things work is astonishing and disturbing.

GPUs do a lot more than display whatever is in the frame buffer. :slap:
Posted on Reply