Monday, January 4th 2016

AMD Demonstrates Revolutionary 14 nm FinFET Polaris GPU Architecture

AMD provided customers with a glimpse of its upcoming 2016 Polaris GPU architecture, highlighting a wide range of significant architectural improvements, including HDR monitor support and industry-leading performance per watt. AMD expects shipments of Polaris architecture-based GPUs to begin in mid-2016.

AMD's Polaris architecture-based 14nm FinFET GPUs deliver a remarkable generational jump in power efficiency. Polaris-based GPUs are designed for fluid frame rates in graphics, gaming, VR and multimedia applications running on compelling small form-factor thin and light computer designs.

"Our new Polaris architecture showcases significant advances in performance, power efficiency and features," said Lisa Su, president and CEO, AMD. "2016 will be a very exciting year for Radeon fans driven by our Polaris architecture, Radeon Software Crimson Edition and a host of other innovations in the pipeline from our Radeon Technologies Group."

The Polaris architecture features AMD's 4th-generation Graphics Core Next (GCN) architecture, a next-generation display engine with support for HDMI 2.0a and DisplayPort 1.3, and next-generation multimedia features including 4K H.265 encoding and decoding.


AMD has an established track record for dramatically increasing the energy efficiency of its mobile processors, targeting a 25x improvement by the year 2020.

88 Comments on AMD Demonstrates Revolutionary 14 nm FinFET Polaris GPU Architecture

#26
RejZoR
On the anti-aliasing, I disagree. Shading techniques used in games like Doom 3 or FEAR do mask jaggy edges quite a bit, but you'll still notice them heavily on thin elements like wire fences, railings, tree branches, electrical wires hanging in the air, and first-person held guns (because they are static and not part of the environment). Things like this drive me insane; I hate seeing stupid jaggies moving around.

I'd personally rather use FXAA or MLAA and lose a tiny bit of sharpness to get smoothed edges than have 100% sharp textures and jaggy edges. Games use heavy post-processing anyway, so even without FXAA, textures will feel blurry. And in most cases FXAA/MLAA filter edges like a 24x FSAA mode. Not in all conditions, but most of the time, and that's great, especially since they barely affect framerate.
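For anyone wondering how FXAA/MLAA get away with so little performance cost: they work purely on the finished frame. Here's a highly simplified sketch of the idea in Python with numpy (not the actual FXAA or MLAA algorithms; the function name and threshold are made up for illustration):

```python
import numpy as np

def simple_post_aa(img, threshold=0.1):
    """Toy post-process AA: blend pixels that sit on high-contrast luma edges.

    img: float RGB array in [0, 1], shape (H, W, 3).
    Only an illustration of the general idea behind FXAA/MLAA.
    """
    # Approximate perceived luminance of every pixel.
    luma = img @ np.array([0.299, 0.587, 0.114], dtype=img.dtype)

    # Local contrast: range of the centre pixel and its 4 axis-aligned neighbours.
    pad = np.pad(luma, 1, mode='edge')
    n, s = pad[:-2, 1:-1], pad[2:, 1:-1]
    w, e = pad[1:-1, :-2], pad[1:-1, 2:]
    contrast = (np.maximum.reduce([luma, n, s, w, e]) -
                np.minimum.reduce([luma, n, s, w, e]))

    # On edge pixels, blend the colour with the average of the neighbours.
    pad_rgb = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    neighbour_avg = (pad_rgb[:-2, 1:-1] + pad_rgb[2:, 1:-1] +
                     pad_rgb[1:-1, :-2] + pad_rgb[1:-1, 2:]) / 4.0
    edge = (contrast > threshold)[..., None]
    return np.where(edge, 0.5 * img + 0.5 * neighbour_avg, img)
```

The real shaders also estimate the edge direction and blend along it rather than in every direction, which is why they preserve more texture detail than this naive blur while staying so cheap.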
Posted on Reply
#27
lilhasselhoffer
I've got to ask a fundamental question here. Do more colors really matter?

Before we go on, I've got to qualify. Every color we perceive is a combination of wavelengths of light, and our three types of color-detecting cells respond over a relatively narrow range of energies. As such, the difference between any two colors (expressed in CIELAB L*, a*, b* coordinates) can be represented roughly as deltaE = sqrt((L2-L1)^2 + (a2-a1)^2 + (b2-b1)^2), where the deltaE has to be about 2.3 or greater for a human being to notice any difference in coloration. en.wikipedia.org/wiki/Color_difference

The longer explanation, with a bit of background and why the measurement is still subjective, can be found here: zschuessler.github.io/DeltaE/learn/
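That CIE76 formula is simple enough to try yourself. A minimal sketch in Python (the Lab values in the example are made up purely for illustration):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two CIELAB colours (L*, a*, b*).

    A deltaE below roughly 2.3 is generally considered a just-noticeable difference.
    """
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    return math.sqrt((L2 - L1) ** 2 + (a2 - a1) ** 2 + (b2 - b1) ** 2)

# Two very similar reds (hypothetical Lab values):
print(delta_e_cie76((54.0, 80.0, 67.0), (53.0, 79.0, 66.5)))  # 1.5 -- below the ~2.3 threshold
```

The newer formulas such as CIE94 and CIEDE2000 are considerably more involved; CIE76 is just the simplest starting point.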


Short answer though, is that at some point adding more colors does not produce appreciable differences. A few years back a professional troll decided to make the point that makeup was crap by asking a simple question: was there a difference between Revlon's "Red Reinvented" and "Cherry Desirable?" The short answer is that I couldn't tell, and without the color values I wouldn't have thought them any different. www.thebestpageintheuniverse.net/c.cgi?u=fashion


With respect to monitors, does going from 10-bit to 12-bit produce an appreciable difference? I can't honestly say that I know, but my experience points me to the conclusion that more monitor generally trumps more accurate colors. Heck, I can't recall a time when a slightly less blue purple would have been as much of a deal breaker as not having access to a relatively cheap 1920x1080 monitor. Personally, pixel count > refresh rate (assuming a 30 Hz minimum) > color fidelity. Maybe I'm backwards, but I'd prefer Polaris to push 4K before AMD started focusing on color depth.
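The raw numbers behind the bit-depth question are easy to work out; whether the extra steps are actually visible on a given panel is the subjective part. A quick back-of-the-envelope calculation:

```python
# Shades per channel and total representable colours at common bit depths.
for bits in (8, 10, 12):
    shades = 2 ** bits   # levels per R/G/B channel
    total = shades ** 3  # all channel combinations
    print(f"{bits}-bit: {shades} shades per channel, {total:,} colours")

# 8-bit: 256 shades per channel, 16,777,216 colours
# 10-bit: 1024 shades per channel, 1,073,741,824 colours
# 12-bit: 4096 shades per channel, 68,719,476,736 colours
```

Where the extra depth tends to show up is in smooth gradients (skies, fog), where 8-bit output can show visible banding; at 10 bits and beyond the steps between adjacent shades become small enough that most people stop noticing them.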
Posted on Reply
#28
PP Mguire
I personally prefer resolution/performance over color. I work on calibrated IPS monitors all day but when I play a game I don't really care about that crap.
Posted on Reply
#29
Xzibit
PP Mguire: I personally prefer resolution/performance over color. I work on calibrated IPS monitors all day but when I play a game I don't really care about that crap.
Neither does the game. It wouldn't matter whether you cared or not, even if your system were capable of it. Games will be the last to adopt such standards due to time/cost.

The reason more color (10-bit/12-bit) is being talked about is that it's already part of the 4K standards:

Broadcast TV
4K Blu-ray
TV manufacturers (slowest to adopt)

They all have their groups that have established the base of what is to come. The one thing all three haven't yet adopted as a standard, but which will likely be included in the future, is the luminance data.

When you take 4K-standard images and shrink them to 1440p or 1080p, they will look a lot better than a 1080p-standard image or movie, provided your system is capable, of course. I think both AMD VSR and Nvidia DSR have proven that for gamers.
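For what it's worth, the reason downscaling from 4K works so well is that 3840x2160 to 1920x1080 is exactly four source pixels per output pixel, so even a plain box filter averages four real samples into every pixel you see. A rough sketch, assuming a numpy image array (VSR and DSR use their own filters, so this is only the basic idea):

```python
import numpy as np

def box_downscale_2x(img):
    """Average each 2x2 block of pixels into one output pixel.

    img: array of shape (H, W, C) with H and W divisible by 2,
    e.g. a 2160x3840x3 frame downscaled to 1080x1920x3.
    """
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

frame_4k = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in for a rendered 4K frame
frame_1080p = box_downscale_2x(frame_4k)
print(frame_1080p.shape)  # (1080, 1920, 3)
```

Real scalers use fancier filters, but the averaging of several real samples per output pixel is where most of the benefit comes from.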
Posted on Reply
#30
lilhasselhoffer
Xzibit: Neither does the game. It wouldn't matter whether you cared or not, even if your system were capable of it. Games will be the last to adopt such standards due to time/cost.

The reason more color (10-bit/12-bit) is being talked about is that it's already part of the 4K standards:

Broadcast TV
4K Blu-ray
TV manufacturers (slowest to adopt)

They all have their groups that have established the base of what is to come. The one thing all three haven't yet adopted as a standard, but which will likely be included in the future, is the luminance data.

When you take 4K-standard images and shrink them to 1440p or 1080p, they will look a lot better than a 1080p-standard image or movie, provided your system is capable, of course. I think both AMD VSR and Nvidia DSR have proven that for gamers.
I was under the impression that super sampling was already doing this in the gaming space (though not to that large of a resolution difference). Am I mistaken?
Posted on Reply
#32
geon2k2
RejZoR: On the anti-aliasing, I disagree. Shading techniques used in games like Doom 3 or FEAR do mask jaggy edges quite a bit, but you'll still notice them heavily on thin elements like wire fences, railings, tree branches, electrical wires hanging in the air, and first-person held guns (because they are static and not part of the environment). Things like this drive me insane; I hate seeing stupid jaggies moving around.

I'd personally rather use FXAA or MLAA and lose a tiny bit of sharpness to get smoothed edges than have 100% sharp textures and jaggy edges. Games use heavy post-processing anyway, so even without FXAA, textures will feel blurry. And in most cases FXAA/MLAA filter edges like a 24x FSAA mode. Not in all conditions, but most of the time, and that's great, especially since they barely affect framerate.
I think we should agree to disagree, and be happy that we are all different.

A few years back I couldn't play DiRT 3 without anti-aliasing, because in the menu there were some floating boxes that had jaggies on all sides and were really annoying, but the exception doesn't make the rule. I honestly don't feel the benefit of AA while gaming. And I feel that with AA enabled there is a very slight input lag, even if the fps is pretty much the same and in general very high.
Posted on Reply
#33
Xzibit
lilhasselhoffer: I was under the impression that super sampling was already doing this in the gaming space (though not to that large of a resolution difference). Am I mistaken?
SS doesn't & cant inject information that would be kept if a 4k SD image would be down sampled.
Posted on Reply
#34
Xzibit
CES 2016: AMD Shows Polaris Architecture Demo

CES 2016: AMD FreeSync working over HDMI

CES 2016: AMD Talks Polaris GPU and HDR Monitors (HDR support coming to 300 series)

CES 2016: AMD Talks Bringing HDMI Support to FreeSync (HDMI FreeSync monitor availability starting Q1 2016)
Posted on Reply
#35
FordGT90Concept
"I go fast!1!11!1!"
They said they're supporting GDDR5 and HBM. I wonder if the same silicon will do both or are they making a lower-end silicon for GDDR5 (99% sure they're separate).

I still don't get the point of FreeSync on HDMI. If you want to use FreeSync you should be buying a DisplayPort monitor. No tech in the HDMI ecosystem (except Radeon cards) will support FreeSync over HDMI. HDMI, the standard, doesn't officially support adaptive sync where DisplayPort does. I doubt the HDMI standard will ever add adaptive sync because, excepting consoles, none of the home theater equipment should fall below prescribed framerate.

I want HDR now!
Posted on Reply
#36
Xzibit
FordGT90Concept: They said they're supporting GDDR5 and HBM. I wonder if the same silicon will do both or are they making a lower-end silicon for GDDR5 (99% sure they're separate).

I still don't get the point of FreeSync on HDMI. If you want to use FreeSync you should be buying a DisplayPort monitor. No tech in the HDMI ecosystem (except Radeon cards) will support FreeSync over HDMI. HDMI, the standard, doesn't officially support adaptive sync where DisplayPort does. I doubt the HDMI standard will ever add adaptive sync because, excepting consoles, none of the home theater equipment should fall below prescribed framerate.

I want HDR now!
I suspect HDMI will be cost-efficient for lower-end panels: value series that stick to an entry-level VRR range of 35-60 Hz.

Radeon Technologies Group Real-Time High Dynamic Range Demo
Posted on Reply
#37
FordGT90Concept
"I go fast!1!11!1!"
You'd think DisplayPort would be cheaper to implement partly because the royalties are much lower (like $0.20 versus $1 per port).

Yeah, HDR looks like what screens should look like. Right now, looking at my task bar, it should be pitch black but it isn't, because my monitor is incapable of showing the white of the open browser at the same time as the black of the task bar.
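That effect is just the static contrast ratio at work; a quick illustration with hypothetical numbers:

```python
# A panel's black level is tied to its peak white by the static contrast ratio,
# which is why a bright window lifts the "black" taskbar next to it.
peak_white_nits = 300.0   # hypothetical SDR monitor driven to full brightness
static_contrast = 1000.0  # assumed typical IPS-class contrast ratio
black_level = peak_white_nits / static_contrast
print(f"Black level with white on screen: {black_level:.2f} nits")  # 0.30 nits, visibly grey in a dark room
```

HDR displays attack exactly this: with local dimming or per-pixel emission they can keep the taskbar near true black while the browser window stays bright.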
Posted on Reply
#38
arbiter
I think AMD should compare to their own cards rather than NVIDIA's. Comparing their card to a 950, even if AMD claims it's the same machine, given AMD's history it wouldn't shock me if there were some trickery involved, e.g. the AMD PR slides for Fury X vs. 980 Ti that had the Fury X 30% faster than a 980 Ti.

Instead of comparing to NVIDIA's last-gen cards, use your own and show how much you have improved since your last gen. It won't look good if, come April, Pascal cards drop on the market and end up roasting this. Just my opinion on the matter.
Posted on Reply
#39
Xzibit
arbiter: I think AMD should compare to their own cards rather than NVIDIA's. Comparing their card to a 950, even if AMD claims it's the same machine, given AMD's history it wouldn't shock me if there were some trickery involved, e.g. the AMD PR slides for Fury X vs. 980 Ti that had the Fury X 30% faster than a 980 Ti.

Instead of comparing to NVIDIA's last-gen cards, use your own and show how much you have improved since your last gen. It won't look good if, come April, Pascal cards drop on the market and end up roasting this. Just my opinion on the matter.
I agree that these showcase demos lean towards the demonstrated hardware no matter who is showing off. Maxwell is the more power-efficient 28 nm architecture, so why not compare against it?

Is AMD supposed to hold off until NVIDIA showcases their 16 nm part and then do a comparison? Is AMD supposed to ask NVIDIA to lend them a card they haven't announced, to please their fans in such comparisons?

NVIDIA compares their current gen to cards from two generations prior; they don't compare their cards to a generational revision. I wonder if you offer the same level of criticism towards them.
Posted on Reply
#40
FordGT90Concept
"I go fast!1!11!1!"
Isn't GTX 950 the lowest-power model based on Maxwell? Even if it is, the demonstration is moot because we don't know how capable that Polaris chip is. It could be R9 Nano-like with a 90 W cap, or it could be something like Tonga, which competes directly with GTX 950 in the market.

I think AMD selected GTX 950 to demonstrate Polaris can be more power efficient than Maxwell under the same workload. The comparison to Maxwell makes sense if AMD gets Polaris to market before Pascal is available (seems likely seeing how AMD is already demonstrating chips). Maxwell loses the power efficiency argument when compared to Polaris (well, duh).
Posted on Reply
#41
Xzibit
FordGT90Concept: Isn't GTX 950 the lowest-power model based on Maxwell? Even if it is, the demonstration is moot because we don't know how capable that Polaris chip is. It could be R9 Nano-like with a 90 W cap, or it could be something like Tonga, which competes directly with GTX 950 in the market.

I think AMD selected GTX 950 to demonstrate Polaris can be more power efficient than Maxwell under the same workload. The comparison to Maxwell makes sense if AMD gets Polaris to market before Pascal is available (seems likely seeing how AMD is already demonstrating chips). Maxwell loses the power efficiency argument when compared to Polaris (well, duh).
It's the 750, but yes, we don't know the specs of the Polaris GPU, and the 750 probably doesn't have enough muscle to do 60 fps in that scenario; if it had been chosen, you'd have arby asking why they didn't use a 950. Sane people will wait for the respective parts to be out on the market and then compare, unless you're an impulse buyer.
Posted on Reply
#42
arbiter
FordGT90Concept: Isn't GTX 950 the lowest-power model based on Maxwell? Even if it is, the demonstration is moot because we don't know how capable that Polaris chip is. It could be R9 Nano-like with a 90 W cap, or it could be something like Tonga, which competes directly with GTX 950 in the market.

I think AMD selected GTX 950 to demonstrate Polaris can be more power efficient than Maxwell under the same workload. The comparison to Maxwell makes sense if AMD gets Polaris to market before Pascal is available (seems likely seeing how AMD is already demonstrating chips). Maxwell loses the power efficiency argument when compared to Polaris (well, duh).
Not likely, since AMD has only had a prototype for what, 1.5 months or so, whereas NVIDIA has had theirs for a good 7 months or so. AMD still has work to do on the chip before it's ready; it's not likely to be out before Pascal. If it were on a mature node then maybe they could do it in 4-5 months, but on a new node, cutting corners is not really a smart move.
Posted on Reply
#43
FordGT90Concept
"I go fast!1!11!1!"
When NVIDIA announced Pascal, they only presented a CGI rendering of the chip and said what it was about. NVIDIA hasn't demonstrated Pascal yet, most likely because they're waiting on TSMC (again).

AMD likely obtained this Polaris chip a long time ago from Samsung. The Samsung Galaxy S6 had a 14nm chip and that was announced back in March. AMD announced Polaris about the first of the year and were demonstrating it a few days later.

Samsung's 14nm process is mature where TSMC's 16nm process is not.
Posted on Reply
#44
arbiter
FordGT90Concept: When NVIDIA announced Pascal, they only presented a CGI rendering of the chip and said what it was about. NVIDIA hasn't demonstrated Pascal yet, most likely because they're waiting on TSMC (again).

AMD likely obtained this Polaris chip a long time ago from Samsung. The Samsung Galaxy S6 had a 14nm chip and that was announced back in March. AMD announced Polaris about the first of the year and were demonstrating it a few days later.

Samsung's 14nm process is mature where TSMC's 16nm process is not.
Um, NVIDIA's part was taped out like 7 months ago; the likely reason NVIDIA hasn't said much is, well, to keep info about it secret. There's no reason to release specs or anything about it when they don't need to. AMD talking about theirs is a bit of a double-edged sword in a sense. Just because they haven't demonstrated it yet doesn't mean anything, as there is nothing to go on. Just because a fab was used to make a small low-power ARM CPU doesn't mean it's mature and good enough for a large GPU.
Posted on Reply
#45
Xzibit
arbiter: Um, NVIDIA's part was taped out like 7 months ago; the likely reason NVIDIA hasn't said much is, well, to keep info about it secret. There's no reason to release specs or anything about it when they don't need to. AMD talking about theirs is a bit of a double-edged sword in a sense. Just because they haven't demonstrated it yet doesn't mean anything, as there is nothing to go on. Just because a fab was used to make a small low-power ARM CPU doesn't mean it's mature and good enough for a large GPU.
For it being a secret you sure talk like you know a lot about it. :laugh:
Posted on Reply
#46
FordGT90Concept
"I go fast!1!11!1!"
arbiter: Just because they haven't demonstrated it yet doesn't mean anything, as there is nothing to go on. Just because a fab was used to make a small low-power ARM CPU doesn't mean it's mature and good enough for a large GPU.
Except that AMD already demonstrated a large GPU (compared to ARM chips anyway, which have well under a billion transistors). GTX 950 has about 3 billion transistors, so the Polaris demo chip had to have 2+ billion to keep pace.
Posted on Reply
#47
Xzibit
FordGT90Concept: Except that AMD already demonstrated a large GPU (compared to ARM chips anyway, which have well under a billion transistors). GTX 950 has about 3 billion transistors, so the Polaris demo had to have a lot to keep pace.
Just FYI
Anandtech: The MXM modules in the picture are almost component-for-component identical to the GTX 980 MXM photo we have on file. So it is likely that these are not Pascal GPUs, and that they're merely placeholders.

On that note, while DRIVE PX 2 was the focus of NVIDIA’s presentation, it was GTX Titan X that was actually driving all of the real-time presentations.
Posted on Reply
#48
arbiter
Xzibit: For it being a secret you sure talk like you know a lot about it. :laugh:
For someone to claim that there is an issue when they don't know a damn thing either is worse. Yeah, I don't know if they are just keeping it a secret or if they have issues, but keeping it a secret is more likely, since if there were issues we would probably have heard. Someone claiming there is a problem based on info they pulled out of their #(*@, well, they are the ones that started this all and you should be giving them crap first.
FordGT90Concept: When NVIDIA announced Pascal, they only presented a CGI rendering of the chip and said what it was about. NVIDIA hasn't demonstrated Pascal yet, most likely because they're waiting on TSMC (again).
Posted on Reply
#49
Xzibit
arbiter: For someone to claim that there is an issue when they don't know a damn thing either is worse. Yeah, I don't know if they are just keeping it a secret or if they have issues, but keeping it a secret is more likely, since if there were issues we would probably have heard. Someone claiming there is a problem based on info they pulled out of their #(*@, well, they are the ones that started this all and you should be giving them crap first.
Please point to where I said there was an issue.

All I see is your usual AMD thread trolling (Not just in this forum).
Posted on Reply
#50
deemon
RejZoR: Viewing angles are meaningless for gaming imo. For comfortable gaming you're facing the monitor dead on anyway.
www.lagom.nl/lcd-test/viewing_angle.php
Put your browser into fullscreen (F11) and scroll down a bit so your entire display is covered with the "lagom" text on a grey background.
I see huge color distortion on my TN panel even when viewing "dead on", so much that at the top of the screen the red text is actually already cyan, whereas on my IPS and MVA panels I do not see this color distortion in that picture.
Posted on Reply