Monday, January 4th 2016
AMD Demonstrates Revolutionary 14 nm FinFET Polaris GPU Architecture
AMD provided customers with a glimpse of its upcoming 2016 Polaris GPU architecture, highlighting a wide range of significant architectural improvements, including HDR monitor support and industry-leading performance per watt. AMD expects shipments of Polaris architecture-based GPUs to begin in mid-2016.
AMD's Polaris architecture-based 14nm FinFET GPUs deliver a remarkable generational jump in power efficiency. Polaris-based GPUs are designed for fluid frame rates in graphics, gaming, VR and multimedia applications running on compelling small form-factor thin and light computer designs.
"Our new Polaris architecture showcases significant advances in performance, power efficiency and features," said Lisa Su, president and CEO, AMD. "2016 will be a very exciting year for Radeon fans driven by our Polaris architecture, Radeon Software Crimson Edition and a host of other innovations in the pipeline from our Radeon Technologies Group."
The Polaris architecture features AMD's 4th-generation Graphics Core Next (GCN) architecture, a next-generation display engine with support for HDMI 2.0a and DisplayPort 1.3, and next-generation multimedia features including 4K H.265 encoding and decoding.
AMD has an established track record for dramatically increasing the energy efficiency of its mobile processors, targeting a 25x improvement by the year 2020.
88 Comments on AMD Demonstrates Revolutionary 14 nm FinFET Polaris GPU Architecture
On the viewing angle and gamma test, it's red at the bottom, gets choppy about 2/3 of the way up, and turns cyan above that. On the viewing angle and brightness test, I see shades of violet instead of a solid color.
I doubt there are "issues" with the TSMC process; they are just way behind Samsung, which is why AMD can demo (and likely deliver) the next generation of GPUs first.
So I'm not holding my breath about power consumption; AMD has already kind of shot themselves in the foot by picking a less power-efficient process.
In the normal viewing position, maybe 1-2 cm of the top-left corner turns slightly cyan. When you factor in that you never have a monochrome image like this in games or in Windows, it becomes an entirely irrelevant "problem".
My monitor is an ASUS VG248QE with a TN panel, a 144 Hz refresh rate and 1 ms pixel response. Yeah, that's how far TNs have come. Stop bloody comparing them to TNs from 2005 already.
arstechnica.com/apple/2015/10/samsung-vs-tsmc-comparing-the-battery-life-of-two-apple-a9s/
The interesting thing is that the real difference seems to come out in very CPU-intensive tasks. So, if you make a GPU, there is a good chance the load power consumption will be higher using Samsung's 14 nm than TSMC's 16 nm.
It could be that the reviewer they keep quoting got a high-leakage chip. Bad luck.
Is there a specific model I should be looking at? The newer Samsungs (e.g. Galaxy S6) run on Cortex-A53 or Cortex-A57.
Considering their operating systems are completely different and Apple does its own thing for GPUs (for Metal), I don't think it's really a 1:1 comparison.
Edit: The Samsung Galaxy S6 has 8 cores in a big.LITTLE configuration--nothing like Apple's design. Actually, it appears the Apple A9 is only a dual-core. Samsung's processor should have substantially more performance both when power saving and when under heavy load.
Edit: It would be best to compare APL0898 to APL1022. Here we go: bgr.com/2015/10/08/iphone-6s-a9-processor-samsung-tsmc-batterygate/
It's the identical chip in identical phones; the only difference is Samsung's 14 nm FinFET vs. TSMC's 16 nm FinFET. The Samsung chip clearly uses more power. It doesn't get more identical than that. It isn't just one reviewer; the results have been confirmed all over the net. Even Apple confirmed there is a 2-3% difference in normal use.
It also isn't a 3 Watt difference, not sure where that came from... I don't even think these chips use 3 Watts at full load.
Remember that in the case of Apple's chips, TSMC's 16 nm is significantly better than Samsung's 14 nm (I recall 20%-ish less power consumption, correct me if I'm wrong). Logical reasoning is strong within you.
AMD's power consumption was badmouthed into oblivion even though the mainstream cards were nowhere near as bad; the 380X, for example, consumed about 20%-ish more total power while also being faster.
If they'd compared it to their own cards, the uneducated public wouldn't figure out that it is much better than NVIDIA's. It's also brightness/contrast, so, yep, it does.
A 10-bit image will have banding on an 8-bit monitor. Most people, if they haven't recently bought a monitor, are likely on a 6-bit+FRC panel and don't even know it.
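(If you want to see the banding in question rather than argue about it, here is a minimal sketch, assuming Python 3 with NumPy and Pillow installed, that renders the same grayscale ramp quantized to 10-, 8- and 6-bit precision so the strips can be compared side by side; the image size and file names are arbitrary.)
[CODE]
# Quantize a smooth grayscale ramp to different bit depths to visualize banding.
# Assumes Python 3 with NumPy and Pillow; sizes and file names are arbitrary.
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1024, 200

# A "continuous" horizontal ramp from 0.0 to 1.0.
ramp = np.linspace(0.0, 1.0, WIDTH)

def quantize(values, bits):
    # Round the ramp to 2**bits levels, then expand back to 8-bit for saving.
    levels = 2 ** bits - 1
    stepped = np.round(values * levels) / levels
    return (stepped * 255).astype(np.uint8)

for bits in (10, 8, 6):
    row = quantize(ramp, bits)
    strip = np.tile(row, (HEIGHT, 1))  # repeat the row so the bands are visible
    Image.fromarray(strip, mode="L").save("gradient_%dbit.png" % bits)
    print("%2d-bit ramp: %d distinct output values" % (bits, len(np.unique(row))))
[/CODE]
On an 8-bit output the 10-bit and 8-bit strips collapse to the same 256 values, which is the point being made above; the 6-bit strip shows much coarser bands.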
First, I'll give you the fact that the monitor I'm looking at right now isn't 10-bit, so the premise on its face is silly. If it can render the "better" image, then 8-bit is capable of it now.
Now, what you've shown is banding. The color bands you've shown represent only 26 distinct colors (and that's really pushing it). 26 distinct color values would be 3 bit color (approximately) if my math isn't off. 26/3 = 8.67, two cubed is 8. The difference between 8 bit and 10 bit above is then akin to the difference between 3 bit and 8 bit. I'm having an immensely difficult time buying that little bit of math.
Moreover, you've started equating grayscale with color scale. Sorry, but that's silly too. If you're including grayscale as a measure then your 8 bit colors are now 3(2^8)*2^8 = 4(2^8). Can you seriously tell that many distinct colors apart? I'll give you a hint, biology says you can't.
So what I'm seeing in the "10 bit" color spectrum is the 8-bit spectrum I'd already be able to see. There is some minor banding, but not really enough to notice unless intensely focused upon. If I already can't tell the difference between adjacent color values, what exactly is the benefit of having more of them to utilize?
What you've demonstrated is marketing BS, that people would use to sell me a TV. This is like arguing that a TV is better because it supports a refresh rate of 61 Hz, above a TV with 60 Hz. While technically correct, the difference is entirely unappreciable with the optical hardware we were born with. I can see the difference between 24 and 48 Hz resolutions (thank you Peter Jackson). I can't tell the difference between the RGB color of 1.240.220 and 1.239.220. I don't see how adding an extra couple of values between colors I already can't differentiate is particularly useful.
This is why I'm asking why 10-bit color is useful. It's another specification that makes precious little sense when you understand the math, and even less when you understand the biology. Tell me, did you buy it when people said adding a yellow LED to the TV screen produced "better" yellows (Sharp's Aquos)? Did you suddenly rush out and get a new data standard that included an extra value to accommodate that new LED? I'd say no. I'd say that it was a cheap tactic to sell a TV on a feature that was impossible to discern. That's largely what 10-bit is to me: a feature that can't reasonably improve my experience, but which I will be told is why I should buy this new thing.
Please sell me a GPU that can run two 1080p monitors with all of the eye candy on high, not a chunk of silicon which requires a couple of thousand dollars of monitor replacement to potentially be slightly better than what I've got now. Especially not when the potential improvement is functionally impossible for my eyeballs to actually see.
24 Hz and 48 Hz are not resolutions, but refresh rates. Yes, I cannot see a difference between 1,240,220 and 1,239,220 either, but I can see a clear difference between 0,128,0 and 0,129,0, for example. Also, when I make a gradient from 0,0,0 to 255,255,255 on my monitor, I do see distinct lines between colors - true, some transitions are smoother than others, but there are still too many visible lines in the picture and the gradient is not smooth. I don't buy a TV because it has better stats on paper - I have to see the difference myself, and it has to be big enough to convince me; quite often it has been clearly visible. In the same way, I want to see a 10-bit picture with my own eyes in a shop, compared to 8-bit, before I buy anything (unless of course the price difference is so small that it doesn't matter anyway, but I doubt that will be the case when consumer 10-bit displays launch). You mistake me for a hardware vendor - I am not (read: I don't sell you anything, no new displays, no new GPUs, nor new eyeballs that can actually see different colors).
[INDENT]High-dynamic-range imaging (HDRI or HDR) is a technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques.
The aim is to present the human eye with a similar range of luminance as that which, through the visual system, is familiar in everyday life. The human eye, through adaptation of the iris (and other methods) adjusts constantly to the broad dynamic changes ubiquitous in our environment. The brain continuously interprets this information so that most of us can see in a wide range of light conditions. Most cameras, on the other hand, cannot.
HDR images can represent a greater range of luminance levels than can be achieved using more 'traditional' methods, such as many real-world scenes containing very bright, direct sunlight to extreme shade, or very faint nebulae.
[/INDENT]
In other words:
"Now, what you've shown is banding. The color bands you've shown represent only 26 distinct colors (and that's really pushing it). 26 distinct color values would be 3 bit color (approximately) if my math isn't off. 26/3 = 8.67, two cubed is 8. The difference between 8 bit and 10 bit above is then akin to the difference between 3 bit and 8 bit. I'm having an immensely difficult time buying that little bit of math."
Let's do the math here. The spectrum is functionally continuous, and adding extra bits to color information doesn't produce stronger base colors, so let's look at what fraction of the range a single step represents at each bit depth:
1 bit: 2^1 = 2 levels, step = 50%
2 bits: 2^2 = 4 levels, step = 25%
3 bits: 2^3 = 8 levels, step = 12.5%
4 bits: 2^4 = 16 levels, step = 6.25%
5 bits: 2^5 = 32 levels, step = 3.125%
6 bits: 2^6 = 64 levels, step = 1.5625%
7 bits: 2^7 = 128 levels, step = 0.78125%
8 bits: 2^8 = 256 levels, step = 0.390625%
9 bits: 2^9 = 512 levels, step = 0.1953125%
10 bits: 2^10 = 1024 levels, step = 0.0976562%
That means, according to your cited differences, that a step down from 12.5% to 0.390625% (delta = 12.109) is supposed to be comparable to a step down from 0.39% to 0.09% (delta = 0.2929). Those numbers don't lie, and I call bullshit on them. Your infographic is attempting to convey differences that aren't in any way representative of reality. If you'd like to contest that, please point out exactly where the error in my mathematics lies. I'll gladly change my opinion if I've somehow missed the point somewhere.
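(For anyone who wants to rerun the numbers above, here is a minimal sketch in plain Python 3 that reproduces the per-step percentages and both deltas; it follows the post's convention of step = range / 2^bits rather than range / (2^bits - 1).)
[CODE]
# Per-step size, as a percentage of the full range, at each bit depth,
# plus the two deltas quoted in the post. Plain Python 3, no dependencies.

def step_percent(bits):
    # Post's convention: one step is range / 2^bits.
    return 100.0 / (2 ** bits)

for bits in range(1, 11):
    print("%2d bit: 2^%-2d = %4d levels, %.7f%% per step"
          % (bits, bits, 2 ** bits, step_percent(bits)))

# The two "jumps" being compared: 3-bit -> 8-bit vs. 8-bit -> 10-bit.
print("delta 3->8 bit :", step_percent(3) - step_percent(8))    # ~12.109
print("delta 8->10 bit:", step_percent(8) - step_percent(10))   # ~0.293
[/CODE]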
In short, what I'm telling you is that your example is crap, which is what I thought I said clearly above. Everything I've observed from 10-bit is functionally useless, though you're free to claim otherwise. What I have observed is that variations in intensity and increases in frame rate make a picture objectively better. HDR (more intense colors being pushed out), pixel count (the standard "more monitor is good" argument), and frame rate (smoother motion) are therefore demonstrably what cards should be selling themselves on (if they want my money), not making a rainbow slightly more continuous in its color spectrum. Unless you missed the title of the thread, this is still about Polaris.
So we're clear, I have no personal malice here. I hate it when companies market new technology on BS promises, like the yellow LED or 10-bit being an enormous improvement over 8-bit. In the case of yellow LEDs, I've literally never been able to discern a difference. In the case of 10-bit color, I've yet to see a single instance where slightly "off" colors, caused by lacking an extra couple of shades, would influence me more than dropping from a 30 Hz refresh to a 25 Hz refresh. If you are of the opposite opinion, I'm glad to let you entertain it. I just hope you understand how much of an investment you're going to have to make for that slight improvement, while the other side requires significantly less cost to see an appreciable improvement.
At 9-bit, there would be a 254.5 step between 254 and 255. At 10-bit, there would be 254.25, 254.5, and 254.75 steps between 254 and 255. Of course it doesn't work like that in binary, but in practice that's the difference.
Whether the eye can perceive the difference at 8-bit is highly subjective. I can see the transition line between 254 and 253, but I can't really make out the other two. I think 9-bit would be good, but I'm leaning towards the idea that 10-bit is excessive.
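(A quick sketch of the mapping described two posts up, in plain Python 3 and purely illustrative; it uses the simplified "each extra bit halves the step" view from that post rather than the actual binary encoding, and the helper name levels_between is made up for this example.)
[CODE]
# Show which intermediate levels exist between 8-bit codes 254 and 255
# at higher bit depths, expressed on the 8-bit scale. Plain Python 3.

def levels_between(low_8bit, high_8bit, bits):
    # Values representable at `bits` of precision that fall strictly
    # between two adjacent 8-bit codes, on the 8-bit scale.
    scale = 2 ** (bits - 8)  # sub-steps per 8-bit step
    return [low_8bit + k / scale for k in range(1, scale)]

print(" 9-bit:", levels_between(254, 255, 9))    # [254.5]
print("10-bit:", levels_between(254, 255, 10))   # [254.25, 254.5, 254.75]
[/CODE]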