Sunday, May 29th 2016
Next-Gen Radeon "Polaris" Nomenclature Changed?
It looks like AMD is deviating from its top-level performance grading with its next-generation Radeon graphics cards. The company has maintained the Radeon R3 series for embedded low-power APUs, Radeon R5 for integrated graphics solutions of larger APUs, Radeon R7 for entry-thru-mainstream discrete GPUs (e.g. R7 360, R7 370), and Radeon R9 for the performance-thru-enthusiast segment (e.g. R9 285, R9 290X). The new nomenclature could see it rely on the second digit of the model number (the "#" in 4#0) to denote market positioning, if a popular rumor on tech bulletin boards such as Reddit holds true.
A Redditor posted an image of a next-gen AMD Radeon demo machine powered by a "Radeon RX 480." The "X" could either be a variable, or it could be series-wide, prefixing all SKUs in the 400 series. It could also be AMD marketing's way of playing with the number 10 (X), to establish some kind of generational parity with NVIDIA's GeForce GTX 10 series. The placard also depicts a new "Radeon" logo with a different, sharper typeface. The "RX 480" was apparently able to run "Doom" (2016) at 2560x1440 @ 144 Hz, using the OpenGL API.
Source:
Reddit
72 Comments on Next-Gen Radeon "Polaris" Nomenclature Changed?
Smaller chips are easier to design.
Consoles are in AMD's pocket.
Yadayada master plan.
But then, the only config faster than 1070 will be dual chip.
Many users avoid such configs because, at least at the moment, they do give trouble.
Not only might you NOT get any boost from the second GPU chip, it might crash on you altogether.
If you price it higher than 1070, people won't buy it.
Also: 232 mm² + 232 mm² = 464 mm²
And with a single chip drawing 115-140 W, the dual-chip card will land somewhere around 170-210 W, so more than the 1080.
PS
Heck, 295x wipes the floor with TitaniumX/980Ti OC in so many games...
As multi-GPU code in games takes over, Crossfire/SLI becomes less important. That should translate to fewer crashes.
Yeah, power will be more but that is no surprise. GCN is capable of async multithreading and that comes at a cost. GPUs can't power down parts like CPUs can because their pipelines are much more complex.
295X2 launched at $1500. RX 480 I could see launching at $600-700 which puts it squarely in the same price bracket as GTX 1080.
Haven't we seen this fail miserably already with the FX processor line? It is the same "two = one for more performance" and "more cores" approach they've adopted so many times. I really do hope AMD pairs the dual-GPU solution with some very big steps in the GCN arch as well, most notably with regards to perf/watt and efficiency, or this will fail. So far it's looking promising, but we've also been there before.
FWIW, I really do hope AMD surprises us in a good way and makes choosing them this time around a true no-brainer. They need it, and with NVIDIA releasing the 1080 at this price point, the market evidently needs it too.
They do R&D in both GPUs and CPUs, yet their budget is smaller than NVIDIA's alone (and something like 10 times smaller than Intel's).
And then there is that "smaller chips are cheaper to design" thing.
This time they have API support and the consoles (nothing like that was going for them in Bulldozer times), so let's see how it goes. I'll keep my fingers crossed. =/
The better comparison is Athlon 64 FX-60: a dual-core CPU when single-core CPUs were all the rage. Just 10 years later and budget CPUs are dual-cores.
If the speculation is correct that the RX 480 is a dual-GPU Polaris 10, it could be the equivalent of an Athlon 64 X2--the affordable alternative to the FX-60; the series of chips that slowly but surely conquered the market until Core 2 Duo debuted. Before this, all that was available on the GPU market were the equivalents of FX-60s (the biggest, baddest GPUs with prices to match).
And normal devs are not usually lazy; they are not plain stupid, breaking things and adding unneeded time and money costs. Vulkan in its current state is still more of a PR gimmick. I cannot see anyone writing their engine and engine-creation tools from scratch in pure C#. UE4 and the others are still in development. It consists of functions rooted in OpenGL: the same functions doing the same things, just with a compiler that translates to SPIR-V and feeds it to the Vulkan driver to render the scene. OpenGL isn't disappearing anywhere.
you can write a translation layer between vulkan and an ogl application (has been done, can't find links) and that alone _will_ improve performance. no one in their right mind will call that a vulkan-enabled application, but still, it works.
have you seen and coded all the game engines available that use vulkan? have you even coded anything that uses vulkan/ogl/ogles for that matter? why are you trying to pass your half-assed assumptions as truths? gosh... ok, no need to answer anything, i can see what the response would be already; you can't even distinguish between the standard and vendor-specific extensions (and those who chose to use them know damn well what happens)...
Vulkan doesn't guarantee a performance improvement at all, especially in GPU-bound games, and Doom can run on a P4 coffee machine as proven here in the forums; on the contrary, it may deliver less performance, especially at high resolutions. It's the same as it was with Mantle... actually it is just MantleGL. Same problems and the same average résumé. More stutter, no difference with any reasonable i5 (which sadly means faster than any AMD CPU to date).
Look at the first try with The Talos Principle: it ran worse on Vulkan (sure, sure, blame the dev). Look at the Dota 2 update; on GitHub people are actually reporting lower performance. Stutters (obviously due to buggy shader code when casting magic). I won't expect any magic from Doom either.
Too much PR bullcrap IMHO. It is all raw technology and shiny terms, packed in everywhere just like RGB LED thingies are. Just because it needs to be so, ffs.
That "nvidia's"[1] thing You speak of is probably VK_NV_glsl_shader, which, surprise! surprise! has made it into the core Vulkan spec since vulkan version 1.0.5 (2016-03-04). What it does is it allows loading GLSL shaders directly, skipping the translation/compilation to SPIR-V step. (i.e. instead of [GLSL code] –> [GLSL to SPIR-V compiler] –> [ISA-specific SPIR-V compiler] –> [GPU-ISA-specific machine code] it allows to do [GLSL code] –> [ISA-specific GLSL compiler] –> [GPU-ISA-specific machine code], skipping the SPIR-V step.)
[1] The only thing Nvidia owns about it is coming up with the idea and writing the extension spec. There are no IP claims (duh, no IP to claim here) and it does not depend on any hardware capabilities / [lack of] limitations, so there is absolutely no reason for other vendors not to implement it. And they now have to, to conform to Vulkan 1.0.5 or later. Although, unlike in OpenGL, where extensions are enabled unless explicitly disabled, in Vulkan most functionality, not only extensions, is disabled unless the programmer asks to enable it. This is to avoid those situations that sometimes happen in OpenGL where some stuff gets implicitly enabled and unexpectedly gets in the way of code that was written without taking that stuff into account (possibly because it did not even exist at the time of writing).
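(Again a hedged sketch rather than anything quoted from the spec: it just shows the opt-in behaviour described above, with the extension named explicitly at device creation time. physicalDevice and queueInfo are assumed to be set up elsewhere.)

#include <vulkan/vulkan.h>

/* In Vulkan nothing is enabled implicitly -- extensions must be listed by
 * name when the logical device is created. */
VkDevice create_device_with_glsl_ext(VkPhysicalDevice physicalDevice,
                                     const VkDeviceQueueCreateInfo *queueInfo)
{
    const char *extensions[] = { "VK_NV_glsl_shader" }; /* explicit opt-in */

    VkDeviceCreateInfo ci = {0};
    ci.sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    ci.queueCreateInfoCount    = 1;
    ci.pQueueCreateInfos       = queueInfo;
    ci.enabledExtensionCount   = 1;
    ci.ppEnabledExtensionNames = extensions;
    /* pEnabledFeatures left NULL: even optional core features stay off unless requested */

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physicalDevice, &ci, NULL, &device);
    return device;
}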
P.S. Both in Vulkan and OpenGL, as long as there are no IP claims and as long as the hardware allows it, "vendor specific" extensions are not that "vendor specific" at all. Other vendors are free to implement them in their drivers, which they often do. For example, on my Nvidia GPU, with the OpenGL implementation I have, maybe some 1/4 of all the "vendor specific" extensions implemented are under "NV" (nvidia, duh), the rest being under "AMD", "ATI" and many other vendors (I count 12 different vendors here).
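(For illustration only: a small OpenGL 3+ snippet, assuming a current context and an already-initialized loader such as GLEW, that tallies the implemented extensions by vendor prefix. The counts will of course differ per driver.)

#include <stdio.h>
#include <string.h>
#include <GL/glew.h>   /* any loader exposing glGetStringi works */

/* Counts extensions per vendor prefix to show that one driver typically
 * exposes NV, AMD, ATI and other vendors' extensions side by side. */
void count_vendor_extensions(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);

    int nv = 0, amd = 0, ati = 0, other = 0;
    for (GLint i = 0; i < count; ++i) {
        const char *name = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if      (strncmp(name, "GL_NV_",  6) == 0) nv++;
        else if (strncmp(name, "GL_AMD_", 7) == 0) amd++;
        else if (strncmp(name, "GL_ATI_", 7) == 0) ati++;
        else                                       other++;
    }
    printf("NV: %d  AMD: %d  ATI: %d  other: %d\n", nv, amd, ati, other);
}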
EDIT: Well, yeah, Vulkan needs a lot more work on the game dev side and a LOT more optimization work, again, on the game side.
And yeah, Talos released with slower Vulkan perf. Because it was still an early beta implementation and people were basically doing a beta test.
Right now, in most cases, it is actually running faster than any of the other renderers Talos has (it has D3D9, D3D11, OpenGL, OpenGL ES, Vulkan and a software renderer).
And no, their Vulkan renderer does not work like an OpenGL –> Vulkan wrapper.
Sauce: I am on a first-name basis with their lead programmer. \_:)_/
no queue prioritization, limited fencing, almost hard set pipelining, etc...
vulkan is _not_ a resource name change, but you... errm... someone can spend some time creating a shim to emulate opengl on top of vulkan and still get some performance increase with just that
and why should that be your problem? that's the developer's problem, wat da hell? ofc ogl won't go anywhere, the spec will be kept frozen and vendors will maintain compatibility with it in the future for old software's sake.
if devs want a half-assed implementation to cut development costs, they will stick with direct3d: easier development, easier error handling and debugging, easier device binding/management handling, etc., and the driver will help you a lot, even when you are doing stuff wrong. no aaa team will lose time & money porting their stuff to opengl, other than indie teams experimenting with some api just to be compliant. i guess it wasn't talos principle then, but i'm sure there was some company that created a shim for ogl->vulkan and it actually improved performance by more than 15% (does anyone have any insight on this? i can't remember what it was and thus can't find any links about it)
doom will run faster in vulkan than in opengl, there is no way this won't happen, unless they start to castrate the functionality or decide on purpose to not use the api as it was intended.
clearly you know nothing about what you are talking about, just what you read online (and not even documentation-based). take a swing at it, build something with both apis, test and compare them both for yourself and then you might actually have some basis to coherently trash-talk it. but forming an opinion on something you know very little about (just what others told you) is not the best, i mean, come on, you have a mind of your own don't you? that's like disliking a brand of hammers just because people online are reporting that they hit their fingernails every time they use hammers from that brand...
Maybe You have this in mind?
And when it comes to games, The Talos Principle was literally the first game with Vulkan support (and was IIRC officially the "Vulkan launch title", along with being the only Vulkan game [available to the public] for a while), so it's quite a headscratcher what else it could be. The good ol' Mythbusters "failure is always an option" catchphrase applies here quite a bit. You just can never tell when the devs of any game will get their next random mass-brainfart and what results would follow from it.
@truth teller please let off some steam. I guess you really don't want to have a mature dialogue. OpenGL will not go away. I already explained why. They both have strengths and weaknesses.
So, assuming all this information: we have Polaris. I agree with the speculation that it hasn't changed much from Fury, just as in an Intel "tick" phase. AMD will gain from Vulkan due to crap DX11 drivers, and Vulkan drivers will perform better simply because they don't have to do anything other than deliver bare access to the GPU resources, so AMD will try to play their joker. I also read the dev's comments on Steam about the Talos Vulkan development and wished them luck, as it is a tough job really. Luckily the game doesn't consist of complex scenes. Is he a neighbour also?
I wonder how CryEngine, being an ultimate inefficient code cemetery, would run on Vulkan... I guess like a turd :D
So yes, in that sense, that extension does not depend on any hardware capabilities other than being able to, well, run shader code to begin with.
When it comes to these graphics API specs, the only hardware-related concern is "does the hardware lack something that makes it straight impossible to implement this part of the spec?". e.g. "we want tessellation. Can this hardware do that? Does it have the required logic for it?"
Although, do keep in mind that when it comes to OpenGL, at least, it is perfectly conformant behaviour to perform [whatever] in software instead of using hardware acceleration. Full hardware acceleration, mixed hardware acceleration with software "emulation", and running purely in software are all fully legit in the eyes of the spec. As long as it is producing correct results, the driver can claim support for a capability / extension, regardless of whether it is done in hardware or in software.
Actually, small bits of it are still sometimes done in software. "And You will never notice if it is done right."
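(A tiny sketch of how you could at least peek at this, assuming a current OpenGL context: the spec won't tell you hardware vs. software per capability, but the renderer string usually gives away fully software implementations such as Mesa's llvmpipe or softpipe.)

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Prints the vendor/renderer strings and flags the common software rasterizers. */
void report_renderer(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);

    if (renderer && (strstr(renderer, "llvmpipe") || strstr(renderer, "softpipe")))
        printf("Looks like a software rasterizer.\n");
}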
Direct3D, on the other hand, AFAIK, quite strictly defines what has to be done in hardware...
I do see what You did there, though. You took a quote out of context, to use it as a "seed" to make an unrelated point. Don't do that. It's kind of a d*** move. We are all adults here – if You want to make a point, just simply do so. No need for an out-of-context quote to "justify" making the point ;]
P.S. I know there's a predisposition that "software rendering == slow". That is often true, but not always. I have three different software OpenGL implementations installed that I can use at will if I want to (I mostly use them for validating stuff). The point is, though: since I have a beefy CPU, I can run some fairly recent and fairly graphics-intensive games purely in software and still get playable framerates. "Not too shabby for purely software rendering, eh?"
P.P.S. That's it: I'm out. This has already gone off topic enough, and I seem to be writing walls of text, from a certain point of view, mostly for naught.
Thus, this is my last reply on this thread. Peace out, bros!
It's a good card, no doubt, but having to wait for Vega to get a response to GTX 1080 is going to suck.
What can we say? They still haven't learned. They present a way to break open the market with a guy who has broken English, no PR skills, and who nearly broke the damn GPU as well. Linus nearly has to drag the info out of him.
I mean, they could have done this so much better. A 480 at 199 bucks is pretty astounding. Why put it out so clumsily and so vaguely!?!? If they drop Hawaii performance at $199, that's going to turn heads.