
AMD's GPU Roadmap for 2016-18 Detailed

btarunr

Editor & Senior Moderator
AMD has finalized its GPU architecture roadmap for 2016 through 2018, first detailed at the company's Capsaicin event in mid-March 2016. Under it, the upcoming "Polaris" architecture makes major architectural leaps over the current generation, such as a 2.5x performance-per-Watt uplift and the company's first 14-nanometer GPUs, yet has a limited presence in the high-end graphics space. Polaris is rumored to drive graphics for Sony's upcoming 4K Ultra HD PlayStation, and as discrete GPUs it will feature in only two chips - Polaris 10 "Ellesmere" and Polaris 11 "Baffin."

"Polaris" introduces several new features, such as HVEC (h.265) decode and encode hardware-acceleration, new display output standards such as DisplayPort 1.3 and HDMI 2.0; however, since neither Polaris 10 nor Polaris 11 are really "big" enthusiast chips that succeed the current "Fiji" silicon, will likely make do with current GDDR5/GDDR5X memory standards. That's not to say that Polaris 10 won't disrupt current performance-thru-enthusiast lineups, or even have the chops to take on NVIDIA's GP104. First-generation HBM limits the total memory amount to 4 GB over a 4096-bit path. Enthusiasts will have to wait until early-2017 for the introduction of the big-chip that succeeds "Fiji," which will not only leverage HBM2 to serve up vast amounts of super-fast memory; but also feature a slight architectural uplift. 2018 will see the introduction of its successor, codenamed "Navi," which features an even faster memory interface.



View at TechPowerUp Main Site
 
Next gen memory? I know memory bandwidth plays a role in all this, but I don't think it's that big a factor compared to GPUs actually being able to process all the data.
 
Yeah, HBM2 already being replaced by 2018 seems ridiculous.

I'm surprised they aren't calling it HDMI 2.0a (supports FreeSync) too. I guess they want to get the message out that Polaris will have HDMI 2 because Fiji not having it was pretty disappointing.
 
Yeah, HBM2 already being replaced by 2018 seems ridiculous.

I'm surprised they aren't calling it HDMI 2.0a (supports FreeSync) too. I guess they want to get the message out that Polaris will have HDMI 2 because Fiji not having it was pretty disappointing.

HDMI 1.4a+ can use FreeSync, although it's not official.

HDMI plans on officially supporting Dynamic-Sync in HDMI 2.0b.
 
 
Well... Year 2016 is still a miss for enthusiast segment...

I feel sorry for those peeps driving 4K screens and wanting to game on them at some reasonable FPS...
 
Neither AMD nor NVIDIA is providing me an option to replace my 295X2 for at least 8 months :(
 
R9 295X2 is a beast. It pisses on GTX 980Ti in pretty much all cases. By a lot.
 
R9 295X2 is a beast. It pisses on GTX 980Ti in pretty much all cases. By a lot.

Except the little inconvenience called CF-profiles or their absence.
 
Well... Year 2016 is still a miss for enthusiast segment...

I feel sorry for those peeps driving 4K screens and wanting to game on them at some reasonable FPS...

But that's what happens when someone says jump and they jump instead of asking how high.
 
... and how many more watts does it burn through?

Who cares? Are you one of those who would buy a Ferrari and then constantly worry and bitch about the mileage? It's a top-of-the-line liquid-cooled card. The only thing people who buy such cards care about is what kind of framerate they get out of it. Nothing else.
 
I'm surprised they aren't calling it HDMI 2.0a (supports FreeSync) too. I guess they want to get the message out that Polaris will have HDMI 2 because Fiji not having it was pretty disappointing.


I agree, it was disappointing, and so too IMO was the lack of HEVC decode. It just made the so-called new silicon feel like hardware that had been recycled one too many times without sufficient updates.

I'm glad AMD is going to address this. When I was buying my last video card I was looking for one to drive a 4K UHD Smart TV which only had HDMI 2.0 inputs, so AMD was taken out of the running automatically; I would have preferred a viable choice between AMD and NVIDIA rather than just NVIDIA.
 
R9 295X2 is a beast. It pisses on GTX 980Ti in pretty much all cases. By a lot.

Performance summary shows the terrible weakness of dual card support.

[chart: relative performance at 3840x2160]



and for the terrible car analogy bollocks.


[chart: performance per Watt at 3840x2160]


Essentially, the 295X2 is the fastest card when all conditions are met.

But those conditions aren't met enough of the time (same for SLI).

Single card FTW.
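To make charts like those concrete, here's a minimal sketch of how a relative-performance and performance-per-Watt summary is computed; the FPS and board-power figures are invented for illustration, not TechPowerUp's measured data:

```python
# Toy perfrel / perfwatt summary at 4K. The FPS and power numbers
# are made up for illustration, not measured results.

cards = {
    "R9 295X2":   {"avg_fps": 48.0, "board_power_w": 500},
    "GTX 980 Ti": {"avg_fps": 42.0, "board_power_w": 250},
}

baseline_fps = cards["GTX 980 Ti"]["avg_fps"]

for name, c in cards.items():
    perf_rel = 100 * c["avg_fps"] / baseline_fps       # % of baseline card
    perf_per_watt = c["avg_fps"] / c["board_power_w"]  # FPS per Watt
    print(f"{name}: {perf_rel:.0f}% relative, {perf_per_watt:.3f} FPS/W")

# A dual-GPU card can top the relative chart while losing badly
# on FPS per Watt -- which is the point the two charts make.
```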
 
Well... Year 2016 is still a miss for enthusiast segment...

I feel sorry for those peeps driving 4K screens and wanting to game on them at some reasonable FPS...
It's not that bad. When SLI works, which it does on most AAA titles, 4k is achievable. 3 Titan X's piss all over The Division because it has excellent support even at 4k. Scaling is superb with no microstutter. Other games not so much, but really it all depends. For games that don't support SLI properly we need Pascal and above.
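To put rough numbers on that scaling, a sketch with assumed figures (the base FPS and per-GPU efficiencies are illustrative, not benchmark data):

```python
# Back-of-the-envelope SLI scaling with assumed numbers.

single_gpu_fps = 25              # hypothetical Titan X at 4K, maxed out
efficiency = {2: 0.90, 3: 0.80}  # assumed AFR scaling per GPU count

for n, eff in efficiency.items():
    fps = single_gpu_fps * n * eff
    print(f"{n}x GPUs: ~{fps:.0f} FPS (ideal {single_gpu_fps * n}, "
          f"{eff:.0%} efficiency)")
```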
 
It's not that bad. When SLI works, which it does on most AAA titles, 4k is achievable. 3 Titan X's piss all over The Division because it has excellent support even at 4k. Scaling is superb with no microstutter. Other games not so much, but really it all depends. For games that don't support SLI properly we need Pascal and above.

Agree... it is hit or miss as always. What a wonderful world we live in... drop a few K's of dough... and still have to struggle :D
 
Well, at least we know some cards are coming out this year. It's just a matter of when, because I want to see the next R9 490X (or, if they stick with the naming scheme, R9 Fury X) compared against the next-gen NVIDIA GTX 1080 Ti (or whatever it's to be called). Either way, I will be replacing my current cards with two of whatever top-end card from either side piques my interest.

Performance summary shows the terrible weakness of dual card support.

[chart: relative performance at 3840x2160]



and for the terrible car analogy bollocks.


[chart: performance per Watt at 3840x2160]


Essentially, the 295X2 is the fastest card when all conditions are met.

But those conditions aren't met enough of the time (same for SLI).

Single card FTW.
Yea, that's a downside of SLI/CFX, which is why when I do it these days I always grab the top-end cards, so at least I have some performance to fall back on when a profile isn't available. Though to be fair, with only a few exceptions the profiles for the games that really need them come out in a reasonable amount of time.

... and how many more watts does it burn through?
You don't buy a high-end card and worry about power consumption. Like the example used above, you buy high performance, so you're going to use some watts. Dual-GPU aside, you can run any modern GPU off a $50 PSU in this day and age, so it really does not matter unless you need (or want) to run multiple cards.
 
Performance summary shows the terrible weakness of dual card support.

I have to agree too... I often chuckle at those indie game reviews... made by RCoon himself :D. Almost always 50% load :D

Truly, more power isn't the issue... space and cold air are... the hotter it gets, the more throttling it can cause, and the cards get out of sync and stutter again... yet another struggle.

I really hope that DX12 will do something about that... ditch the old AFR as such... the next hope is NVLink... if NVIDIA leaves one link for doing SLI over that connection, making it possible to access the neighboring RAM pool without a latency tax, it would also be something.

All this just needs good coders... the hardware is there as usual... we lack time.
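For anyone unfamiliar with AFR (alternate-frame rendering), here's a toy sketch of the scheme being criticized; the frame times are invented, and the point is only how uneven per-GPU pacing becomes microstutter:

```python
# Toy AFR model: frames are dealt round-robin to GPUs, so frame N
# renders on GPU N % num_gpus. Frame times are invented; real
# drivers layer frame-pacing logic on top of this.

frame_times_ms = {0: 16.0, 1: 22.0}  # GPU 1 running hotter / throttling

num_frames = 8
deliveries = []
for frame in range(num_frames):
    gpu = frame % 2                  # AFR: alternate GPUs per frame
    deliveries.append(frame_times_ms[gpu])

print("per-frame times (ms):", deliveries)
print("average FPS looks fine:", round(1000 / (sum(deliveries) / num_frames)))
# ...but the 16/22 ms alternation is exactly the out-of-sync
# stutter described above: the average hides the uneven pacing.
```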
 
Some people like me don't mind power consumption from the wall, but more power consumption equates to more heat output. That's bad for me as I hate a hot room.
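For scale, the conversion is simple, since essentially every watt a card draws ends up as heat in the room (the card wattages below are round illustrative numbers):

```python
# 1 W of draw = 3.412 BTU/hr of heat into the room.
# Card wattages here are round examples, not measurements.

WATT_TO_BTU_HR = 3.412

for label, watts in [("250 W single card", 250), ("500 W dual-GPU card", 500)]:
    print(f"{label}: ~{watts * WATT_TO_BTU_HR:.0f} BTU/hr")
# A 500 W card dumps ~1,700 BTU/hr -- roughly a seventh of a
# 12,000 BTU window unit's capacity.
```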
 
Those graphs are meaningless when you bring in the price. The R9 295X2 was like 3 times cheaper than the GTX 980 Ti. If I didn't have a miniATX case back then, I'd most likely have the R9 295X2. And the fact is, I generally have very little trust in multi-GPU designs, because they have never really worked well. Maybe DX12 will change that, but I doubt it.
 
Yet you live in Texas? :eek:
Trust me, I'd rather not live here. Taking a trip to Colorado to seek out potential areas to move to. In the meantime, I have a 12,000 BTU window unit coming from Amazon to help cope with the heat this summer. It's gonna be toasty.

Those graphs are meaningless when you bring in the price. The R9 295X2 was like 3 times cheaper than the GTX 980 Ti. If I didn't have a miniATX case back then, I'd most likely have the R9 295X2. And the fact is, I generally have very little trust in multi-GPU designs, because they have never really worked well. Maybe DX12 will change that, but I doubt it.
You wot m8? The cheapest I've ever seen a 295X2 was 400 bucks used. A 980 Ti can be had new for 559, all USD. Wouldn't say that's a third the cost of a 980 Ti.
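The arithmetic backs that up, using just the two prices quoted in the post:

```python
# Sanity check of the "3 times cheaper" claim with the quoted prices.
price_295x2_used = 400  # USD, quoted above
price_980ti_new = 559   # USD, quoted above

print(f"295X2 is {price_295x2_used / price_980ti_new:.0%} of the 980 Ti's "
      f"price -- nowhere near one third ({price_980ti_new / 3:.0f} USD).")
```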
 