# AMD 7nm "Vega" by December, Not a Die-shrink of "Vega 10"



## btarunr (Aug 22, 2018)

AMD is reportedly prioritizing its first 7-nanometer silicon fabrication allocation for two chips: "Rome" and "Vega 20." Rome, as you'll recall, is the first CPU die based on the company's "Zen 2" architecture, which will build the company's 2nd-generation EPYC enterprise processors. "Vega 20," on the other hand, could be the world's first 7 nm GPU.

"Vega 20" is not a mere die-shrink of the "Vega 10" GPU die to the 7 nm node. For starters, it is flanked by four HBM2 memory stacks, confirming it will feature a wider memory interface and support for up to 32 GB of memory. AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client-segment. That distinction will be reserved for "Navi," which may not debut until 2019, if not later.







----------



## Vayra86 (Aug 22, 2018)

Sadness


----------



## Mussels (Aug 22, 2018)

I was planning on getting a 2080Ti, but seeing those prices and now seeing this news...
(yes i know it'll be 2019, but the 20x0 cards will still be over $1.5K here in aus by then)


----------



## DeathtoGnomes (Aug 22, 2018)

Mussels said:


> I was planning on getting a 2080Ti, but seeing those prices and now seeing this news...
> (yes i know it'll be 2019, but the 20x0 cards will still be over $1.5K here in aus by then)


This news has been in the works for some time; I'm not surprised AMD is trying to steal Nvidia's steam from the 20x0 launch. Navi may not be until the 2nd half of 2019.


----------



## cucker tarlson (Aug 22, 2018)

Mussels said:


> I was planning on getting a 2080Ti, but seeing those prices and now seeing this news...
> (yes i know it'll be 2019, but the 20x0 cards will still be over $1.5K here in aus by then)


*AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client-segment.*

Enthusiast gamers are at the very end of their scope; forget about them prioritizing or innovating in that segment.


----------



## TheinsanegamerN (Aug 22, 2018)

And AMD, once again, leaves an entire segment to nvidia for a third generation in a row. 

AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from Nvidia.


----------



## THE_EGG (Aug 22, 2018)

Mussels said:


> I was planning on getting a 2080Ti, but seeing those prices and now seeing this news...
> (yes i know it'll be 2019, but the 20x0 cards will still be over $1.5K here in aus by then)


Yep, I was so sad when I saw $1899 for the Ti  

Retail pricing for AIB cards isn't looking much better either....


----------



## R0H1T (Aug 22, 2018)

DeathtoGnomes said:


> This news has been in the works for some time; I'm not surprised AMD is trying to steal Nvidia's steam from the 20x0 launch. Navi may not be until the 2nd half of 2019.


Is "Navi" the next gaming chip, end of the line for GCN?


----------



## cucker tarlson (Aug 22, 2018)

It's the next polaris, to be used in next gen consoles and mid range gpus.


----------



## mtcn77 (Aug 22, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from Nvidia.


AMD is winning on density. They still have features unsupported by DirectX, which could turn the tables. You are free to doubt that balance; however, the case of GPU mining should provide pointers on how they really stack up against one another.


----------



## Vayra86 (Aug 22, 2018)

R0H1T said:


> Is "Navi" the next gaming chip, end of the line for GCN?



I think they will still use GCN in some revised form, but just glue them together like TR/EPYC


----------



## Vya Domus (Aug 22, 2018)

Vayra86 said:


> Sadness





Mussels said:


> I was planning on getting a 2080Ti, but seeing those prices and now seeing this news...



This is going to be a turning point. If the 20 series is successful and people pay up it will likely mark the end of any effort AMD will make to compete in the high end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want to have the same margins then they'll have to ask the same prices, either way the consumer will be screwed.

You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.


----------



## wurschti (Aug 22, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from Nvidia.



*cough* Intel *cough*


----------



## DeathtoGnomes (Aug 22, 2018)

Vayra86 said:


> I think they will still use GCN in some revised form, but just glue them together like TR/EPYC


"glue" is such an Intel word.


----------



## BluesFanUK (Aug 22, 2018)

C'mon AMD, you've already got Intel running scared, focus on Nvidia now.


----------



## Fluffmeister (Aug 22, 2018)

A 32 GB HBM2 7 nm chip is gonna be expensive; it's no surprise they are focusing on the HPC/Pro market, where the money is. Volta already has large chunks of the market sewn up, and Turing-based Quadros are going to reign supreme in the pro sector... they need 7 nm to up their competitiveness.


----------



## cucker tarlson (Aug 22, 2018)

Even with 7nm and 32GB, they'll have a hard time against the Quadro RTXs: 48GB, NVLink, RT/DL-specific hardware + software support. They'd have a better chance to succeed if they ran against GeForce cards in the enthusiast segment, not Quadros and Teslas.


----------



## Vayra86 (Aug 22, 2018)

Vya Domus said:


> This is going to be a turning point. If the 20 series is successful and people pay up it will likely mark the end of any effort AMD will make to compete in the mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want to have the same margins then they'll have to ask the same prices, either way the consumer will be screwed.
> 
> You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale.



True. But it was never a healthy market to begin with; when ATI fell you could already see this scenario coming. Ironically, most GPU makers have themselves to blame for failing: if you don't score design wins and capitalize on them, you just lose. It's a tough industry, but also one where real progress and innovation get rewarded well.

I don't think this is even up to consumers, really. We get the worst chips on each wafer!


----------



## mtcn77 (Aug 22, 2018)

cucker tarlson said:


> Even with 7nm and 32GB, they'll have a hard time against the Quadro RTXs: 48GB, NVLink, RT/DL-specific hardware + software support. They'd have a better chance to succeed if they ran against GeForce cards in the enthusiast segment, not Quadros and Teslas.


IF has 2 times more bandwidth than NVLink, afaik.


----------



## cucker tarlson (Aug 22, 2018)

IF is on die.


----------



## TEAMRED (Aug 22, 2018)

Seeing the specs, I suspect the 2080 will have over a 10% improvement over the 1080 Ti in general. That's probably why they defined a new benchmark and try to impress you with that. I hate the fact that they didn't go for 7 nm and are asking consumers to pre-pay for their immature technology; wait for the 30 series or Na'vi.


----------



## mtcn77 (Aug 22, 2018)

cucker tarlson said:


> IF is on die.


Still, that is how TR works.


----------



## cucker tarlson (Aug 22, 2018)

TEAMRED said:


> Seeing the specs, I suspect the 2080 will have over a 10% improvement over the 1080 Ti in general. That's probably why they defined a new benchmark and try to impress you with that. I hate the fact that they didn't go for 7 nm and are asking consumers to pre-pay for their immature technology; wait for the 30 series or Na'vi.


Really? I thought you'd be for early adoption of RT cores. My word....


----------



## mtcn77 (Aug 22, 2018)

TEAMRED said:


> Seeing the specs, I suspect the 2080 will have over a 10% improvement over the 1080 Ti in general. That's probably why they defined a new benchmark and try to impress you with that. I hate the fact that they didn't go for 7 nm and are asking consumers to pre-pay for their immature technology; wait for the 30 series or Na'vi.


The benchmark you mention: 45 milliseconds to render a scene that a competing 4x V100 multi-GPU setup took 55 milliseconds to render.


----------



## TheinsanegamerN (Aug 22, 2018)

mtcn77 said:


> AMD is winning on density. They still have features unsupported by DirectX, which could turn the tables.


Yes, all those special features that AMD fans were screaming would save AMD: DX12, async, Mantle, Vulkan, TressFX, the list goes on.

Special features don't matter unless you can get most developers to use them, and only Nvidia's GameWorks has seen such success. For 5 years "special features" were going to be AMD's ace up their sleeve, and for 5 years Nvidia has dominated them in sales. AMD needs to focus less on special features they can't support and more on producing fast GPUs.



> You are free to doubt that balance however the case on gpu-mining should provide pointers on how they really stack up against one another.


Performance in one application ≠ performance overall. You could just as easily point to Nvidia's gaming performance and CUDA performance in pro applications and say "You can doubt it, but that shows how they really stack up".

Regardless of how good Vega is (which is highly subjective, based on application), Vega was over a year late to market, power hungry, with very little OC capability, and was hampered by minuscule availability and HBM production. The result was Nvidia capturing a huge portion of the market using now 2-year-old GPUs, because AMD never bothered to show up. You can't just leave an entire generation behind and expect people to continue supporting your brand.

AMD now considering leaving a second generation to Nvidia does two things. First, it creates an even stronger idea that AMD simply can't compete on the high end, reinforcing the "mindshare" that many AMD fans are convinced exists. In reality, it is people being uncertain about investing in a brand when said brand cannot consistently show UP to compete. The second is that it gives Nvidia a captive market to milk for $$$, which helps keep them economically ahead of AMD, able to make bigger investments in the development of new tech, perpetually keeping AMD in a position of catching up.


----------



## cucker tarlson (Aug 22, 2018)

mtcn77 said:


> Still, that is how TR works.


Yes, but that's multiple dies in one package, not a single big chip like Vega 20. That's what I meant; Nvidia has a more efficient way to connect those.


----------



## mtcn77 (Aug 22, 2018)

TheinsanegamerN said:


> Yes, all those special features that AMD fans were screaming would save AMD: DX12, async, Mantle, Vulkan, TressFX, the list goes on.
> 
> Special features don't matter unless you can get most developers to use them, and only Nvidia's GameWorks has seen such success. For 5 years "special features" were going to be AMD's ace up their sleeve, and for 5 years Nvidia has dominated them in sales. AMD needs to focus less on special features they can't support and more on producing fast GPUs.
> 
> ...


You cannot expect to compete when developers won't utilise your hardware properly. Look at how long it took them to support specific feature sets in AMD hardware: right up until Nvidia launched its next generation. So it is pointless to argue, when you are losing on time to market, however much you beat the competition on paper.



cucker tarlson said:


> Yes, but that's multiple dies in one package, not a single big chip like Vega 20. That's what I meant; Nvidia has a more efficient way to connect those.


_Right,_ because TR and Epyc's are small chips?


----------



## ShurikN (Aug 22, 2018)

Vayra86 said:


> I think they will still use GCN in some revised form, but just glue them together like TR/EPYC


I believe a certain RTG head honcho (don't know the name) stated in an interview that there will be no Infinity Fabric-like solution on GPUs in near future.


----------



## mtcn77 (Aug 22, 2018)

ShurikN said:


> I believe a certain RTG head honcho (don't know the name) stated in an interview that there will be no Infinity Fabric-like solution on GPUs in near future.


I recall it was mentioned as a possibility.
∞, fabric-gpu, or fabric-cpu


----------



## Vya Domus (Aug 22, 2018)

TheinsanegamerN said:


> it creates an even stronger idea that AMD simply can't compete on the high end, reinforcing the "mindshare" that many AMD fans are convinced exists.



The "mindshare" that AMD fanboys have is that AMD can't compete and this reinforces it ? What the hell ?


----------



## siluro818 (Aug 22, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from Nvidia.


You keep forgetting that AMD powers both the PlayStation and Xbox consoles - current and next-gen - and that wouldn't be the case if they didn't own ATI.
Also, the cards that address the midrange and budget market segments comprise something like 90% of the whole market, so it makes sense for them to ignore the top ranks.
Those GPUs grab headlines, but it's only enthusiasts who actually buy them.


----------



## mouacyk (Aug 22, 2018)

The aunty is letting the nephew have his fun.  Raja needs to step it up with that Intel R&D budget.  2020 is too far away.


----------



## RejZoR (Aug 22, 2018)

Vayra86 said:


> I think they will still use GCN in some revised form, but just glue them together like TR/EPYC



If it works, who cares, really? Ryzen is proof of that. It's a "glued" together CPU, but it works and it's super cost efficient. And as it turns out, still very power efficient too.

Looks like they're gonna keep Vega for compute, where it really shines, and focus on making Navi gamer-focused. Really, NVIDIA's way of splitting compute and gaming rendering into two tiers works, whereas AMD's attempt to combine both just doesn't get momentum. Maybe it turns out GCN can work well with ray tracing with "minor" changes, but it's still losing steam for classic game rendering. But that's all so far away it's hard to say anything.


----------



## mtcn77 (Aug 22, 2018)

RejZoR said:


> If it works, who cares, really? Ryzen is proof of that. It's a "glued" together CPU, but it works and it's super cost efficient. And as it turns out, still very power efficient too.
> 
> Looks like they're gonna keep Vega for compute, where it really shines, and focus on making Navi gamer-focused. Really, NVIDIA's way of splitting compute and gaming rendering into two tiers works, whereas AMD's attempt to combine both just doesn't get momentum. Maybe it turns out GCN can work well with ray tracing with "minor" changes, but it's still losing steam for classic game rendering. But that's all so far away it's hard to say anything.


Afaik, ray tracing got adoption because rasterization hit its limits. This is not so bad for AMD in that sense; they had fewer rasterizers installed.


----------



## FordGT90Concept (Aug 22, 2018)

So all of the eggs are in the Navi basket yet, it feels like we're looking across the Sahara desert for a glimmer of light reflecting off of a distant diamond when the sun is just so.  Sadness, indeed.


----------



## londiste (Aug 22, 2018)

mtcn77 said:


> AMD is winning on density. They still have features unsupported by DirectX, which could turn the tables. You are free to doubt that balance; however, the case of GPU mining should provide pointers on how they really stack up against one another.


Density of what? Transistor density is pretty much the same, as the chips are manufactured in the same place. In general, AMD has had larger chips competing with smaller Nvidia chips.
Both vendors have features unsupported by DirectX, mostly because they are not that mainstream or just not that useful.
GPU mining does not have a single "this is good" GPU. There are different algorithms for different coins that favor different architectures or aspects of GPUs, as well as memory systems.



Vya Domus said:


> This is going to be a turning point. If the 20 series is successful and people pay up it will likely mark the end of any effort AMD will make to compete in the high end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want to have the same margins then they'll have to ask the same prices, either way the consumer will be screwed.


AMD has had lower margins for a while now. When it comes to the 20 series, Nvidia very likely does not have its usual margins either. These are expensive cards to produce.



mtcn77 said:


> IF has 2 times more bandwidth than NVLink, afaik.


Both are scalable to a very large degree; neither is specifically slower or faster. The specific implementation of an interconnect is matched to some optimization point for the use case.



mtcn77 said:


> _Right,_ because TR and Epyc's are small chips?


Yes, they are: 2 or 4 Zen/Zen+ dies of 209/213 mm² each.


----------



## RejZoR (Aug 22, 2018)

I think Navi will be an assembly of 4x enhanced Polaris cores at 7nm glued together on single PCB. There were hints of this glueing together since Vega emerged and they have tons of experience from Ryzen. Or maybe smaller Vega cores.


----------



## Camm (Aug 22, 2018)

AMD has three years to completely capitalise on Intel's failings before Intel can realistically catch up in the CPU space. Furthermore, AMD has an incredibly solid compute architecture in Vega, and a market that's ready to pay a few thousand - up to tens of thousands - per card, versus the gaming market that keeps whinging for 'competition' but, when faster, cheaper, more efficient, or all three attributes are available, still buys Nvidia.

AMD GPU enthusiasts, I wouldn't be expecting anything until mid-2019 at the earliest, and in all honesty, I wouldn't expect a remotely competitive architecture until 2020.


----------



## londiste (Aug 22, 2018)

RejZoR said:


> I think Navi will be an assembly of 4x enhanced Polaris cores at 7nm glued together on single PCB. There were hints of this glueing together since Vega emerged and they have tons of experience from Ryzen. Or maybe smaller Vega cores.


It won't. There was a guy high up in AMD's GPU part that specifically said Navi is not MCM.


----------



## mtcn77 (Aug 22, 2018)

londiste said:


> Density of what? Transistor density is pretty much the same, as the chips are manufactured in the same place. In general, AMD has had larger chips competing with smaller Nvidia chips.
> Both vendors have features unsupported by DirectX, mostly because they are not that mainstream or just not that useful.
> GPU mining does not have a single "this is good" GPU. There are different algorithms for different coins that favor different architectures or aspects of GPUs, as well as memory systems.
> 
> ...


Can I quote you on that? Directx is very favourable towards Nvidia. Thread workgroup sizes 'match' Nvidia, 2 kernels to reach peak size. Need I say more?


----------



## R0H1T (Aug 22, 2018)

Camm said:


> *AMD has three years to completely capitalise on Intel's failings before Intel can realistically catch up in the CPU space.* Furthermore, AMD has an incredibly solid compute architecture in Vega, and a market that's ready to pay a few thousand - up to tens of thousands - per card, versus the gaming market that keeps whinging for 'competition' but, when faster, cheaper, more efficient, or all three attributes are available, still buys Nvidia.
> 
> AMD GPU enthusiasts, I wouldn't be expecting anything until mid-2019 at the earliest, and in all honesty, I wouldn't expect a remotely competitive architecture until 2020.


One year, realistically speaking. AMD needs to win in servers. AMD will gain enough space in desktops and notebooks eventually, but they need the *enterprise* market.


----------



## londiste (Aug 22, 2018)

mtcn77 said:


> Can I quote you on that? Directx is very favourable towards Nvidia. Thread workgroup sizes 'match' Nvidia, 2 kernels to reach peak size. Need I say more?


Yeah, can you elaborate? Why is that favourable towards Nvidia?


----------



## ShurikN (Aug 22, 2018)

mtcn77 said:


> I recall it was mentioned as a possibility.
> fabric-gpu, or fabric-cpu


Yeah, but not for Navi.


----------



## renz496 (Aug 22, 2018)

mtcn77 said:


> AMD is winning on density. *They still have features unsupported by DirectX, which could turn the tables.* You are free to doubt that balance; however, the case of GPU mining should provide pointers on how they really stack up against one another.



That's the problem. AMD likes to push things that are hard for developers to take advantage of. What happened to stuff like primitive shaders? It seems AMD likes to ditch proper support even before developers start using it.


----------



## mtcn77 (Aug 22, 2018)

renz496 said:


> That's the problem. AMD likes to push things that are hard for developers to take advantage of. What happened to stuff like primitive shaders? It seems AMD likes to ditch proper support even before developers start using it.


No, hardware journalism, such as Guru3D, took it upon themselves to disclaim the result - mind you, not anybody else's, their own result. We both know what happened to the Nvidia series when it was enabled...



londiste said:


> Yeah, can you elaborate? Why is that favourable towards Nvidia?


2048 is 2 kernels of 1024. 2560 is not. You cannot have a third kernel without finishing the other two. Essentially, you are always at 80% of your peak whether the code allows it or not.
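The 80% arithmetic above can be sketched as a toy model (purely illustrative; `peak_utilisation` is a made-up helper, and real GPU occupancy depends on wavefront sizes, registers, and scheduling, not this simple division):

```python
# Toy model of the 80% argument: DirectX caps a compute-shader
# thread group at 1024 threads, so a shader count that is a
# multiple of 1024 (e.g. 2048) can be covered by full groups,
# while a 2560-shader part is left at 80% in this model.
MAX_GROUP = 1024  # Direct3D thread-group limit

def peak_utilisation(shader_count: int) -> float:
    full_groups = shader_count // MAX_GROUP
    return full_groups * MAX_GROUP / shader_count

print(peak_utilisation(2048))  # 1.0 - two full groups fit exactly
print(peak_utilisation(2560))  # 0.8 - "always at 80% of your peak"
```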


----------



## AnarchoPrimitiv (Aug 22, 2018)

Four stacks of HBM2 would have about 2TB/s bandwidth, right?


----------



## efikkan (Aug 22, 2018)

TEAMRED said:


> Seeing the specs, I suspect 2080 will have over 10% improvement over 1080ti in general. That's probably why they defined a new benchmark and try to impress you with that. I hate the fact they didn't go for 7nm and try to ask the consumers to pre-pay their immature technology, wait for 30 series or Na'vi


Turing is a new architecture with a completely different SM structure than Pascal; assuming they would scale similarly would be a mistake.



mtcn77 said:


> AMD is winning on density.


If theoretical specs mattered, AMD would be king.



mtcn77 said:


> You cannot expect to compete when developers won't utilise your hardware properly. Let's see how long it took them to support specific feature sets in AMD hardware: right until Nvidia launched their next. So it is pointless to argue when you are losing on time to market however much you beat the competition on paper.


For the last 2-3 years, there have been far more AMD partner games than Nvidia partner games, primarily due to consoles. Lack of "utilization" is not the problem, but lack of hardware improvements.



RejZoR said:


> I think Navi will be an assembly of 4x enhanced Polaris cores at 7nm glued together on single PCB. There were hints of this glueing together since Vega emerged and they have tons of experience from Ryzen. Or maybe smaller Vega cores.


Not for Navi, so not anytime soon.


----------



## mtcn77 (Aug 22, 2018)

efikkan said:


> If theoretical specs mattered, AMD would be king.
> 
> 
> For the last 2-3 years, there have been far more AMD partner games than Nvidia partner games, primarily due to consoles. Lack of "utilization" is not the problem, but lack of hardware improvements.


It is the same hardware since Evergreen (HD 5000) - the DirectX maximum thread count is still 1024.
It is the same hardware since the HD 6900 series - consoles just started harnessing EQAA for spatial-domain supersampling, as in Far Cry 4's CLUT rendering. We are still waiting on its integration with checkerboard rendering, which brings...
Checkerboard rendering - the hardware to render a rotated grid has been available since HD 5000; it just became widespread with consoles using the checkerboard pattern in Frostbite games.


----------



## qubit (Aug 22, 2018)

Proper competition for NVIDIA! Yeah, we can dream...


----------



## charles4691 (Aug 22, 2018)

And the hype talk will begin. Sigh


----------



## R0H1T (Aug 22, 2018)

AnarchoPrimitiv said:


> Four stacks of HBM2 would have about *2TB/s* bandwidth, right?


HBM3 possibly, certainly not with HBM2.


----------



## londiste (Aug 22, 2018)

efikkan said:


> Turing is a new architecture with a completely different SM structure than Pascal; assuming they would scale similarly would be a mistake.


Based on what we know so far, it should be unchanged from Volta, and Volta performance scaled linearly enough against Pascal.


----------



## carex (Aug 22, 2018)

If AMD launched Navi today, they could easily throw Nvidia off the chart. Without Infinity Fabric and 7 nm it's not possible, as die size will be an issue; I can see 4x dies (6144 cores) challenging the Titan xxx.

But in the end, they have to launch something; otherwise, dream on.
Seriously, how lazy they are. SAD


----------



## FordGT90Concept (Aug 22, 2018)

AnarchoPrimitiv said:


> Four stacks of HBM2 would have about 2TB/s bandwidth, right?


Probably more like 1.2 TB/s. Vega 10 with two stacks is about 640 GB/s, if memory serves. Very doubtful they're changing from HBM2.
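For reference, per-stack HBM2 bandwidth is just interface width times pin rate. A rough sketch (the pin rates below are assumptions for illustration; actual products vary by memory clock, and `hbm2_bandwidth_gbs` is a hypothetical helper):

```python
# Each HBM2 stack presents a 1024-bit interface; aggregate
# bandwidth is stacks * interface width * per-pin data rate.
BITS_PER_STACK = 1024

def hbm2_bandwidth_gbs(stacks: int, pin_rate_gbps: float) -> float:
    """Aggregate bandwidth in GB/s."""
    return stacks * BITS_PER_STACK * pin_rate_gbps / 8

# Two stacks at ~1.89 Gbps/pin -> ~484 GB/s (Vega 64 ballpark)
print(hbm2_bandwidth_gbs(2, 1.89))
# Four stacks at 2.0 Gbps/pin -> 1024 GB/s, i.e. about 1 TB/s,
# closer to the 1.2 TB/s guess than to 2 TB/s
print(hbm2_bandwidth_gbs(4, 2.0))
```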


----------



## efikkan (Aug 22, 2018)

carex said:


> if AMD launches Navi today they can easily throw nvidia out from the chart


Even if they did launch it now at 7 nm (not possible yet), it would still barely exceed Vega. Navi is not going to be a high end chip.


----------



## CheapMeat (Aug 22, 2018)

I like how it's doom and gloom because the very highest end isn't dirt cheap and people "need" to get the very highest end every single time one is released.


----------



## Foobario (Aug 22, 2018)

cucker tarlson said:


> yes, but that's multi chip on one die, not single big chips like vega 20. that's what I meant, nvidia has a more efficient way to connect those.


You do realize that IF can attach an EPYC to a Vega 20, or even to a Xilinx FPGA with a little collaboration between the companies. Considering that AMD and Xilinx have been sharing booth space at various tech events, the collaboration is well underway.

On the last conference call Lisa Su confirmed that Rome and Vega 20 will be offered as a combined package to customers using the IF interconnect.


----------



## carex (Aug 22, 2018)

3 more events to go for AMD this year... let's see


----------



## ShurikN (Aug 22, 2018)

carex said:


> 3 more events to go for AMD this year... let's see


Most of those, if not all 3, will probably be about Zen 2 (EPYC), Vega Instinct, AI learning, etc... maybe some DirectX ray tracing. I doubt AMD will have anything worthwhile for consumers until 7 nm matures. And that's in 2019.


----------



## gamerman (Aug 22, 2018)

If... IF... AMD dares release a GPU (of course December, January, and so on), prepare for over a 400 W reference TDP, even with 7 nm manufacturing.

But I can even bet that in December 2018 AMD will release nothing but the old Vega on 7 nm. Junk.

In 2020, when Intel comes to the GPU market, it's the end of AMD GPUs forever.


----------



## efikkan (Aug 22, 2018)

The biggest problem with Vega 20 is that it doesn't offer much for consumers; it's intended for professional use, with FP64 and massive memory bandwidth. Volumes on 7 nm will be very limited initially, and wasting precious dies on consumer products with minimal gains over Vega 10 sounds like a strange move.


----------



## yeeeeman (Aug 22, 2018)

I like all the sad comments here. They prove one simple thing: people have money to throw at stuff that they don't need. I mean, look at how many people own a fast Pascal card which can play everything at either 1080P or 4K with 60FPS and complain about high prices for Turing cards. Why do you even need something better if you can play everything? To feed the never ending obsession of having the best/latest hardware parts.


----------



## Super XP (Aug 22, 2018)

Vya Domus said:


> This is going to be a turning point. If the 20 series is successful and people pay up it will likely mark the end of any effort AMD will make to compete in the high end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want to have the same margins then they'll have to ask the same prices, either way the consumer will be screwed.
> 
> You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.



People need to STOP paying high prices for Nvidia GPUs. Doing so wrecks the PC Gaming market. 

The Consumer should be demanding cheaper prices all across the board.  And that ain't happening.  People seem to be OK bending over for Nvidia.


----------



## john_ (Aug 22, 2018)

Many were hoping for AMD to produce something good so they could go and buy cheaper Intel and Nvidia hardware. For me, I am glad to see AMD throwing R&D money where it will make money, not where people would end up laughing in its face, saying "Thank you for helping us buy cheaper products from your competitors" and adding that "AMD hardware is for the poor".

It's obvious that we are in a "Bulldozer" era for GPUs, so Nvidia will go unchallenged for the next 2-3 years. You want better GPUs from AMD and better competition? Support Ryzen as an option for those who ask you for hardware advice.


----------



## Fx (Aug 22, 2018)

Vya Domus said:


> You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.



As of right now, I have no inclination to purchase the 2080 Ti even though I can afford it. The purchase would make me feel absolutely disgusted with myself. The second half of that principle is that I don't want to be a part of the consumers that reinforce nvidia's perception that these price ranges are acceptable by purchasing one.

I'm not sure how I am going to upgrade from my 980 Ti, but the position that nvidia is placing me in is bs.


----------



## carex (Aug 22, 2018)

ShurikN said:


> Most of those, if not all 3, will probably be about Zen2 (Epyc), Vega instinct, AI learning etc... Maybe some DirectX Ray-tracing. I doubt AMD will have anything worthwhile for the consumers until 7nm matures. And that's in 2019.


True, AMD is such a hopeless competitor that they don't care about regular consumers.



yeeeeman said:


> I like all the sad comments here. They prove one simple thing: people have money to throw at stuff that they don't need. I mean, look at how many people own a fast Pascal card which can play everything at either 1080P or 4K with 60FPS and complain about high prices for Turing cards. Why do you even need something better if you can play everything? To feed the never ending obsession of having the best/latest hardware parts.


True, but not every game, especially at 4K... 120 fps will be the end point for me.



Super XP said:


> People need to STOP paying high prices for Nvidia GPUs. Doing so wrecks the PC Gaming market.
> 
> The Consumer should be demanding cheaper prices all across the board.  And that ain't happening.  People seem to be OK bending over for Nvidia.



Don't worry, the midrange cards still hold the top four places in the Steam survey list, especially the 1050 Ti. And do you think they will have that much quantity to sell? I mean, look at the die size.




gamerman said:


> If... IF AMD dares release some GPU, of course December... January and so on, prepare for over 400 W reference TDP, even on a 7 nm process.
> 
> But I can even bet that in December 2018 AMD will release nothing other than old Vega on 7 nm. Junk.
> 
> In 2020, when Intel comes to the GPU market, it's the end of AMD GPUs forever.


Nah, if they dare... which I doubt as well... 7nm is incredibly efficient, as much as 60% or more, plus the die size will be halved as well, so they can increase the transistor count. So no 350+ W, unless they nonsensically overclock it like the Vega 64.


----------



## semitope (Aug 22, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from nvidia.



Part of the reason they aren't doing the highest end is that they are using GPUs elsewhere: consoles, segments below the high end, pro GPUs. Why on earth would they sell Radeon just because they aren't performing faster than a 1080 Ti?


----------



## prtskg (Aug 22, 2018)

Seems like I am among the rare few who are happy with this news. Going with separate dies for compute and gaming will keep each GPU good at its job, which AMD wasn't able to do earlier because of its economic situation. Now things are different, as they have money again. Hopefully their GPUs from now on will be not just jacks of all trades but masters of the specific field for which they are made.


----------



## semitope (Aug 22, 2018)

prtskg said:


> Seems like I am among the rare few who are happy with this news. Going with separate dies for compute and gaming will keep each GPU good at its job, which AMD wasn't able to do earlier because of its economic situation. Now things are different, as they have money again. Hopefully their GPUs from now on will be not just jacks of all trades but masters of the specific field for which they are made.



oh yeah this is good. I said this months ago but didn't even realize this is the indication. They've needed to split the two so they could address the issues with their chips for a while. All that compute power and still not pushing out the pixels like they should. 

Maybe they get both subsidized. Sony and MS pay them to develop gaming GPUs and maybe Tesla and others for compute GPUs.


----------



## mtcn77 (Aug 22, 2018)

Features missing from AMD's arsenal that I know have great potential:

Shadow cache and primitive discard acceleration,
Rotated grid sampling and coverage lookup table(virtual supersampling),
Rapid packed math.
Come on, FP16 throughput got upgraded 30% in GCN 1.2 - we still haven't seen it on PC - and now it is up by a further 22% (rapid packed math). Trust me, we won't see any highlights unless Microsoft pushes the standard, not the vendors themselves.


----------



## ShurikN (Aug 22, 2018)

john_ said:


> Many were hoping AMD would produce something good so they could go and buy cheaper Intel and Nvidia hardware. For me, I am glad to see AMD throwing R&D money where it will make money, not where people would end up laughing in its face, saying "Thank you for helping us buy cheaper products from your competitors" and adding that "AMD hardware is for the poor".


If I could give you a +100 for this, I would.


----------



## efikkan (Aug 22, 2018)

mtcn77 said:


> Features missing from AMD's arsenal that I know for great potential;
> Shadow cache and primitive discard acceleration,
> Rotated grid sampling and coverage lookup table(virtual supersampling),
> Rapid packed math.
> ...


fp16, which AMD calls "Rapid Packed Math", should be one of the biggest no-brainers of them all. The theory is simple: supported cards (Vega, GP100, GV100) double their FPU throughput vs. fp32. Most of the shader workload in games is what we call "fragment processing", which typically accounts for 60-80% of the workload. And the implementation is rather simple: just adjust the data types in the shaders and the code, and fp16 is still plenty for most games, even in HDR. In theory this could yield boosts of ~20-40% when not otherwise bottlenecked, but therein lies the problem: we all know that Polaris and Vega have plenty of FPU throughput already. When they struggle to saturate the resources they already have, effectively "adding" more resources will only yield marginal gains.

fp16 will eventually become the norm, but it wouldn't save AMD now.
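That ~20-40% figure is basically Amdahl's law applied to the fp16-convertible share of frame time. A quick sketch (the convertible fractions below are illustrative assumptions, not measurements):

```python
def fp16_speedup(convertible_fraction: float, fp16_gain: float = 2.0) -> float:
    """Amdahl-style estimate: only the fp16-convertible share of frame
    time benefits from the doubled FPU throughput; the rest is unchanged."""
    return 1.0 / ((1.0 - convertible_fraction) + convertible_fraction / fp16_gain)

# Illustrative only: assume 33-57% of frame time is fp16-friendly shader math
for frac in (0.33, 0.45, 0.57):
    print(f"{frac:.0%} convertible -> {fp16_speedup(frac) - 1.0:+.0%} faster")
```

Which lands right in that 20-40% window, and also shows why the gains shrink fast if the GPU is bottlenecked elsewhere.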


----------



## mtcn77 (Aug 22, 2018)

efikkan said:


> fp16, which AMD calls "Rapid Packed Math", should be one of the biggest no-brainers of them all. The theory is simple: supported cards (Vega, GP100, GV100) double their FPU throughput vs. fp32. Most of the shader workload in games is what we call "fragment processing", which typically accounts for 60-80% of the workload. And the implementation is rather simple: just adjust the data types in the shaders and the code, and fp16 is still plenty for most games, even in HDR. In theory this could yield boosts of ~20-40% when not otherwise bottlenecked, but therein lies the problem: we all know that Polaris and Vega have plenty of FPU throughput already. When they struggle to saturate the resources they already have, effectively "adding" more resources will only yield marginal gains.
> 
> fp16 will eventually become the norm, but it wouldn't save AMD now.


Perhaps you are being taken for a ride by the elusiveness of zero-risk bias. It is not as easy as you think. The short shaders are ganged into a longer shader, like VLIW packing, but in the actual pipeline. VLIW doesn't occur on the same lane.


----------



## Amite (Aug 22, 2018)

And please don't forget to get CrossFire ready for the launch.


----------



## Mr.Mopar392 (Aug 22, 2018)

gamerman said:


> If... IF AMD dares release some GPU, of course December... January and so on, prepare for over 400 W reference TDP, even on a 7 nm process.
> 
> But I can even bet that in December 2018 AMD will release nothing other than old Vega on 7 nm. Junk.
> 
> In 2020, when Intel comes to the GPU market, it's the end of AMD GPUs forever.



seriously you sound so ignorant!


----------



## yogurt_21 (Aug 22, 2018)

I get it "Navi" as in we will navi make this gpu available for desktop gamers, navi. 

Navi is supposed to be for the PS5 I thought... so what no love for desktops?


----------



## Flyordie (Aug 23, 2018)

Vayra86 said:


> True. But it was never a healthy market to begin with; when ATI fell you could already see this scenario. Ironically most GPU makers have themselves to blame for failing: if you don't score design wins and capitalize on them, you just lose. It's a tough industry but also one where real progress and innovation gets rewarded well.
> 
> *I don't think this is even up to consumers really. We get the worst chips on each wafer*!




Unless you got a Liquid RX Vega or get a Threadripper CPU. lol


----------



## Prima.Vera (Aug 23, 2018)

yeeeeman said:


> I like all the sad comments here. They prove one simple thing: people have money to throw at stuff that they don't need. I mean, look at how many people own a fast Pascal card which can play everything at either 1080P or 4K with 60FPS and complain about high prices for Turing cards. Why do you even need something better if you can play everything? To feed the never ending obsession of having the best/latest hardware parts.


I have a very expensive GTX 1080 card and I can barely hit 40-50 fps in the latest games at 3440x1440 on my 100 Hz capable monitor. You do understand the frustration of wanting a better card to pass at least the 75 fps barrier...


----------



## Nkd (Aug 23, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from nvidia.



You wouldn't want AMD selling the GPU division at this point, when they earned a crapload more revenue from CPUs and GPUs (due to mining demand). AMD finally has the budget to expedite GPU development. Do you hear the silence? Yep, that is actually a good thing. Lisa has them laser focused not on marketing but on actual development, I believe. She has hired some smart people and even shifted some of the Ryzen brains to fine-tune GPUs at RTG. So she means business. Mark my words: since they are succeeding on the CPU side, I think they will come out swinging in a few years in the GPU department. Word is they are pulling all the muscle to get the next-gen architecture out as soon as they can. They already have a fast track on GPUs. I think Navi will be a home run with mainstream users, and then they will bring out a true next-gen part soon after that, I believe.



yogurt_21 said:


> I get it "Navi" as in we will navi make this gpu available for desktop gamers, navi.
> 
> Navi is supposed to be for the PS5 I thought... so what no love for desktops?



That's not what was said. It's said they developed Navi in partnership with Sony, so a big pool of engineers was devoted to Navi to work alongside Sony engineers. So Navi could have some tricks up its sleeve. Depends on how far they can push it. I think Navi is probably already done, though. Then there's the next-gen architecture that was labeled "next gen" in their roadmap, lol! It's supposed to be ready in 2020, I think. But they might be able to push it out sooner since they have more budget to throw at it now.


----------



## AlwaysHope (Aug 23, 2018)

Lisa Su is a better CEO than most give her credit for.


Nkd said:


> You wouldn't want AMD selling the GPU division at this point, when they earned a crapload more revenue from CPUs and GPUs (due to mining demand). AMD finally has the budget to expedite GPU development. Do you hear the silence? Yep, that is actually a good thing. Lisa has them laser focused not on marketing but on actual development, I believe. She has hired some smart people and even shifted some of the Ryzen brains to fine-tune GPUs at RTG. So she means business. Mark my words: since they are succeeding on the CPU side, I think they will come out swinging in a few years in the GPU department. Word is they are pulling all the muscle to get the next-gen architecture out as soon as they can. They already have a fast track on GPUs. I think Navi will be a home run with mainstream users, and then they will bring out a true next-gen part soon after that, I believe.
> 
> 
> 
> That's not what was said. It's said they developed Navi in partnership with Sony, so a big pool of engineers was devoted to Navi to work alongside Sony engineers. So Navi could have some tricks up its sleeve. Depends on how far they can push it. I think Navi is probably already done, though. Then there's the next-gen architecture that was labeled "next gen" in their roadmap, lol! It's supposed to be ready in 2020, I think. But they might be able to push it out sooner since they have more budget to throw at it now.




Agreed, Lisa Su is a better CEO than most give her credit for.


----------



## NeuralNexus (Aug 23, 2018)

AlwaysHope said:


> Lisa Su is a better CEO than most give her credit for.
> 
> 
> 
> Agreed, Lisa Su is a better CEO than most give her credit for.



Her Uncle is Jensen Huang, so I guess she looks up to him for direction.


----------



## evernessince (Aug 23, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD should just sell Radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from nvidia.



I wouldn't say that.  Going from 14nm to 7nm should give Vega a significant performance boost.


----------



## R0H1T (Aug 23, 2018)

evernessince said:


> I wouldn't say that.  Going from 14nm to 7nm should give Vega a significant performance boost.


Yes it will; now if only they'd release a GPU for gaming.


----------



## Zubasa (Aug 23, 2018)

NeuralNexus said:


> Her Uncle is Jensen Huang, so I guess she looks up to him for direction.


Let's just hope, for F's sake, she doesn't look to him for pricing direction.


----------



## R0H1T (Aug 23, 2018)

Zubasa said:


> Lets just hope for F's sake she doesn't look to him for pricing direction


Or Ferraris & leather jackets?


----------



## FordGT90Concept (Aug 23, 2018)

yogurt_21 said:


> I get it "Navi" as in we will navi make this gpu available for desktop gamers, navi.
> 
> Navi is supposed to be for the PS5 I thought... so what no love for desktops?


We're getting PS5 scraps.  Oh the hue-manatee!



NeuralNexus said:


> Her Uncle is Jensen Huang, so I guess she looks up to him for direction.


They're distant cousins.


----------



## Vya Domus (Aug 23, 2018)

CheapMeat said:


> I like how it's doom and gloom because the very highest end isn't dirt cheap and people "need" to get the very highest end every single time one is released.



You seriously think the 2060, 2050 and whatever else are going to be as cheap as they used to be? The massive price hike is going to trickle down to all of their products.

It's absolutely baffling that you wouldn't see this as an issue.


----------



## londiste (Aug 23, 2018)

You are presenting the principles of pricing and marketing products as a philosophical question. Whether the product naming and lineup should be changed is not a technical issue but pure marketing. I am sure marketing departments are working hard to figure out the practical aspects of all this.

There are limits to how products can be priced. Given what Nvidia's 20-series technically is, I strongly suspect they would make a loss at last-gen prices. The same applies to AMD: if they could have sold Vegas maybe $50 cheaper, these would have been reasonably successful; instead, Vegas were and are in large part available for above MSRP. They simply are not able to do that if they want to make some profit.


----------



## Vya Domus (Aug 23, 2018)

FordGT90Concept said:


> We're getting PS5 scraps.



What AMD will make for the next generation of consoles is much more relevant than people think. Particularly, what are they going to do about all this ray-tracing stuff? Will they include dedicated hardware for it? If not, all this will likely remain in obscurity; RTX isn't as easy to throw into a game as Gameworks is. When the next generation of game engines is written and the hardware for ray-tracing is absent from consoles, it's probably not going to go well for all of this stuff. Nvidia can't keep persuading developers to work extra just for one platform forever.

I am hoping that DXR is a sign of the possibility that the next generation of consoles does in fact have this capability.


----------



## londiste (Aug 23, 2018)

Vya Domus said:


> RTX isn't as easy to throw into a game like Gameworks is.


Why is that? Nvidia seems to have a fairly comprehensive software package for RTX that developers were able to put into games rather quickly. Both Nvidia and devs have said the RTX software is easy enough to integrate into games as well as game engines. Didn't DICE (the Battlefield V dev) say they got RTX software and cards less than two weeks ago?

Based on what we know, RTX is an implementation of DXR. Major engine developers have already implemented this into their engines or are doing so. Adoption remains an open question, but more in terms of adoption speed, not whether raytracing-based solutions are the future.


----------



## FordGT90Concept (Aug 23, 2018)

Vya Domus said:


> What AMD will make for the next generation of consoles is much more relevant than people think. Particularly what are they going to do about all this ray-tracing stuff. Will they include dedicated hardware for it ? If not, all this will likely remain into obscurity, RTX isn't as easy to throw into a game like Gameworks is. When the next generation of game engines will be written and the hardware for ray-tracing is absent from consoles it's probably not going to go well for all of this stuff. Nvidia can't keep persuading developers to work extra just for one platform forever.
> 
> I am hoping that DXR is a sign of the possibility that the next generation of consoles does in fact have this capability.


There is 0% chance of ray tracing in any console in the next five years, probably ten years.  A $1000, 250w card can barely manage it at 1080p.  Sony/Microsoft want more like 100w (for the complete system) and 4K (because that's what people are buying).

I suspect what Navi will have is a dedicated scaler, so the game can render at any resolution and get scaled to any resolution at virtually no cost. That's something PC gamers could use--especially if it is something that can happen on the fly. If a game fails to deliver 30 fps at 1080p, for example, it can drop the renderer to 900p and scale it up to 1080p to give it an artificial frame rate boost. That's something consoles really need.
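A controller like that could be as dumb as a feedback loop on frame time. A hypothetical sketch (all the names, thresholds and step sizes here are made up for illustration):

```python
def adjust_render_scale(scale: float, frame_ms: float,
                        target_ms: float = 33.3,  # 30 fps budget
                        step: float = 0.05, min_scale: float = 0.5) -> float:
    """Drop the internal render resolution when a frame blows the budget,
    creep back up when there is headroom; a (cheap) hardware scaler would
    then upscale the result to the native output resolution."""
    if frame_ms > target_ms:               # over budget: render fewer pixels
        return max(min_scale, scale - step)
    if frame_ms < target_ms * 0.85:        # clear headroom: sharpen back up
        return min(1.0, scale + step)
    return scale                           # close enough: hold

scale = 1.0
for ms in (40.0, 38.0, 36.0, 30.0, 25.0):  # simulated frame times
    scale = adjust_render_scale(scale, ms)
    print(f"frame {ms:>4} ms -> render at {round(1080 * scale)}p")
```

Consoles already ship variants of this idea in software; the speculation above is about putting the scaling step itself into fixed-function hardware.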


----------



## Vya Domus (Aug 23, 2018)

londiste said:


> Why is that?



This is why: https://www.techspot.com/news/76073-shadow-tomb-raider-unable-maintain-60fps-geforce-rtx.html I would also point out that it's not as flawless as it seems, visually. Crystal Dynamics also said this is supposedly "early work" and that the RTX feature will be added after release. What does all that tell you? RTX takes a lot of time and effort to implement; Nvidia is doing a terrific job with their smoke and mirrors, but they can't hide that fact completely.



londiste said:


> Didn't DICE (Battlefield V dev) say they got RTX and cards less than two weeks ago?



And what does that tell you? Nothing. The BFV demo was also sub-60 fps, according to the people who saw it. Who the hell knows how much time it takes for a proper implementation, which this clearly isn't.



FordGT90Concept said:


> There is 0% chance of ray tracing in any console in the next five years, probably ten years.  A $1000, 250w card can barely manage it at 1080p.  Sony/Microsoft want more like 100w (for the complete system) and 4K (because that's what people are buying).



It's not impossible to get some dedicated hardware in there without destroying power consumption. I would also argue that ray-tracing is currently used in a wasteful way: it would make much more sense to use this stuff for approximating volumes and then have a normal shadow pass where you can use that information for shading, rather than having the entire shadow pass done through ray-tracing. That would offer a subtle improvement in visuals and it wouldn't be very expensive. I wouldn't completely discard the possibility of ray-tracing for the next generation of consoles.


----------



## londiste (Aug 23, 2018)

It does not necessarily tell you otherwise, either. These are early builds of both games with new tech. Once both the cards and the games (or the RT patch, at least, in the case of SoTR) are out, we will see how they fare.
According to some sources, even the reviewer drivers are not really ready yet.



FordGT90Concept said:


> There is 0% chance of ray tracing in any console in the next five years, probably ten years.  A $1000, 250w card can barely manage it at 1080p. Sony/Microsoft want more like 100w (for the complete system) and 4K (because that's what people are buying).


The Xbox 360 went down from an initial 180 W power consumption to half that by the Slim version. Pretty much the same for the PS3.
Both the Xbox One and PS4 did target 110-120 W, but those two are decidedly strange, with midrange hardware inside instead of the state-of-the-art CPU/GPU that previous generations had. The Xbox One X and PS4 Pro are both in the 170-180 W range again.



FordGT90Concept said:


> I suspect what Navi will have is a dedicated scaler so the game can render at any resolution and get scaled to any resolution at virtually no cost.  That's something PC gamers could use--especially if it is something can happen on the fly.  If a game fails to deliver 30 fps at 1080p, for example, it can drop the renderer to 900p and scale it up to 1080p to give it an artificial frame rate boost.  That's something consoles really need.


GPUs have had that for generations.
Do you mean hardware support for some advanced algorithm like checkerboard rendering? Btw, DLSS is exactly that.


----------



## FordGT90Concept (Aug 23, 2018)

Vya Domus said:


> It's not impossible to get some dedicated hardware in there without destroying power consumption. I would also argue that ray-tracing is currently used in wasteful way, it would make much more sense to use this stuff for approximating volumes and then have a normal shadow pass where you can use that information for shading rather than having the entire shadow pass stage done through ray-tracing. That would offer a subtle improvement in visuals and it wouldn't be very expensive, I wouldn't completely discard the possibility of ray-tracing for the next generation of consoles.


This doesn't make any sense.  RTRT performance is determined by the number of rays cast per frame.  The fewer the rays, the noisier the scene.  What NVIDIA is doing is cutting back on the number of rays and using AI to denoise the result.  When it comes to raytracing, shadows are simply the absence of light.  They don't get special treatment like they do with rasterizing.

GCN can't even manage 1 Gray/s. The RTX 2080 Ti does 10 Gray/s and the result is mediocre.

RTRT simply isn't AMD's focus.  Literally no one is demanding it outside of the professional market, and they already use Radeon Rays on Radeon Pro cards.
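For scale, that 10 Gray/s headline number spreads surprisingly thin once you divide it across pixels and frames (rough arithmetic; real gigaray figures are scene-dependent):

```python
rays_per_second = 10e9  # Nvidia's quoted RTX 2080 Ti figure
for w, h, fps in ((1920, 1080, 60), (3840, 2160, 60)):
    rays_per_pixel = rays_per_second / (w * h * fps)
    # ~80 rays/pixel/frame at 1080p60, ~20 at 4K60
    print(f"{w}x{h} @ {fps} fps: ~{rays_per_pixel:.0f} rays/pixel/frame")
```

A handful of rays per pixel per effect is exactly why the denoising step is mandatory.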


----------



## londiste (Aug 23, 2018)

Vya Domus said:


> I would also argue that ray-tracing is currently used in wasteful way, it would make much more sense to use this stuff for approximating volumes and then have a normal shadow pass where you can use that information for shading rather than having the entire shadow pass stage done through ray-tracing. That would offer a subtle improvement in visuals and it wouldn't be very expensive, I wouldn't completely discard the possibility of ray-tracing for the next generation of consoles.


Just the opposite. What RTX, as well as the entire current drive towards raytracing, represents is extremely heavy optimization. The solutions use raytracing (which is inherently wasteful) in the most optimal and efficient way possible.

What you describe is exactly how RTX does things. What raytracing provides here is both simplicity and accuracy. A raytraced shadow map is more accurate than the usual methods, and it allows doing more complex things in the simplest way. When calculating shadow maps for many objects and many light sources, current methods tend to need multiple passes for some of these things and run into performance bottlenecks fast. Raytracing does not really care.


----------



## Vya Domus (Aug 23, 2018)

FordGT90Concept said:


> This doesn't make any sense.  RTRT performance is determined by number of rays cast per frame.  The fewer the rays, the noisier the scene.  What NVIDIA is doing is cutting back on the number of rays and using AI to denoise the result.  When it comes to raytracings, shadows are simply the absence of light.  They don't get special treatment like they do with rasterizing.
> 
> GCN Can't even match 1 Gray/s. RTX 2080 Ti does 10 Gray/s and the result is mediocre.



What I am describing isn't pure ray-tracing; that's the point. For example, current global illumination techniques already involve the use of rays to determine indirect lighting. You could use this new hardware to improve upon that rather than introducing a completely different pass. It's extremely wasteful performance-wise, and you are not even getting proper results.


----------



## londiste (Aug 23, 2018)

Vya Domus said:


> What I am describing isn't pure ray-tracing; that's the point. For example, current global illumination techniques already involve the use of rays to determine indirect lighting. You could use this new hardware to improve upon that rather than introducing a completely different pass. It's extremely wasteful performance-wise, and you are not even getting proper results.


Dude, this is *EXACTLY* what RTX does 
Raytracing-based Global Illumination is one of the specific things Nvidia has been showcasing.
Full real-time raytracing is way beyond what current hardware can do.


----------



## FordGT90Concept (Aug 23, 2018)

RTRT without faking it takes petaflops of compute power.


----------



## Vya Domus (Aug 23, 2018)

londiste said:


> Dude, this is *EXACTLY* what RTX does



It simply isn't. RTX and DXR add an additional stage, just like when tessellation was introduced, and then you have dedicated hardware to be used for that particular stage. These stages are independent and are intertwined with traditional shaders; RTX cannot be used to accelerate existing shading structures, which is what I was saying would be more useful for the time being.


----------



## londiste (Aug 23, 2018)

What do you mean? It replaces some existing things (lighting and shadowing are the most obvious ones here) and then works its result back into the traditional rendering pipeline.


----------



## Vya Domus (Aug 23, 2018)

It doesn't replace anything; it's an addition.


----------



## londiste (Aug 23, 2018)

Vya Domus said:


> It doesn't replace anything; it's an addition.


How about generating shadow maps for example?


----------



## Vya Domus (Aug 23, 2018)

londiste said:


> How about generating shadow maps for example?



What about it? It used to be that you would do depth-testing with z-buffers, and now you can also use rays. It doesn't replace anything.
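The two approaches being contrasted, sketched as toy code (hypothetical and heavily simplified; real pipelines work in light space with bias tuning and acceleration structures):

```python
import math

def shadow_map_test(depth_map, texel, fragment_depth, bias=1e-3):
    # Rasterizer route: the light already rendered a depth buffer; the
    # fragment is shadowed if something closer to the light was recorded.
    return fragment_depth - bias > depth_map[texel]

def shadow_ray_test(spheres, point, light_pos):
    # Ray route: cast a ray from the shaded point toward the light and
    # report shadow if any occluder (here: a sphere) sits in between.
    d = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(c * c for c in d))
    d = [c / dist for c in d]
    for center, radius in spheres:
        oc = [p - c for p, c in zip(point, center)]
        b = sum(o * dc for o, dc in zip(oc, d))
        disc = b * b - (sum(o * o for o in oc) - radius * radius)
        if disc >= 0.0:
            t = -b - math.sqrt(disc)
            if 1e-4 < t < dist:  # hit strictly between point and light
                return True
    return False

# A unit sphere at the origin blocks the light for a point below it
print(shadow_ray_test([((0, 0, 0), 1.0)], (0, -3, 0), (0, 3, 0)))  # True
print(shadow_map_test({(5, 5): 0.40}, (5, 5), 0.55))               # True
```

The point of the argument: the ray version is a new kind of query added to the pipeline, while the depth-map version keeps existing as the default path.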


----------



## londiste (Aug 23, 2018)

Vya Domus said:


> What about it? It used to be that you would do depth-testing with z-buffers, and now you can instead use rays. It doesn't replace anything.


OK, what do you mean by replacing? Do you expect a whole new rendering paradigm? As was already said, full-on realtime raytracing is simply not viable.

You are correct: shadows, lighting and reflections are all part of current rendering and are all done on shader units with various cool algorithms. What is currently being done and presented as part of RTX effectively replaces these parts of the existing rendering pipeline with raytraced solutions that are capable of providing a more accurate representation as well as a potential performance uplift (at least in theory), by offloading some work from traditional shader units to dedicated RT/AI units.


----------



## Vya Domus (Aug 23, 2018)

londiste said:


> What is currently being done and presented as part of RTX is effectively replacing these parts



They wish that were the case; unfortunately it's not. It would make things completely unplayable to have all the shading done through ray-tracing. For the time being nothing is replaced completely; that's why this is an addition, not a replacement, and they made that pretty clear too. Z-buffering isn't going anywhere.



londiste said:


> OK, what do you mean by replacing?



You tell me. You said it replaces things; you are going in circles on your own statements.


----------



## londiste (Aug 23, 2018)

Why would all shading have to be done through ray-tracing?


----------



## Vya Domus (Aug 23, 2018)

londiste said:


> Why would all shading have to be done through ray-tracing?





londiste said:


> *shadows, lighting, reflections, all are part of the current rendering*  ... What is currently being done and presented as part of RTX is *effectively replacing these parts of existing rendering pipeline with raytraced solutions   *



You said it. Are you trolling, or do you genuinely not remember anything you say?


----------



## Vayra86 (Aug 23, 2018)

londiste said:


> Why would all shading have to be done through ray-tracing?



You are guessing, but you don't need to guess that RT will only add frame time compared to non-RT shader work. It's blatantly obvious: because RT limits performance, those shaders won't do extra work, because they're waiting for RT calculations. Even the x80 Ti won't alleviate that with its "10 giga rays" (still laughing at Jensen saying this btw, and a crowd buying it like candy was even more hilarious to watch).

If you buy into this, honestly, you just don't get it.


----------



## londiste (Aug 23, 2018)

Nevermind. I give up.


----------



## lewis007 (Aug 23, 2018)

With every passing day AMD loses relevance as a graphics company; in the meantime Nvidia will make a shit ton of money. I never thought I would find myself saying this, but Intel's line of graphics cards can't come soon enough.


----------



## ShurikN (Aug 23, 2018)

evernessince said:


> I wouldn't say that.  Going from 14nm to 7nm should give Vega a significant performance boost.


Vega is a lost cause for gaming at the moment; a die shrink won't make it any better. It was made for compute from the get-go.
Navi, on the other hand, is a gaming GPU first and foremost (if the PS5 rumours are true). I wish AMD would focus on 7nm Navi for 2019 and forget about Vega/Polaris altogether.
nVidia won't come out with anything interesting until the same time frame, and Turing is just a stopgap. It's nothing revolutionary like nVidia hypes it to be.


----------



## Midland Dog (Aug 23, 2018)

im calling rip


----------



## CheapMeat (Aug 23, 2018)

Vya Domus said:


> You seriously think the 2060, 2050 and whatever else are going to be as cheap as they used to be ? The massive price hike is going to trickle down to all of their products.
> 
> It's absolutely baffling that you wouldn't see this as an issue.



IF I DON'T GET MY RTX CARD I'M GOING TO LITERALLY DIE!!! REEEEEEE!!!


----------



## cyneater (Aug 23, 2018)

TheinsanegamerN said:


> Yes, all those special features, that AMD fans were screaming would save AMD. DX12, async, Mantle, vulkan, tressFX, the list goes on.
> 
> .



Vulkan works on Windows when AMD fixes the drivers...
But it works on Linux :S


----------



## Super XP (Aug 23, 2018)

john_ said:


> Many were hoping AMD would produce something good so they could go and buy cheaper Intel and Nvidia hardware. For me, I am glad to see AMD throwing R&D money where it will make money, not where people would end up laughing in its face, saying "Thank you for helping us buy cheaper products from your competitors" and adding that "AMD hardware is for the poor".
> 
> It's obvious that we are in a "Bulldozer" era for GPUs, so Nvidia will go unchallenged for the next 2-3 years. You want better GPUs from AMD and better competition? Suggest Ryzen as an option to those who ask you for hardware advice.



The thing is, AMD GPUs may be behind in performance, but that doesn't mean they can't play games.

My RX 580 plays games at 2K high settings with absolutely no issues.

Nvidia obviously has a lead in GPUs, and for good reason; that's their strength. AMD competes in a lot more segments than Nvidia, CPUs being the big one.

Hopefully Navi helps AMD steer in the right direction. GPUs? At least for now, all they need is solid price/performance and low power draw, until they get their ducks in a row.

Ryzen is a beast. Threadripper was a very smart move that caught the competition by surprise. Love it.


----------



## carex (Aug 23, 2018)

ShurikN said:


> Vega is a lost cause for gaming at the moment; a die shrink won't make it any better. It was made for compute from the get-go.
> Navi, on the other hand, is a gaming GPU first and foremost (if the PS5 rumours are true). I wish AMD would focus on 7nm Navi for 2019 and forget about Vega/Polaris altogether.
> Nvidia won't come out with anything interesting until the same time frame, and Turing is just a stopgap. It's nothing revolutionary like Nvidia hypes it to be.


I strongly believe Navi is similar to Vega... and there is no such thing as a different gaming GPU, only gaming drivers.
Vega, for me, is the backbone of all upcoming GPUs, as it uses Infinity Fabric internally in Vega 56/64.
It automatically optimises all games for Infinity Fabric, like CrossFire... and as for power efficiency, the 2500U is a serious example of what Vega can really do.
But yeah, they have to use Infinity Fabric externally as well, otherwise AMD will be dead.


----------



## Morgoth (Aug 23, 2018)

I hope this new Vega gets a low price, around 300 USD, not what Nvidia is doing now... the RTX 2070 starts at $499, WTF.
I miss the days when 3dfx was alive, offering competition between ATI, Nvidia, and 3dfx.
I hope Navi pulls it off,
and that Intel gets a good enough GPU to compete in 2020. I'm done with these overpriced products.


----------



## R0H1T (Aug 23, 2018)

lewis007 said:


> With every passing day AMD loses relevance as a graphics company; in the meantime Nvidia will make a shit ton of money. I never thought I would find myself saying this, but Intel's line of graphics can't come soon enough.


As long as AMD has the consoles, which they do for the next gen, they're not getting any more irrelevant today than they were yesterday.
It's only the market share that's shifting, but that could change direction pretty quickly in the next year or so, as we've seen in the past with CPUs.


----------



## efikkan (Aug 23, 2018)

yogurt_21 said:


> I get it "Navi" as in we will navi make this gpu available for desktop gamers, navi.
> 
> Navi is supposed to be for the PS5 I thought... so what no love for desktops?


Desktop gaming GPUs are no longer the focus for AMD; their focus is custom SoCs, and the desktop gets whatever they can easily re-purpose.



Vya Domus said:


> You seriously think the 2060, 2050 and whatever else are going to be as cheap as they used to be ? The massive price hike is going to trickle down to all of their products.
> 
> It's absolutely baffling that you wouldn't see this as an issue.


The price hike is quite a bit over Pascal, but not the highest we've seen (the GeForce 8800 Ultra was over $1,000 corrected for inflation).
Still, we have to acknowledge that MSRPs have been a little low lately, especially for AMD, with expensive choices like HBM and large dies. You might remember they tried to sell Vega at $100 over MSRP, and that's probably no accident. The cost of competing is probably one of the reasons why AMD is targeting "Vega level performance" with Navi, instead of more ambitious designs. The danger is that they will soon barely have any products in even the mid-range, leaving them with the very low-margin low-end market.



Super XP said:


> The thing is, AMD GPUs may be behind in performance, but that doesn't mean they can't play games.
> 
> My RX 580 plays games at 2K high settings with absolutely no issues.


It's fine that you are satisfied with your investment, but that's irrelevant when it comes to the competitiveness of a product. Even with Polaris/Vega vs. Pascal, AMD has the inferior choices throughout the mid-range. Surely many will be satisfied with, e.g., an RX 580, but what good is that when there are better choices?

Buyers should always go with the choice they believe to be the best deal within their budget. Even prior to Turing, Polaris/Vega was a hard sell unless the buyer had very specific requirements. With the availability of Turing this is going to get much worse. AMD's top model, Vega 64, will soon be competing with an RTX 2060 at ~$300 and half the TDP. AMD simply can't make a profit then.



R0H1T said:


> As long as AMD has the consoles, which they do for the next gen, they're not getting any more irrelevant today than they were yesterday.


They are lagging further and further behind. If this continues, they'll lose future console deals as well.


----------



## JRMBelgium (Aug 23, 2018)

More than 60% of all gamers still play at 1080p. My Vega 56 runs all the latest games just fine at Ultra, 60 fps+. Battlefield 1, for example, runs at 100+ fps with High settings and 140% resolution scale.
I've borrowed a 1080, and with my Vega 56 overclocked, we're talking about a 5 fps difference in BF1. So why would I need anything faster at the moment?

And it's not just me; the 1050 Ti and the 1060 are still the most popular graphics cards today. And AMD still has the RX 4xx and RX 5xx to compete with those cards.

The fact that AMD has nothing to compete with the 1080 Ti or newer just means that they have nothing in the market for the 5% of gamers who can actually afford this hardware. I think some of you are a little bit too dramatic.

I'm convinced that AMD will release a new gamer's GPU next year, and I'm honestly convinced that it will only provide 1080 Ti performance (perhaps a tiny bit faster), but for $500 and not for the crazy prices Nvidia is charging. Keep in mind that their research budget is seriously limited compared to Nvidia's.

I've waited 4 years to replace my 290X with the Vega 56. And I don't mind waiting another year for the next decent upgrade. When I upgrade, I want at least a 25-50% performance increase. I'm sticking with AMD. Not once have I ever had any hardware or software issues (except for a faulty driver, perhaps).

If you have a Vega 56 or 1070 Ti, just wait it out and then decide if you want to upgrade to Nvidia's 2xxx or AMD's next gen. The ones that upgrade now to Nvidia's 2xxx will end up paying twice the amount that people will pay a year from now.

If there are games at the moment that cannot run at 60 fps (minimum framerate) on a Ryzen 2700X combined with a Vega 56, then it's a crappy game to begin with. Because that hardware is more than powerful enough for beautiful modern graphics at high framerates.


----------



## ppn (Aug 23, 2018)

I can't stand anything less than 1440p on 24".

Just cut the memory to 2048-bit and release it already.

A 27% bump in memory speed from the new HBM2 (945 → 1200 MHz), plus a 37% core overclock on 7 nm, and this baby is 4K-ready, just like the 1080 Ti/2080.

What are you waiting for, AMD? Tell me.
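For what it's worth, the arithmetic behind those two figures can be sanity-checked; here is a minimal sketch, where the linear clock-to-performance scaling (and the 4K-ready conclusion) is pure speculation on the poster's part, not a measurement:

```python
# HBM2 per-pin speed bump quoted above: 945 MHz -> 1200 MHz.
mem_old, mem_new = 945, 1200
mem_gain = mem_new / mem_old - 1
print(f"Memory speed gain: {mem_gain:.0%}")  # ~27%, matching the post

# If a hypothetical 7 nm part clocked 37% higher, and performance
# scaled linearly with core clock (a best-case assumption),
# a card starting at 1.00x relative performance would land near:
core_gain = 0.37
relative_perf = 1.0 * (1 + core_gain)
print(f"Implied relative performance: {relative_perf:.2f}x")  # 1.37x
```

In practice performance rarely scales 1:1 with core clock, so 1.37x is an upper bound on what that overclock alone could deliver.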


----------



## efikkan (Aug 23, 2018)

ppn said:


> A 27% bump in memory speed from the new HBM2 (945 → 1200 MHz), plus a 37% core overclock on 7 nm, and this baby is 4K-ready, just like the 1080 Ti/2080.


A 37% overclock would be too optimistic for Vega 20; AMD would have to cut the FP64 support then. Also, the production volume on 7 nm would still be very low.


----------



## x86overclock (Aug 23, 2018)

R0H1T said:


> Is "Navi" the next gaming chip, end of the line for GCN?


Yes, mainly because of Nvidia. Nvidia does not have Asynchronous Compute hardware; instead they have their own proprietary simulated Async Compute path, which is used by most game developers because they use Nvidia hardware to develop those games. When you use Nvidia hardware to develop a game you can only use CUDA. CUDA automatically implements Nvidia optimization instructions, including their proprietary simulated Async Compute, which only benefits GeForce cards and hinders Radeon performance. Radeon's GCN relies on Microsoft DX12's Asynchronous Compute, which is not implemented in 95% of games out there. In the titles that do use DX12's Asynchronous Compute, like Forza 7 and Wolfenstein II: The New Colossus, both the RX Vega 56 and RX Vega 64 surpass the Nvidia GeForce 1080 Ti in performance by almost 25%. Because of this, AMD will not be pursuing Asynchronous Compute after Navi in the gaming segment, but they will keep it in the professional segment.



Morgoth said:


> I hope this new Vega gets a low price, around 300 USD, not what Nvidia is doing now... the RTX 2070 starts at $499, WTF.
> I miss the days when 3dfx was alive, offering competition between ATI, Nvidia, and 3dfx.
> I hope Navi pulls it off,
> and that Intel gets a good enough GPU to compete in 2020. I'm done with these overpriced products.


I have confidence Intel will do an outstanding job on their discrete GPUs. But Vega 7nm won't be coming to the gaming segment, as stated above. Navi 7nm will be coming to the gaming segment in 2019, and rumors suggest it may appear in early spring.


----------



## Basard (Aug 23, 2018)

I'm not sure what all the fuss is about with people needing to "stop buying expensive cards..." Just look at the latest Steam survey, FFS. Just because "nobody" is buying AMD cards doesn't mean that they aren't still using shitty cards. Hell, if not for so much overtime this year, I'd still be using a 780 that I paid a friend $100 for. According to the survey, just over 3.5% of people are using a 1080 or 1080 Ti. Yeah, the world doesn't revolve around Steam, but it's a decent enough indicator.

Who knows what the hell AMD is doing... Get some HBM onto a TR-sized package, give it a single CCX (8c/16t) and a real Vega GPU. They can call it X599.


----------



## x86overclock (Aug 23, 2018)

TheinsanegamerN said:


> Yes, all those special features, that AMD fans were screaming would save AMD. DX12, async, Mantle, vulkan, tressFX, the list goes on.
> 
> Special features dont matter unless you can get most developers to use it, and only nvidia's gameworks has seen such success. For 5 years "special features" were going to be AMD's ace up their sleeve, and for 5 years Nvidia has dominated them on sales. AMD needs to focus less on special features they cant support and more on producing fast GPUs.
> 
> ...


AMD is not targeting the high-end market with their next-gen Navi. Navi is taking the Polaris approach: selling a GPU equivalent to a GeForce GTX 1080 in performance and only charging $300 for it. That might change soon, though, because there is going to be a big price drop on the existing GeForce GTX 1000 series in order to move inventory and make room for the new RTX 2000 series.


----------



## evernessince (Aug 24, 2018)

ShurikN said:


> Vega is a lost cause for gaming at the moment; a die shrink won't make it any better. It was made for compute from the get-go.
> Navi, on the other hand, is a gaming GPU first and foremost (if the PS5 rumours are true). I wish AMD would focus on 7nm Navi for 2019 and forget about Vega/Polaris altogether.
> Nvidia won't come out with anything interesting until the same time frame, and Turing is just a stopgap. It's nothing revolutionary like Nvidia hypes it to be.



Vega 56 is a decent gaming GPU: 1070 Ti performance and decent power consumption. They can gain at least 35% just from the node jump, and reduce power consumption and die size. That's 2070 performance, very likely at the same power consumption. They can drop the price to $350 thanks to the reduced size. That's a hell of a lot better than the $600 Nvidia is charging for the 2070.

Not much is known about Navi; AMD has been extremely quiet about it.
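The claim above can be sketched in a few lines; note the 35% node uplift, the 1070 Ti baseline, and both price points are the poster's estimates, not measurements:

```python
# Hypothetical baseline: Vega 56 ≈ GTX 1070 Ti performance (call it 1.00x).
vega56 = 1.00
node_gain = 0.35  # poster's estimated uplift from the 14 nm -> 7 nm shrink
shrunk = vega56 * (1 + node_gain)
print(f"Shrunk Vega 56 relative performance: {shrunk:.2f}x")  # 1.35x

# Price/performance at the quoted prices, taking the poster's premise
# that both cards would land at roughly the same performance tier:
for name, price in [("7nm Vega 56 (hypothetical)", 350), ("RTX 2070", 600)]:
    print(f"{name}: {shrunk / price * 1000:.2f} perf per $1000")
```

On those assumptions the hypothetical shrink would deliver about 1.7x the price/performance of the 2070, which is the crux of the argument.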


----------



## warrior420 (Aug 24, 2018)

Sounds good to me. For some reason people forget that AMD has to get this architecture into the hands of developers first, i.e. their Radeon Pro GPU line. Developers need to be able to build around it first and apply the new technologies to the new software/hardware systems. And of course this is all for compatibility's sake. These things take time. The wait will be worth it. My Vega 64 is doing great for me.


----------



## JRMBelgium (Aug 24, 2018)

I think all Vega users are very satisfied with their product. Most of them will upgrade to AMD again next year.


----------



## londiste (Aug 24, 2018)

Jelle Mees said:


> I think all Vega users are very satisfied with their product. Most of them will upgrade to AMD again next year.


Are you sure?
130 W more power for around the same performance as a GTX 1080. Also, with the custom cards coming to market late, a lot of Vega cards are reference. That cooler is noisy as hell, even compared to GeForce's reference blower.

Mining and GPGPU stuff is where Vega is awesome. But where gaming is concerned, Vega is simply outclassed.
Now when GTX1080s and GTX1070Tis are around 50€ cheaper than Vega 64 and Vega56 respectively (at least in Europe) I honestly cannot see the appeal.

Turing is out and seems to have the usual 20-25% generational performance boost, with a huge price increase. AMD might still reconsider bringing 7nm Vega to the consumer space. We have not heard anything about the frequency potential, but 7nm should bring a considerable uplift, so there is a real chance.


----------



## JRMBelgium (Aug 24, 2018)

londiste said:


> Are you sure?
> 130 W more power for around the same performance as a GTX 1080. Also, with the custom cards coming to market late, a lot of Vega cards are reference. That cooler is noisy as hell, even compared to GeForce's reference blower.
> 
> Mining and GPGPU stuff is where Vega is awesome. But where gaming is concerned, Vega is simply outclassed.
> ...



I have a Vega 56. Performance per watt is better than on a GTX 970 or 980 Ti. Why was there no drama when Nvidia had this performance per watt? Noise is about 3 dBA higher than a 1080 Ti; a good case can compensate for this.

Vega 64 is another story. But in all honesty, anyone who does a little bit of research before purchase knew that with a simple BIOS flash you get Vega 64 performance...

And normally the AMD cards should not be more expensive; the mining hype makes the Vega 56 more expensive... Vega outclassed? Not really. People just expected it to beat Nvidia, which is not realistic with AMD's budget.


----------



## londiste (Aug 24, 2018)

Jelle Mees said:


> I have a Vega 56. Performance per watt is better than on a GTX 970 or 980 Ti. Why was there no drama when Nvidia had this performance per watt? Noise is about 3 dBA higher than a 1080 Ti; a good case can compensate for this.
> 
> Vega 64 is another story. But in all honesty, anyone who does a little bit of research before purchase knew that with a simple BIOS flash you get Vega 64 performance...
> 
> And normally the AMD cards should not be more expensive; the mining hype makes the Vega 56 more expensive... Vega outclassed? Not really. People just expected it to beat Nvidia, which is not realistic with AMD's budget.


This is a simple question of timeline:
2014: Maxwell - GTX980/GTX970
2016: Pascal - GTX1080
2017: Vega - Vega64/Vega56

The 1080 Ti is a good 25-30% faster than Vega 64, and 30-35% compared to Vega 56 (Edit: TPU reviews' performance summary says 43% faster than Vega 56 and 28% faster than Vega 64 at 1440p).
The 1080 Ti also uses less power than Vega 64, about 50 W less.

Mining hype was not and is not what makes Vega more expensive. AMD needs to retain some profit margin for Vega. GTX1080/GTX1070Ti are cheaper to produce.

Budget is not all that relevant here. All the GPUs are manufactured in the same foundry. Just look at the objective measurements and specs of Vega compared to GP102/GP104 and you see why it was expected to perform roughly at the level of 1080Ti.

Edit:
Specs. Homework is to figure out which GPU is which.
Die size: 314 mm² vs 486 mm² (vs 471 mm²)
Transistors: 7.2 b vs 12.5 b (vs 12 b)
TDP/Power: 180 W vs 295 W (vs 250 W)
RAM bandwidth: 320 GB/s vs 484 GB/s (vs 484 GB/s)
FP32 Compute: 8.2 TFLOPS vs 10.2 TFLOPS (vs 10.6 TFLOPS)

Edit2: 7nm would be able to change the die size situation as well as power and hopefully clocks but we do not know how manufacturing prices compare. The last semi-confirmed accounts said 7nm is still more expensive. More expensive is OK when dealing with high price/margin scenarios like Instinct or Radeon Pro but is prohibitive in getting it to consumers.
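For the homework, here is a quick sketch working out perf-per-watt and perf-per-area from the listed specs. Which die is which is my inference from public spec sheets (GP104 = GTX 1080, Vega 10 = Vega 64, GP102 = GTX 1080 Ti), so treat the labels as assumptions:

```python
# Specs as listed in the post: die area, board power, FP32 throughput.
specs = {
    "GP104 (GTX 1080)":    {"die_mm2": 314, "tdp_w": 180, "tflops": 8.2},
    "Vega 10 (Vega 64)":   {"die_mm2": 486, "tdp_w": 295, "tflops": 10.2},
    "GP102 (GTX 1080 Ti)": {"die_mm2": 471, "tdp_w": 250, "tflops": 10.6},
}
for name, s in specs.items():
    # Normalize throughput by power and by silicon area.
    print(f"{name}: {s['tflops'] / s['tdp_w'] * 1000:.1f} GFLOPS/W, "
          f"{s['tflops'] / s['die_mm2'] * 1000:.1f} GFLOPS/mm^2")
```

Run it and the point of the homework falls out: on these numbers, Vega 10 trails both Pascal dies in compute per watt and per square millimetre, which is exactly the margin problem being discussed.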


----------



## pkrol (Aug 24, 2018)

cucker tarlson said:


> *AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client-segment. *
> 
> enthusiast gamers are at the very end of their scope,forget they're gonna prioritize or innovate in that segment.



To be fair, why would they? NVIDIA is entrenched in the minds of enthusiasts. Unless they leapfrog NVIDIA, there is no way they will have great sales. May as well focus on the business segment, which only cares about ROI.


----------



## londiste (Aug 24, 2018)

pkrol said:


> NVIDIA is entrenched in the minds of enthusiasts.


The hell it is. Come out with a better card (objectively or subjectively) and there is nothing entrenched. AMD has RX580 competing very successfully with GTX1060.


----------



## cucker tarlson (Aug 24, 2018)

Jelle Mees said:


> I have a Vega 56. Performance per watt is better than on a GTX 970 or 980 Ti.


lol, no it isn't, not at the resolution you're playing; maybe slightly better at higher resolutions, but they're basically in the same league







OC vs. OC, the 980 Ti will be better.
BTW, look where the GTX 1070 is on 16 nm and GDDR5: 1.5x over a 14 nm HBM2 AMD card...



Jelle Mees said:


> Why was there no drama when Nvidia had this performance per watt?



Do you understand the concept of time and node size? Nvidia was light-years ahead of AMD with Maxwell






That forced AMD to use HBM on the Fury X; that's why it had a measly 4 GB of VRAM and cost $650, and even then it was a good 20% behind the 980 Ti in perf/watt.


----------



## Valantar (Aug 24, 2018)

cucker tarlson said:


> *AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client-segment. *
> 
> enthusiast gamers are at the very end of their scope,forget they're gonna prioritize or innovate in that segment.


While it's (obviously) disappointing that AMD has yet to respond to Nvidia's performance/power gains since Pascal, and competition is _desperately_ needed in the consumer GPU space, what they're doing makes sense in terms of a) keeping AMD alive, and b) letting them bring a truly competitive product to market _in time_.

Now, to emphasize: this sucks for us end-users. It really sucks. I would much rather live in a world where this wasn't the situation. But it's been pretty clear since the launch of Vega that this is the planned way forward, which makes sense in terms of AMD only recently returning to profitability and thus having to prioritize heavily what they spend their R&D money on.

But here's how I see this: AMD has a compute-centric GPU architecture, which _still _beats Nvidia (at least Pascal) in certain perf/w and perf/$ metrics when it comes to compute. At the very least, they're far more competitive there than they are in perf/W for gaming (which again limits their ability to compete in the high end, where cards are either power or silicon area limited). They've decided to play to their strengths with the current arch, and pitch it as an alternative to the Quadros and Teslas of the world. Which, as it looks right now, they're having reasonable success with, even with the added challenge that the vast majority of enterprise compute software is written for CUDA. Their consistent focus on promoting open-source software and open standards for writing software has obviously helped this. The key here, though, is that Vega - as it stands today - is a decently compelling product for this type of workload.

Then there's the question of what they could have done to improve gaming performance, as this is obviously where Vega lags behind Nvidia the most. This is an extremely complicated question. According to AMD around launch time, the majority of the increase in transistor count between Polaris and Vega was spent on increasing clock speeds, which ... well, didn't really do all that much. Around 200 MHz (1400-ish to 1600-ish), or ~14%. It's pretty clear they'd struggle to go further here. Now, I've also seen postings about 4096 SPs being a "hard" limit of the GCN architecture for whatever reason. I can't back that up, but at least it would seem to make sense in light of there being no increase between the Fury X and Vega 64. So, the architecture might need significant reworking to accommodate a wider layout (though I can't find any confirmation that this is actually the case). They're not starved for memory bandwidth (given that the Vega 56 and 64 match or exceed Nvidia's best). So what can they improve without investing a massive amount of money into R&D? We know multi-chip GPUs aren't ready yet, so ... there doesn't seem to be much. They'll improve power draw and possibly clock speeds by moving to new process nodes, but that's about it.

In other words: it seems very likely that AMD needs an architecture update far bigger than anything we've seen since the launch of GCN. This is a wildly expensive and massively time-consuming endeavor. Also note that AMD has about 1/10 the resources of Nvidia (if that), and have until recently been preoccupied with reducing debt and returning to profitability, all while designing a from-scratch X86 architecture. In other words: they haven't had the resources to do this. Yet.

But things _are_ starting to come together. Zen is extremely successful with consumers, and is looking like it will be the same in the enterprise market. Which will give AMD much-needed funds to increase R&D spending. Vega was highly competitive in compute when it launched, and still seems to do quite well, even if their market share is a fraction of Nvidia's. It's still bringing in some money. All the while, this situation has essentially forced AMD to abandon the high-end gaming market. Is this a nice decision? No, as a gamer, right now, I don't like it at all. But for the future of both gaming and competition in the GPU market in general, I think they're doing the right thing. Hold off today, so that they can compete tomorrow. Investing what little R&D money they had into putting some proverbial lipstick on Vega to sell to gamers (which likely still wouldn't have let them compete in the high end) would have been crazy expensive, but not given them much back, and gained them nothing in the long run. Yet another "it can keep up with 2nd-tier Nvidia, but at 50W more at $100 less" card wouldn't have gained AMD much of an increased user base, given Nvidia's mindshare advantage. But if prioritizing high-margin compute markets for Vega for now, and focusing on Zen for really making money allows them to produce a properly improved architecture in a year? That's the right way to go, even if it leaves me using my Fury X for a while longer than I had originally planned to.

Of course, it's entirely possible that the new arch will fall flat on its face. I don't think so, but it's _possible_. But it's far more certain that yet another limited-resource, short-term GCN refresh would be even worse.


----------



## cucker tarlson (Aug 24, 2018)

But do Zen sales improve RTG's R&D budget at all? They split into a separate branch under a separate name. Seems not. If they cared about enthusiast gamers they'd release a bigger Polaris on GDDR5X, a card that is gaming-oriented; that'd be sure to be better than Vega, which is not gaming-oriented at all. A 1.45x performance difference from going to 2.1x the die size and HBM2? Are you kidding me? Nvidia hit a 2.05x performance increase from the 1060 to the 1080 Ti with a 2.3x die size increase and GDDR5X. Polaris seems like a better gaming architecture than Vega, despite slightly lower clocks. Then they release a compute-oriented Vega and do blind tests for gamers using FreeSync as a bargaining chip. Are you effin' serious...
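That scaling argument can be worked through explicitly; the ratios below are the poster's rough community figures, not measured numbers:

```python
# Performance ratio vs. die-area ratio for each scaling step, as quoted.
cases = {
    "Polaris 10 -> Vega 10":            {"perf": 1.45, "area": 2.1},
    "GP106 -> GP102 (1060 -> 1080 Ti)": {"perf": 2.05, "area": 2.3},
}
for name, c in cases.items():
    # Performance gained per unit of extra silicon spent.
    efficiency = c["perf"] / c["area"]
    print(f"{name}: {efficiency:.2f} perf ratio per area ratio")
```

On these figures Nvidia converted extra die area into gaming performance roughly 30% more efficiently than AMD did, which is the poster's point; the caveat is that Vega's extra area also bought compute features that don't show up in gaming benchmarks.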


----------



## FordGT90Concept (Aug 24, 2018)

Lisa Su is in charge of AMD and RTG.  She knows that RTG has fallen behind while AMD has catapulted ahead.  She also knows that APUs sell well so having a good graphics core to attach to CPUs is important to AMD's CPU business.

I don't think Navi is GCN based.  I think it's a new architecture which is why AMD has been quiet other than about Vega 20.  Vega 20 will likely be AMD's last compute oriented card for a long while. Navi is focused on consumers and consoles.


----------



## cucker tarlson (Aug 24, 2018)

FordGT90Concept said:


> Lisa Su is in charge of AMD and RTG.  She knows that RTG has fallen behind while AMD has catapulted ahead.  She also knows that APUs sell well so having a good graphics core to attach to CPUs is important to AMD's CPU business.
> 
> I don't think Navi is GCN based.  I think it's a new architecture which is why AMD has been quiet other than about Vega 20.


That's the reason AMD only focuses on gaming in the mid-range, console, and APU segments. Enthusiast gaming has not been their focus with Vega, and will not be as long as they continue using it, which seems likely given how well Vega does at compute tasks, despite the gargantuan power draw. They're refining it with 7nm; it'll be much better than Vega 10, and that can only mean one thing: they plan to sell fewer Vegas at $600 for gamers and more Vegas at $1000+ for HPC.


----------



## Valantar (Aug 24, 2018)

cucker tarlson said:


> But do Zen sales improve RTG's R&D budget at all? They split into a separate branch under a separate name. Seems not. If they cared about enthusiast gamers they'd release a bigger Polaris on GDDR5X, a card that is gaming-oriented; that'd be sure to be better than Vega, which is not gaming-oriented at all. A 1.45x performance difference from going to 2.1x the die size and HBM2? Are you kidding me? Nvidia hit a 2.05x performance increase with a 2.3x die size increase and GDDR5X. Polaris seems like a better gaming architecture than Vega, despite slightly lower clocks.


So you think AMD's board and CEO would just leave RTG to die without even trying to save it if it couldn't sustain itself? Yeah, that doesn't strike me as likely. The GPU and compute markets are too big to abandon when you're the 2nd-largest player in the market, even if that's a 2-player market. Not to mention that the separation of RTG and whatever the CPU-making division is called is still only an administrative division within AMD. Both affect AMDs success and finances, both put money into (or take money out of) what is ultimately the same piggy bank. If one division is struggling and needs heavy R&D investments, and one is doing well and doesn't need as much, it would be pretty damn stupid not to shuffle that money over.

And you're entirely right: RTG could have put out a cheaper-to-produce "big Polaris" with GDDR5X, which would likely have been a very compelling gaming card - but inherently inferior to Vega in the higher-margin enterprise segment (lack of RPM/FP16 support, no HBCC). Not to mention that - even with AMD's lego-like architectures - designing and getting this chip into production (including designing a GDDR5X controller for the first time, which would likely _only _ see use in that one product line) would have been very expensive. Not even remotely as expensive as Vega or a new arch, but enough to make a serious dent - and thus push development of a new arch back even further. Short-term gains for long-term losses, or at least postponing long-term gains? Yeah, not the best strategy.


----------



## cucker tarlson (Aug 24, 2018)

Valantar said:


> So you think AMD's board and CEO would just leave RTG to die without even trying to save it if it couldn't sustain itself? Yeah, that doesn't strike me as likely. The GPU and compute markets are too big to abandon when you're the 2nd-largest player in the market, even if that's a 2-player market. Not to mention that the separation of RTG and whatever the CPU-making division is called is still only an administrative division within AMD. Both affect AMDs success and finances, both put money into (or take money out of) what is ultimately the same piggy bank. If one division is struggling and needs heavy R&D investments, and one is doing well and doesn't need as much, it would be pretty damn stupid not to shuffle that money over.
> 
> And you're entirely right: RTG could have put out a cheaper-to-produce "big Polaris" with GDDR5X, which would likely have been a very compelling gaming card - but inherently inferior to Vega in the higher-margin enterprise segment (lack of RPM/FP16 support, no HBCC). Not to mention that - even with AMD's lego-like architectures - designing and getting this chip into production (including designing a GDDR5X controller for the first time, which would likely _only _ see use in that one product line) would have been very expensive. Not even remotely as expensive as Vega or a new arch, but enough to make a serious dent - and thus push development of a new arch back even further. Short-term gains for long-term losses, or at least postponing long-term gains? Yeah, not the best strategy.


Sad,but true.


----------



## FordGT90Concept (Aug 24, 2018)

Vega was made at GloFo. I'm not sure why, but that's likely the primary reason Vega is relatively power-hungry compared to Pascal. Vega 20 not only has architectural tweaks; it is on a process that's half the size. Depending on a number of factors, it could give the RTX 2080 Ti a run for its money. We already know that Vega 10 was memory-starved, and Vega 20 remedies that by doubling the bandwidth. That change by itself likely makes it competitive with the GTX 1080 Ti/RTX 2080.


----------



## londiste (Aug 24, 2018)

Valantar said:


> But here's how I see this: AMD has a compute-centric GPU architecture, which _still _beats Nvidia (at least Pascal) in certain perf/w and perf/$ metrics when it comes to compute. At the very least, they're far more competitive there than they are in perf/W for gaming (which again limits their ability to compete in the high end, where cards are either power or silicon area limited). They've decided to play to their strengths with the current arch, and pitch it as an alternative to the Quadros and Teslas of the world. Which, as it looks right now, they're having reasonable success with, even with the added challenge that the vast majority of enterprise compute software is written for CUDA. Their consistent focus on promoting open-source software and open standards for writing software has obviously helped this. The key here, though, is that Vega - as it stands today - is a decently compelling product for this type of workload.


It looks like Instinct is their main sales vehicle for Vega. AMD very cleverly sidestepped the challenges you list and found a niche: FP16 is their key to this. Nothing in even remotely the same price range does FP16 as well. It is useful for AI training, and they capitalize heavily on it.

Some leaks for 7nm Vega have hinted at additional specialized compute units, similar to the tensor cores in Nvidia's Volta/Turing. These are suspected to be aimed at AI (training). That would actually make a lot of sense, especially with the roughly 2x transistor density of the 7nm process, and without having to alter the base architecture and its limits (yet).


----------



## ppn (Aug 24, 2018)

We need AMD to release a 7nm Vega 64 with GDDR6, and that is all there is to it.


----------



## Valantar (Aug 24, 2018)

cucker tarlson said:


> That's the reason AMD only focuses on gaming in the mid-range, console and APU segments. Enthusiast gaming has not been their focus with Vega, and will not be as long as they continue using it, which seems unlikely to change given how well Vega does at compute tasks, despite the gargantuan power draw. They're refining it with 7nm, it'll be much better than Vega 10, and that can only mean one thing - they plan to sell fewer Vegas at $600 for gamers, more Vegas at $1000+ for HPC.


Which is the "smart" thing for them to do. $600 Vegas for gamers don't make sense now anyhow, even if they were 7nm Vegas. Even if they gained 20% clock speed (unlikely) at the same power, they wouldn't be competitive with Turing, and AMD would have to keep selling their chips at lower margins than Nvidia in the consumer space (although less of a disadvantage given the massive die size of TU102 and TU104 with the RT cores).

If AMD/RTG can live out this lull by selling Polaris 10/20/30/whatever-minor-tweak for ever-lower prices at roughly performance-per-$ parity with Nvidia, while putting as much effort and money as possible into making their upcoming arch as good as possible, that's a far better solution than churning out half-assed attempts at putting lipstick on Polaris by spreading their limited R&D funds thinner.



FordGT90Concept said:


> Vega was made at GloFo. I'm not sure why they did but that's likely the primary reason why Vega is relatively power hungry compared to Pascal.  Vega 20 not only has architectural tweaks, it is on a process that's half the size.  Depending on a number of factors, it could give RTX 2080 Ti a run for its money.


Doubtful. If the reports I've seen of 64 CUs being a hard limit in GCN are to be trusted, that just means a smaller die with higher clocks or less power (or both). If we can trust Nvidia's numbers somewhat (and ignore their faux-AA tensor core stuff), AMD would need a 50%+ performance increase to beat the 2080, let alone the Ti. That's not happening, even with a 14-to-7nm transition.

Also, the process isn't the key issue - the GTX 1050 and 1050Ti are made by GloFo on the same process as AMD, and roughly match the other Pascal cards for perf/W. This is mainly an arch issue, not a process issue.



ppn said:


> We need AMD to release 7nm VEGA64 with GDDR6. and that is all there is to it.


Why? The lower price of the RAM would likely be offset by designing a new chip with a new memory controller and going through the ~1/2-year process of getting it fabbed. Zero performance gain, _at best_ a $100 price drop, and that's if AMD eats the entire design-to-silicon cost. Not to mention that total board power draw would increase, forcing them to lower clocks.


----------



## cucker tarlson (Aug 24, 2018)

FordGT90Concept said:


> Vega was made at GloFo.  I'm not sure why they did but that's likely the primary reason why Vega is relatively power hungry compared to Pascal.


Excuses... Polaris and the 1050 Ti are made at GloFo too, yet they don't have such massive power consumption issues. Vega is power hungry primarily because they just wanted more TFLOPS.


----------



## Valantar (Aug 24, 2018)

cucker tarlson said:


> Excuses.... polaris or 1050ti are made at glofo too,yet they don't have such massive power consumption issues. Vega is power hungry primarily cause they just wanted more tflops.


Yes and no. Vega is power hungry because it's a big die pushed as far as it will go in terms of clocks. If you scroll up to your own post #134, you'll see the Vega 64 and 56 straddling the RX 580 and 570 in terms of perf/W. Of course, with HBM, they should have had an advantage (of around 20-30W saved), but that's likely been eaten by pushing clocks even further. Still, Vega and Polaris have very, very similar perf/W overall. There's no reason to believe a "big Polaris" would have been noticeably more efficient - it would just have been cheaper.


----------



## Adam Krazispeed (Aug 24, 2018)

Fluffmeister said:


> A 32GB HBM2 7nm chip is gonna be expensive, it's no surprise they are focusing on the HPC/Pro market where the money is. Volta already has large chunks of the market sewn up and Turing-based Quadros are going to reign supreme in the pro sector... they need 7nm to up their competitiveness.




BUT BUT BUT????? What about yields on 7nm? They can't be 100%, not even 60% functioning dies... not at 7nm (cough, Intel, cough, 10nm, cough). And two of the HBM stacks could be replaced with a larger die with 20-40% more ROPs and TMUs, and AMD could maybe compete against Inte... I mean NVIDIA... oops, lol.

The point is AMD GPUs need MORE ROPs (TMU counts are usually already high, e.g. Fury X and Vega 64), but also the theoretical fill rates doubled at the same base/boost clocks, at a ROP/TMU count of, say, 64 ROPs and 256 TMUs.

Fury X is 67 GPixels/s pixel fill rate and about 267 GTexels/s texture fill rate (I forgot the exact texel fill rate, but it's just an example) at a 1050 MHz GPU clock.

Now, Vega 20 at 7nm: not 64, but more like 96 ROPs. Remember this number: 96 ROPs!

At 64 ROPs the fill rates are only slightly higher because of the core/boost clocks. 64 ROPs at a 1 GHz GPU clock is about 64 GPixels/s; with 2x per-ROP throughput it would be 64 x 2 x 1 GHz = 128 GPixels/s. That's what we need. Then take it up to 96 ROPs and AMD would have a monster card.

Most cards have much higher texel fill rates, but pixel rates need to be almost as high to push all the pixels, especially at 4K and up. 8K resolutions need raw pixel fill rate up in the hundreds; 200-300 GPixels/s or even higher would make a difference. On a card like the Fury X, with the same ~267 GTexels/s texture rate, if you could just double the pixel fill rate I guarantee the GPU would perform 100% better, especially at 4K and higher resolutions!
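For what it's worth, the fill-rate arithmetic behind the Fury X numbers above is straightforward: peak pixel fill rate is ROPs × clock, and peak texture fill rate is TMUs × clock (the helper names are mine; 64 ROPs, 256 TMUs and 1050 MHz are the Fury X's published specs):

```python
def pixel_fillrate_gpixels(rops, clock_ghz):
    # Peak pixel fill rate: one pixel per ROP per clock
    return rops * clock_ghz

def texture_fillrate_gtexels(tmus, clock_ghz):
    # Peak texture fill rate: one texel per TMU per clock
    return tmus * clock_ghz

print(pixel_fillrate_gpixels(64, 1.05))    # Fury X: 67.2 GPixel/s
print(texture_fillrate_gtexels(256, 1.05)) # Fury X: 268.8 GTexel/s
```

The 4:1 TMU-to-ROP ratio is why the texel rate is four times the pixel rate at the same clock.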


----------



## FordGT90Concept (Aug 24, 2018)

Valantar said:


> Doubtful. If the reports I've seen of 64 CUs being a hard limit in GCN are to be trusted, that just means a smaller die with higher clocks or less power (or both).


Not a hard limit, a memory chokepoint limit.  People that mined with the card overclocked the memory and underclocked the core because there's not enough bandwidth to supply 64 CUs.  Fury X was starved too.  Vega 64 only has a little bit more bandwidth than Fury X.



cucker tarlson said:


> Excuses.... polaris or 1050ti are made at glofo too,yet they don't have such massive power consumption issues. Vega is power hungry primarily cause they just wanted more tflops.


12 billion transistors versus, what? 3.3 billion and 5.7 billion?  GloFo 14nm was better suited to smaller chips.


----------



## cucker tarlson (Aug 24, 2018)

Valantar said:


> Yes and no. Vega is power hungry because it's a big die pushed as far as it will go in terms of clocks. If you scroll up to your own post #134, you'll see the Vega 64 and 56 straddling the RX 580 and 570 in terms of perf/W. Of course, with HBM, they should have had an advantage (of around 20-30W saved), but that's likely been eaten by pushing clocks even further. Still, Vega and Polaris have very, very similar perf/W overall. There's no reason to believe a "big Polaris" would have been noticeably more efficient - it would just have been cheaper.


That's because the RX 580 was just going overboard with clocks to gain anything over the 1060. When I said Polaris, I meant the RX 480.


However bad the situation is for those who buy xx80 Nvidia cards, it is still a lot better than for those who stick with AMD in that segment. You can question the pricing of the new cards and how useful RTX will be in the early adoption days, but Nvidia has a new architecture out and makes cards available to gamers instantly. Vega 7nm will be out this year, but it will take a friggin' year for gamers to see one. I still think it's better to shoot rays with Nvidia than to shoot yourself in the foot by waiting for AMD and getting disappointed again.


----------



## jabbadap (Aug 24, 2018)

FordGT90Concept said:


> Lisa Su is in charge of AMD and RTG.  She knows that RTG has fallen behind while AMD has catapulted ahead.  She also knows that APUs sell well so having a good graphics core to attach to CPUs is important to AMD's CPU business.
> 
> I don't think Navi is GCN based.  I think it's a new architecture which is why AMD has been quiet other than about Vega 20.  Vega 20 will likely be AMD's last compute oriented card for a long while. Navi is focused on consumers and consoles.



Not so sure what Navi really is. It might be the last of GCN, or not. Rumors are that Navi is made especially for Sony's next console, the PlayStation 5, which will have either a Ryzen+Navi SoC or a separate Ryzen CPU + Navi dGPU.


----------



## Valantar (Aug 24, 2018)

Adam Krazispeed said:


> (...)


If you're going to get that technical, please pay some attention to punctuation and to presenting your argument. I normally understand this stuff, but I can't make heads or tails of your post.


cucker tarlson said:


> That's cause rx580 was just going overboard with clocks to gain anything over 1060. When I said polaris, I meant rx480.



Yet Vega 56 matches the RX 480 - and isn't clocked as high as the 64. Again: Vega is pushed to its limit in terms of clocks, just like RX 5xx Polaris, and is thus very, very similar in terms of clock scaling and perf/W.



FordGT90Concept said:


> Not a hard limit, a memory chokepoint limit.  People that mined with the card overclocked the memory and underclocked the core because there's not enough bandwidth to supply 64 CUs.  Fury X was starved too.  Vega 64 only has a little bit more bandwidth than Fury X.


Well, that begs the question why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) _would_ make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.


----------



## FordGT90Concept (Aug 24, 2018)

Valantar said:


> Well, that begs the question why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) _would_ make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.


Eh?  Vega 20 is 7nm w/ 4 HBM2 stacks.  Look at the original post of this thread.

Vega 20 has at least 64 CUs, but it's 50/50 on having more than that. It probably depends on whether 64 CUs are still memory starved even with the wider bus.



jabbadap said:


> Not so sure about what navi really is. It might be last of GCN or not. Rumors are that Navi is especially made for Sony's next console Playstation 5, which will have Ryzen+Navi SOC or seperate ryzen cpu + navi dpgu.


Zen in a console is extremely unlikely.  It will definitely be an APU like the XB and PS currently have.  Sony clearly made demands of AMD that Vega wasn't capable of fulfilling.  What demands, though, I don't know.  Those demands have taken Navi off of the GCN path and put AMD on a different path.  GCN may live on as a compute-focused card, but it seems likely that gaming products have been forked to whatever Navi is.


----------



## Valantar (Aug 24, 2018)

FordGT90Concept said:


> Eh?  Vega 20 is 7nm w/ 4 HBM2 stacks.  Look at the original post of this thread.
> 
> Vega 20 has at least 64 CU but it's 50/50 on having more than that.  Depends on whether or not 64 CU is still memory starved with the wider bus probably.
> 
> ...


Apparently two words slipped out of my post there. Let me correct myself:


Valantar said:


> Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) *for gaming* _would_ make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.


There. Make sense now?

As for the latter part of your post: Raven Ridge is also Zen (just like non-APU designs like Summit Ridge). As such, a non-Jaguar APU would be Zen in a console.


----------



## londiste (Aug 24, 2018)

Valantar said:


> Well, that begs the question why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) _would_ make sense.


Delta compression? Both have it but Nvidia's has so far been considered (and measured) to be much more effective.
Bandwidth might be a problem but not severe enough to solve with a solution as expensive as that.



FordGT90Concept said:


> Zen in a console is extremely unlikely.  It will definitely be an APU like XB and PS currently have.


APU is just CPU and GPU on one die/package. Nothing really stops AMD from replacing Jaguar cores with Zen/+/2. AMD can easily do a Raven Ridge with larger GPU and very likely is doing exactly that.


----------



## FordGT90Concept (Aug 24, 2018)

They care more about lower power than they do about CPU prowess.  AMD is likely working on an ultra low power fork of Zen to replace Jaguar and that fork is what will end up in the next gen consoles.  It will not be Zen cores.

Raven Ridge has too much CPU and too little GPU to be used in a gaming console.



Valantar said:


> Apparently two words slipped out of my post there. Let me correct myself:
> 
> There. Make sense now?


Indeed, and as you pointed out, Vega 20 is not intended for gamers at all.


----------



## Valantar (Aug 24, 2018)

FordGT90Concept said:


> They care more about lower power than they do about CPU prowess.  AMD is likely working on an ultra low power fork of Zen to replace Jaguar and that fork is what will end up in the next gen consoles.  It will not be Zen cores.


Zen already scales very, very well to low power, sustaining 2GHz across 4c8t in 15W on 14nm for the 1st revision. Assuming the PS5 will use 7nm and either Zen+ or Zen2, they don't need a bespoke fork. Also, an AMD rep at Hot Chips recently indicated that the APUs (successors to Raven Ridge) will keep scaling to lower power over the following generations without losing performance. AMD designed Zen to fill as wide a space as Intel's Core, and by all accounts they've succeeded.


----------



## jabbadap (Aug 24, 2018)

FordGT90Concept said:


> Eh?  Vega 20 is 7nm w/ 4 HBM2 stacks.  Look at the original post of this thread.
> 
> Vega 20 has at least 64 CU but it's 50/50 on having more than that.  Depends on whether or not 64 CU is still memory starved with the wider bus probably.
> 
> ...



So you think that when they make a custom Zen SoC for some unknown Chinese console manufacturer, they won't do one for Sony?
https://www.anandtech.com/show/1316...han-subor-z-console-with-custom-amd-ryzen-soc



londiste said:


> Delta compression? Both have it but Nvidia's has so far been considered (and measured) to be much more effective.
> Bandwidth might be a problem but not severe enough to solve with a solution as expensive as that.
> 
> APU is just CPU and GPU on one die/package. Nothing really stops AMD from replacing Jaguar cores with Zen/+/2. AMD can easily do a Raven Ridge with larger GPU and very likely is doing exactly that.



DCC and tile-based rasterization are both superior on Nvidia's side. AMD says the ROPs are enough on GCN, but that can make a difference too.


----------



## Valantar (Aug 24, 2018)

FordGT90Concept said:


> Indeed, and as you pointed out, Vega 20 is not intended for gamers at all.


You know you can't disprove a point by refusing to address it, right? To reiterate: if memory bandwidth is the main performance bottleneck of Vega 10, Vega 20 fixes that. (At an additional cost, sure, but with 2x B/W they should be able to increase CU count noticeably, lower clocks, and seriously improve perf/W, which is where they lag the most today.) So they should be able to release a far more powerful *gaming* GPU with Vega 20. Yet they're not doing that, not even when yields improve. Doesn't that say something about B/W not being the bottleneck for gaming? 



FordGT90Concept said:


> Raven Ridge has too much CPU and too little GPU to be used in a gaming console.


Has anyone here argued that we'll see Raven Ridge in a console?


----------



## FordGT90Concept (Aug 24, 2018)

Valantar said:


> Zen already scales very, very well to low power, sustaining 2GHz across 4c8t in 15W on 14nm for the 1st revision.


Jaguar can go as low as 3.9 W.  It will no doubt be Zen-based, but it won't be Ryzen.  For one, the extra transistors SMT requires aren't worth the performance gain for Microsoft/Sony.  They'll want a lean 8c/8t processor over a feature-rich 4c/8t one.



Valantar said:


> Yet they're not doing that, not even when yields improve. Doesn't that say something about B/W not being the bottleneck for gaming?


Nope, they'd rather sell these chips at $2000+ each to compute customers than at <$1000 each to gamers.  Games rarely use 6 GiB of VRAM, never mind 32 GiB.  They would have to sell two SKUs of the chip, one with thicker stacks than the other.  It's a lot of work and a lot of money to go down that path, so Vega 20 focuses only on compute.  AMD is investing its consumer resources in Navi.


----------



## Valantar (Aug 24, 2018)

FordGT90Concept said:


> Jaguar can go as low as 3.9w.


And Zen, given a 2x-or-more IPC advantage over Jaguar, can exceed it in perf/W, even if it might bottom out a bit higher in absolute wattage - especially at 7nm. Both MS and Sony will be looking for actual increases in CPU performance, and more than 8 cores is unlikely, so they'll want Zen, and they'll want it at at least 2 GHz.


FordGT90Concept said:


> It will no doubt be Zen-based but it won't be Ryzen.  For one, the extra transistors SMT requires isn't worth the performance gain for Microsoft/Sony.  They'll want a lean 8c/8t processor over feature-rich 4c/8t.


That's a rather meaningless thing to say. The cores in the PS4 and XBONE aren't "Jaguar" either (IIRC they're "Jaguar-derived"), but the difference is mainly academic. Implementation in a custom SoC requires changes to the design, so it'll never be an identical port, but saying "it'll be Zen, but not Ryzen" is a meaningless distinction (particularly as the marketing name "Ryzen" already encompasses at least three variants of the Zen arch). Nobody here is saying we'll see a straight port of either Raven Ridge or a whole Summit Ridge Zeppelin die to a console. If you're understanding us this way, you're trying pretty hard to misunderstand.




FordGT90Concept said:


> Nope, they'd rather sell these chips at $2000+ each to compute customers over <$1000 each to gamers.  Games rarely use 6 GiB VRAM, nevermind 32 GiB.  They would have to sell two SKUs of the chip, one with thicker stacks than the other.  It's a lot of work and a lot money to go down that path so, Vega 20 focuses only on compute.  AMD is investing their consumer resources on Navi.


You did see that that was my point above, right? That this is a plan to get back to profitability, and they're putting off gaming until they can refresh the arch?

But honestly, I don't doubt for a second that AMD would launch a top-end Vega 20 consumer flagship GPU if they could compete with the 2080Ti. Even if it was a limited-run $1200+ exclusive, it would help win/maintain valuable mindshare while waiting for Navi, not to mention maintain developer interest. But there's no indication that they can. And while sad, that's okay (as sh*t does happen, after all), and they're better off not trying at all than delivering overpriced or half-arsed attempts until Navi can get them back in the game (as that would hurt them in terms of mindshare). If it can't, that's another story, but considering the amount of engineering talent they have and their relations to the console industry, I'm not particularly worried. But we'll have to wait a year or so to see what AMD can bring to the table.


----------



## Citizenx (Aug 25, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD sould just sell radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from nvidia.



Don't be silly. Vega was not a year too late - delayed, yes, but it did deliver, along with Polaris. AMD just isn't focused on high-end gamers at the moment, which is perfectly fine because it would have been a wasted battle. Instead, AMD's Polaris is great on 14nm and does really well in the mid-range and mobile. Vega was initially designed for 7nm for datacenters. The 14nm FinFET version still uses the Vega architecture and provides performance benefits despite being produced on a less favourable GloFo process which does not like higher clock speeds. Still, Vega 64 was the best in FPU-intensive applications and is still wonderful for the Radeon Pro and Instinct products. Optimized code yields better performance compared to the competition; we've witnessed that already.

On the APU and mobile side of things, Vega is incredibly efficient. Efficiency is key because AMD is prepping for 7nm, which will play nicely with such architectures, especially once MCM becomes common practice. The ML, DL and HPC markets are huge, and Vega 20 is going straight for them. MCM designs are the future and Vega is future-proofed. High-end gaming is a tiny piece of the pie, but that will also come once AMD has established leadership in 7nm GPUs, CPUs and the HSA that exists between them. So to shamelessly say AMD should sell off RTG is extremely naive, uneducated and foolish. RTG is in good hands, and has the backing of the world's most powerful CPU business and its engineering teams.


----------



## Basard (Aug 25, 2018)

I wish you guys would argue more with pictures and graphs, maybe some memes.....  All this reading is hurting my ignorant brain.


----------



## RichF (Aug 26, 2018)

RejZoR said:


> Looks like they're gonna keep Vega for compute where it really shines and focus on making Navi gamer focused.


I have no doubt that executives at both companies have been, and are now, looking into ways to maximize sales via crypto mining.

Gamers above the "console" level are a low priority for AMD in particular. It sold out the Vega cards that gamers were mocking so heavily, even though I warned them that Vega would be considered a success by AMD when miners caused it to sell out (which it did). Most gamers couldn't see past their noses to realize that AMD doesn't care who buys their cards beyond which group will bring in the most sales. All the fake complaining from both companies about mining is pure marketing. No corporation is going to be sad about sales that exceed supply, beyond simply wishing there was more supply.

They may not be able to do it, but I have no doubt that people at both companies are trying to create new coins that will cause another crypto boom so they can sell out their products. AMD, in particular, has a strong incentive to try this strategy, since Nvidia's bribing of game developers has given that company quite a bit of an advantage. Also, Nvidia is larger and can afford to cater more to gamers. When you're smaller and poorer you have to focus more, which AMD did. Gamers haven't provided enough demand to move all of AMD/ATI's supply (even when the cards were superior to Nvidia's, as with the 5870). Crypto, although less reliable, offers a sweet target to try to hit again and again. We'll get whatever scraps are left from that market and the prosumer/pro/supercomputing stuff.

Microsoft and Sony also don't want people to stop propping up their artificial BS duopoly in "consoles". Those are just small form factor x86 PCs. The same can be accomplished without paying taxes to MS and Sony, with Linux and Vulkan/OpenGL on the already-standard x86 hardware. And you do pay serious taxes to prop up these "console" walled gardens, like the terrible Jaguar CPU and the absurdity of having to release the same game for three platforms despite them being built on the same x86 hardware standard. So there may be some payola to AMD on the side from both MS and Sony to not make it affordable to buy into the PC gaming platform. Monopolies are even more extreme than duopolies in terms of the controlling company's ability to raise prices artificially, which is what we are seeing right now from Nvidia.



FordGT90Concept said:


> Jaguar can go as low as 3.9w.


Jaguar is a garbage CPU that exists in the market for two reasons only:

1) Intel didn't bother to create anything decent to compete with it in time.
2) (most importantly) Sony and Microsoft have a duopoly.

Sony and MS didn't need Jaguar to go to 3.9W. Gamers would have been better off with a Piledriver clocked low than with Jaguar, especially for a second console iteration. Artificially higher prices go hand-in-hand with duopolies, monopolies, and cartels. Obsolete and inadequate products that wouldn't survive on their own merits in a competitive market are what get sold by monopolists because they make more money, even though they deliver less to consumers than what the consumers demand. The beauty, for corporations, of monopoly (and duopoly, to a lesser, but still very important extent) is consumer capture. Captured consumers have no choice beyond either paying too much or being shut out of a market.


----------



## Valantar (Aug 26, 2018)

RichF said:


> ...


You're not entirely wrong, but considering that (native) games on Linux on average perform significantly worse than the same games on Windows, which is again outperformed by consoles at equivalent hardware levels (or at least as close as you can get), it's not quite as easy as you're making it out to be. Consoles are walled gardens, but that brings with it the benefit of low-level hardware access (more so than Vulkan or DX12 seem able to provide) and the benefit of developing for a fixed hardware platform, and thus learning how to best utilize its strengths and avoid its weaknesses over time - just look at how good late X360 or PS3 games look compared to the ones launched within the first year of those consoles' life cycles. Also, consoles are _very_ cheap for what they offer (a very decent gaming experience for $250, just add any TV or monitor? Yeah, you're not getting that in a PC), and they offer a level of convenience that is worth quite a lot to a lot of people. In short: consoles aren't going anywhere, and not because of unfair practices from MS or Sony. I prefer PC gaming, but it is undoubtedly both more expensive and more complicated. I don't mind. But I'm not getting rid of my consoles any time soon either, even though they don't see much use compared to my PC. Each has its distinctive strengths, and the price advantage is most definitely on the console side. So much for this being a product of a duopoly, I suppose?

As for monopolies, duopolies and cartels - there's no doubt that unfair business practices abound in the tech world (as they do in all major fields of international business, as there's no regulation or oversight to speak of), but I think you're going a bit too far here. Jaguar, while indeed being a garbage CPU, exists because it was a cheap, low silicon area, low-power multi-core X86 CPU which AMD could fit into an APU design at the right time and place. Intel's Atom designs of the same era were no faster, but couldn't be fit into an APU, so they weren't an option. The world has moved on in the 5-6 years since, but consoles are slow in terms of hardware development (the Xbox 360 still used its 2005-era PowerPC CPU in 2013 when the XBOne launched). There's nothing inherently wrong with this, even if it's outdated tech by today's standards. A faster console hardware replacement cycle doesn't make sense economically (most people wouldn't buy new consoles that quickly, and small developers would struggle to adapt to new architectures at that pace). There's no doubt that there's a lot to be gained from Jaguar-derived cores being phased out of consoles, but on the other hand, it's _amazing_ what developers are able to do with 8 of these garbage cores. I fully welcome the next generation's move to Zen, but every design like this necessitates tradeoffs, and for 2012-13, Jaguar was an excellent choice.

As for AMD and Nvidia trying to invent new cryptocurrencies - it might be, but that sounds pretty out there, in particular in the current climate. Nobody is going to be interested in the new, "hot" crypto that's easy to mine if nobody is willing to buy or sell it. Also, difficulty is not what's behind the bubble bursting, but the simple fact that a system based purely on gambling and BS claims of value isn't sustainable over time. In other words, inventing a new currency changes nothing. If anything, there's a glut of currencies, and they're not helping anything. There's no doubt that both Nvidia and AMD have profited nicely off the crypto boom, and no doubt enjoyed this, but they've known from the get-go that this wasn't a sustainable market, nor one that encourages long-term sales or brand loyalty.


----------



## WikiFM (Aug 31, 2018)

After some hours reading about process nodes, I discovered that since 2012 each foundry has defined its own process node. I concluded that TSMC is ready with its 7 nm because its 7 nm is simpler than Intel's 10 nm. How did I get that idea? Well, according to the ITRS's official guidelines on the physical properties of transistors, the specs haven't been fulfilled by TSMC since its 16 nm, which is more similar to its 20 nm than to the official 16/14 nm spec. Even its 12 nm is more similar to its 20 nm than to the official 16/14 nm spec.

But TSMC hasn't been the only one cheating: Samsung's 10 nm is actually 14 nm according to the official specs, and Samsung's 14 nm is actually more similar to its 20 nm than to the official 16/14 nm spec too. I feel so stupid for not knowing all this before; I really believed that each process node was the same for every foundry.

Sources: https://en.wikichip.org/wiki/WikiChip Nodes 22 nm to 7 nm
https://www.semiconductors.org/clientuploads/Research_Technology/ITRS/2015/0_2015 ITRS 2.0 Executive Report (1).pdf pages 38, 48
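To put rough numbers on the "nodes aren't comparable" point, here are frequently cited peak logic transistor densities (MTr/mm²). These are approximations drawn from public reporting such as WikiChip, not official vendor specs:

```python
# Approximate peak logic transistor density, millions of transistors per mm^2
density_mtr_mm2 = {
    "Intel 14nm":   37.5,
    "Intel 10nm":  100.8,
    "TSMC 16nm":    28.9,
    "TSMC 10nm":    52.5,
    "TSMC 7nm":     96.5,
    "Samsung 10nm": 51.8,
}

# Despite the bigger number in the name, Intel's 10nm is about as dense as TSMC's 7nm
print(density_mtr_mm2["Intel 10nm"] / density_mtr_mm2["TSMC 7nm"])
```

By these figures, TSMC's and Samsung's "10nm" land closer to a half-node past Intel's 14nm, which is exactly the naming mismatch described above.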


----------



## FordGT90Concept (Aug 31, 2018)

It's branding more than science, at least what goes on the press releases.


----------



## R0H1T (Aug 31, 2018)

WikiFM said:


> After some hours readying about process nodes, I discovered that since 2013 each foundry creates its own process node. I concluded that TSMC is ready with its 7nm because its 7nm are simplier than the 10 nm of Intel. How did I get that idea? Well according to official guidelines about the physical properties of transistors of the ITRS, the specs hasn't been fullfilled by TSMC since its 16 nm, which is more similar to its 20 nm than the official 16/14nm spec. Even its 12nm is more similar to its 20nm than the official 16/14 nm spec.
> 
> But TSMC hasn't been the only one cheating, Samsung's 10 nm is actually 14 nm according to the official specs, and Samsung's 14 nm is actually more similar to its 20 nm than the official 16/14 nm spec too. I feel so stupid for not knowing all these before, *I really believed that each process node was the same for every foundry.*
> 
> ...


Except that was never the case, ever. For instance Intel's (new) 10nm isn't the same 10nm they demonstrated previously, IIRC it's less dense than they'd hoped for.


----------



## WikiFM (Aug 31, 2018)

R0H1T said:


> Except that was never the case, ever. For instance Intel's (new) 10nm isn't the same 10nm they demonstrated previously, IIRC it's less dense than they'd hoped for.


My comparison was with the old 10 nm, the one released in the i3-8121U. So now even Intel won't meet the official 10 nm spec, which it had always done with previous nodes.


----------



## StrayKAT (Aug 31, 2018)

FordGT90Concept said:


> It's branding more than science, at least what goes on the press releases.



Branding still has a drastic effect, though. Look how the delays messed with Intel's stock, even though their 10 nm isn't all that different from the other foundries', who are also experiencing setbacks. Since Intel didn't "brand" their 10 nm as 7 nm, it looks worse than the others.


----------



## Valantar (Aug 31, 2018)

WikiFM said:


> My comparison was with the old 10 nm, the one released in the i3-8121U. So now even Intel won't meet the official 10 nm spec, which it had always done with previous nodes.


a) "Released" is a strong word in this case.

b) Process nodes have never been the same across vendors. If so, they'd have to cooperate on R&D. Not that that's a bad idea (I'd say it's a great idea, frankly), but it's not happening.

c) It's been thrown around quite a lot that Intel 10nm (at least the "old" one) was comparable in most metrics to the upcoming 7nm nodes from other fabs. Intel is generally seen as conservative in the naming of their nodes.

d) Node naming is pure marketing for _all _vendors, Intel as much as anyone else. It's been a long time since node name = feature size, if that ever was the case. The standards set by ITRS seem to be viewed as guidelines at best.


----------



## ppn (Sep 2, 2018)

So 21 TFLOPS suggests a potential 5120 shaders at 2025 MHz.


----------



## Bytales (Sep 3, 2018)

Vya Domus said:


> This is going to be a turning point. If the 20 series is successful and people pay up it will likely mark the end of any effort AMD will make to compete in the high end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want to have the same margins then they'll have to ask the same prices, either way the consumer will be screwed.
> 
> You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.



Nope, I have sworn I won't ever buy Nvidia or Intel again. I sold my dual Xeon 2690 v3 and dual 1080 Ti, and got myself a 32-core EPYC and two water-cooled Radeon Frontiers. Screw Nvidia and screw Intel. They won't ever get my money again. So I will go with whatever AMD brings to the table; for me, Intel and Nvidia have ceased to exist.


----------



## Vya Domus (Sep 3, 2018)

ppn said:


> So 21 TFLOPS suggests a potential 5120 shaders at 2025 MHz.



6144 shaders at 1700 MHz more likely.
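For reference, both figures follow from the usual FP32 throughput formula: each shader retires one fused multiply-add per clock, counted as two floating-point operations. A quick sketch:

```python
# FP32 throughput: each shader does one FMA per clock, which counts
# as 2 floating-point operations.
def tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz * 1e6 / 1e12

# Both configurations discussed above land on roughly 21 TFLOPS:
print(round(tflops(5120, 2025), 1))  # 20.7
print(round(tflops(6144, 1700), 1))  # 20.9
```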


----------



## carex (Sep 4, 2018)

On the CPU front, if AMD wants games to be optimized for 8+ cores, it has to supply 8-core Zen to the console market; as for the GPU, Navi will be its bet.
These days only a few games utilize a full 16 threads.


----------



## Captain_Tom (Sep 4, 2018)

RichF said:


> They may not be able to do it, but I have no doubt that people at both companies are trying to create new coins that will cause another crypto boom so they can sell out their products.



LOL, what?!  You clearly have no understanding of cryptocurrencies.  AMD can't just "Make another Ethereum or Monero."   Furthermore, they don't need to because mining is still profitable for the right people.



Vya Domus said:


> 6144 shaders at 1700 MHz more likely.



That's my guess at this point too.  Slightly higher clocks, but more cores.  Vega is actually every bit as efficient as Pascal if you don't ramp up the clock speeds; just look at AMD's APUs.

I really hope they launch this card (even a cut-down version) for gamers this year.  50% more TFLOPS and double the bandwidth would make it capable of 4K@144Hz.


----------



## RichF (Sep 7, 2018)

Captain_Tom said:


> LOL, what?!  You clearly have no understanding of cryptocurrencies.


That's possible but not at all illuminating.


Captain_Tom said:


> AMD can't just "Make another Ethereum or Monero."


Why is that? Is there an international governing body that creates them, preventing all others from coming into being?


Captain_Tom said:


> Furthermore, they don't need to because mining is still profitable for the right people.


Is Vega still sold out because of mining?


----------



## Captain_Tom (Sep 7, 2018)

RichF said:


> That's possible but not at all illuminating.
> 
> Why is that? Is there an international governing body that creates them, preventing all others from coming into being?
> 
> Is Vega still sold out because of mining?



What question do you _actually _want answered?  

Do you want me to talk to you about the fundamentals of the Future of Money?  Do you want to know Vega's true use cases?  What?


----------



## RichF (Sep 7, 2018)

Bytales said:


> Nope, I have sworn I won't ever buy Nvidia or Intel again. I sold my dual Xeon 2690 v3 and dual 1080 Ti, and got myself a 32-core EPYC and two water-cooled Radeon Frontiers. Screw Nvidia and screw Intel. They won't ever get my money again. So I will go with whatever AMD brings to the table; for me, Intel and Nvidia have ceased to exist.


TechRadar just ran an article that suggested Navi will be for the "consoles" (low-end PCs masquerading as a separate platform) only. Or, "PC" gamers will get leftovers from a design that is targeted toward the "consoles".

https://www.techradar.com/news/amd-navi


----------



## Captain_Tom (Sep 7, 2018)

RichF said:


> TechRadar just ran an article that suggested Navi will be for the "consoles" (low-end PCs masquerading as a separate platform) only. Or, "PC" gamers will get leftovers from a design that is targeted toward the "consoles".
> 
> https://www.techradar.com/news/amd-navi



LOL your credibility is now officially dead.

The $500 XBX has a card that competes with a 1070 Ti in gaming, and yet you call it "low end". 

Hey Genius - the PS4 has more or less a 7870, but that is technically the same arch as the 7970 GHz Edition. It took the 780 Ti, over a year later, to beat that.  All we know is that Navi is FINALLY built for gaming first; that's it.  That should scare any Nvidiot, but it should make every gamer happy.


----------



## RichF (Sep 7, 2018)

londiste said:


> Delta compression? Both have it but Nvidia's has so far been considered (and measured) to be much more effective.
> Bandwidth might be a problem but not severe enough to solve with a solution as expensive as that.
> 
> APU is just CPU and GPU on one die/package. Nothing really stops AMD from replacing Jaguar cores with Zen/+/2. AMD can easily do a Raven Ridge with larger GPU and very likely is doing exactly that.


Apparently, AMD is producing a Zen-based APU for the Chinese market that will have HBM. It will reportedly be deployed there in desktops and then in a Chinese-market console.

https://www.extremetech.com/gaming/274919-more-details-on-the-new-amd-powered-chinese-console

The main drawback, in terms of the standard PC gaming platform, is the RAM split:



			
				Hruska said:
			
		

> The same pool of RAM is being used for both CPU and GPU, with a likely 4GB subdivision.



The Vega tech that helps with limited VRAM will likely help with the 4 GB of video memory, but programs might have an issue with 4 GB of system RAM.


Valantar said:


> Nobody is going to be interested in the new, "hot" crypto that's easy to mine if nobody is willing to buy or sell it. Also, difficulty is not what's behind the bubble bursting, but the simple fact that a system based purely on gambling and BS claims of value isn't sustainable over time. In other words, inventing a new currency changes nothing. If anything, there's a glut of currencies, and they're not helping anything.


I heard the same things after Bitcoin. Ethereum became the next big thing, though. It doesn't seem at all certain to me that there won't be another "next big thing" in crypto. Having many existing coins also doesn't prevent that possibility. There are plenty of examples in tech where the market was saturated by players — and yet there were next big things. The NES. The PlayStation. The XBox. The greatest example is the IBM PC, which came onto a market with _a lot_ of microcomputers already available. It had a big corporation behind it, which is why it succeeded. It wasn't its technical merits that made it sell. There were also enough well-known search engines, including metasearch engines, that plenty of people didn't predict Google.

Gambling has existed for a long time and there is a lot of money involved in it.

My point about Jaguar is that the reason it was used, and especially kept for a second iteration, is the artificial effect of duopoly. As with monopoly, only with less severity, consumers get less product for their money. There are benefits to monopolization, but the overall picture is a negative for consumers.


WikiFM said:


> I feel so stupid for not knowing all this before; I really believed that each process node was the same for every foundry.


Truthiness abounds in the tech business.


----------



## londiste (Sep 7, 2018)

RichF said:


> Apparently, AMD is producing a Zen-based APU for the Chinese market that will have HBM. It will reportedly be deployed there in desktops and then in a Chinese-market console.
> https://www.extremetech.com/gaming/274919-more-details-on-the-new-amd-powered-chinese-console


The HBM part was a misreading in the initial batch of news; it's GDDR5. It's not that crucial, though, as long as the memory is fast enough, whatever type it is.


RichF said:


> The main drawback, in terms of the standard PC gaming platform, is the RAM split:
> The Vega tech that helps with lower VRAM will likely help with the VRAM being 4 GB but programs might have an issue with 4 GB of system RAM.


The split itself should not be a problem; software these days can handle it dynamically enough. A minor thing about the dynamic pool is that one memory type may be more or less suitable for the CPU or the GPU. GDDR trades latency for bandwidth, for example.


----------



## Valantar (Sep 7, 2018)

RichF said:


> Apparently, AMD is producing a Zen-based APU for the Chinese market that will have HBM. It will reportedly be deployed there in desktops and then in a Chinese-market console.
> 
> https://www.extremetech.com/gaming/274919-more-details-on-the-new-amd-powered-chinese-console
> 
> ...


That article explicitly states that the console uses GDDR5, not HBM. As confirmed by the pictures, and every other publication covering it.

As with other APUs in Windows, it likely uses dynamically allocated shared memory (with a fixed base amount), so VRAM allocation is likely adjusted by need. This works perfectly fine.



RichF said:


> I heard the same things after Bitcoin. Ethereum became the next big thing, though. It doesn't seem at all certain to me that there won't be another "next big thing" in crypto. Having many existing coins also doesn't prevent that possibility. There are plenty of examples in tech where the market was saturated by players — and yet there were next big things. The NES. The PlayStation. The XBox. The greatest example is the IBM PC, which came onto a market with _a lot_ of microcomputers already available. It had a big corporation behind it, which is why it succeeded. It wasn't its technical merits that made it sell. There were also enough well-known search engines, including metasearch engines, that plenty of people didn't predict Google.
> 
> Gambling has existed for a long time and there is a lot of money involved in it.


Was there ever a time before when people were scared of buying (and desperate to sell) Bitcoin? I sure can't remember that. Skepticism, sure, but not the "run away" attitude seen today. The situation is fundamentally different. This of course doesn't mean that a new wave of crypto won't appear - the financial "industry" doesn't like to leave potential ways of generating money alone for long, even if they're currently terrified of it. But it will likely take some time. 



RichF said:


> My point about Jaguar is that the reason it was used, and especially kept for a second iteration, is the artificial effect of duopoly. As with monopoly, only with less severity, consumers get less product for their money. There are benefits to monopolization, but the overall picture is a negative for consumers.


That's a bit of a stretch. Of course, we could speculate that if the X86 CPU market wasn't a duopoly, there might have been an established low-power CPU arch available in 2011-2012 when this console generation was designed, but that's rather meaningless speculation.

As for the mid-gen refreshes (Pro and X), they both arrived too early to implement Zen - the design wasn't yet ready for the PS4 Pro, let alone tested and known to perform outside of AMD's labs. The One X arrived later, but still too early for an implementation like that (which requires the design to be very well tested and known good). Then there's the issue of dramatically increasing CPU power in games - how do you make games scale for CPU power across such radically different designs? This makes sense for a "new generation" (which is becoming an increasingly meaningless term in the age of X86 consoles, but still makes sense in terms of software development), but not for a mid-gen refresh - you'd end up with games only working on the refresh, pissing off all the people who bought the other console 1-4 years earlier. Consoles are expected to have 5-8-year life cycles, not ~3 like a PC.

Then there's the issue of die area and cost. The Scorpio Engine is a 359mm2 die (on TSMC 16FF), of which the 8 CPU cores make up a tiny fraction. For a console, this is _huge_. In comparison, an original Zen Zeppelin (8c) on GloFo 14nm is 213mm2. Of course that has components that could be removed in a console (such as the DDR4 controller, USB, SATA, and PCIe PHY), but adding 8 Zen cores would still balloon die size dramatically. The addition of L3 cache alone would grow the die noticeably. Even reducing it to a single CCX (44mm2 when excluding everything else) for 4c8t would still entail a noticeable size increase, not to mention the issue of patching the OS, games and apps to account for 4 fast and 4 slow threads. Then there's the licencing cost of Zen cores vs. Jaguar cores, which would likely be in the 5-10x range given how new the arch was. And $1000 consoles don't really exist - for a good reason, as they wouldn't sell. $500 consoles usually struggle. This has little to do with a duopoly, and much more to do with the realities of chip design, chip production and fab costs. The tech simply wasn't ready in time, and while an argument can be made that a higher power/IPC/clock arch with fewer cores would have been better for gaming in the short term, that's not the direction either of the big console makers went (and thanks to the 8-core designs, consoles have a lot of cool functionality that would have been impossible otherwise). This is also likely due to them seeing single core perf flatlining and wanting to prepare their developers for the multi-core future. IMO, that's sound long-term planning.


----------



## cucker tarlson (Sep 7, 2018)

Captain_Tom said:


> The $500 XBX has a card that competes with a 1070 Ti in gaming, and yet you call it "low end".


Is the 1070 Ti a 1080p/60 GPU now?

The XBX can run 3200x1800 at 30 fps or 1080p at 60, which an RX 570 can easily do at console quality.

The 1070 Ti is faster than the 10.5 TFLOPS Vega 56 and close to the 12.5 TFLOPS Vega 64, while the XBX has 6. Talk about losing credibility.


----------



## londiste (Sep 7, 2018)

The Xbox One X's GPU is not comparable to a GTX 1070 or GTX 1070 Ti.
It is a bit larger than an RX 580 but runs at lower clocks; it's comparable to an RX 580 (or a GTX 1060 6 GB from the other camp).
It has a wider memory bus and thus better bandwidth going for it, but at the same time that is a resource it needs to share with the CPU.

Strictly midrange.


----------



## Captain_Tom (Sep 7, 2018)

cucker tarlson said:


> 1070Ti is a 1080p/60 GPU now ?
> 
> XBX can run 3200x1800 at 30 fps or 1080p at 60, which rx570 can easily do at console quality.
> 
> 1070Ti is faster than 10.5 TFlop Vega 56, and close to 12.5 TFlop Vega 64, while xbx has 6. Talk about losing credibility



I am no fan of XBOX, but you are flat-out wrong.  The 1070 isn't exactly some masterpiece of 4K gaming lol:

https://tpucdn.com/reviews/Performance_Analysis/Monster_Hunter_World/images/2160.png

^1800p@30 is about what the 1070 is capable of too (at best).  Oh, and your TFLOP comparisons are hilarious: most people do not seem to understand that Nvidia reports its "TFLOPS" with the card running at its base clock.  Thus an Nvidia card that by default boosts to 1800-2000 MHz is actually comparable in TFLOPS to its AMD counterparts.  *The 1080 Ti is really a 13-15 TFLOPS card*.
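To put numbers on that, here is how much a quoted TFLOPS figure moves with the clock it is computed at. The core count and rated boost clock below are Nvidia's published GTX 1080 Ti specs; the ~1900 MHz sustained in-game clock is an assumption, since it varies per card and cooler:

```python
# FP32 TFLOPS = 2 ops (one FMA) per shader per clock.
def tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz * 1e6 / 1e12

cores = 3584                          # GTX 1080 Ti CUDA cores
print(round(tflops(cores, 1582), 1))  # rated boost clock: 11.3
print(round(tflops(cores, 1900), 1))  # assumed sustained boost: 13.6
```

So whether the card "is" an ~11 TFLOPS or a ~13+ TFLOPS part depends entirely on which clock you plug in.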



londiste said:


> Xbox One X's GPU is not comparable to GTX1070 or GTX1070Ti.



The Polaris-based XBX chip has a 384-bit bus and 10% more SPs.  That makes it easily 30% better overall, and thus ahead of a 1070 (at least at 1440p-4K).


----------



## cucker tarlson (Sep 7, 2018)

Captain_Tom said:


> https://tpucdn.com/reviews/Performance_Analysis/Monster_Hunter_World/images/2160.png
> 
> ^1800p@30 is about what the 1070 is capable of too  (at best).


Lol, stunning lack of knowledge. A GTX 1080 can run consistently in the 40s in The Witcher 3 at the high preset at *5K*; I tested it myself. And a 1070 can't even do 4K at 30?

I took the latest TPU review (of the RX 580 Mech) and calculated what a 1070 can do at 4K. It averages 41 fps across 21 games at *native 2160p, PC Ultra quality*. You mean that's equivalent to the XBX's 30 fps at 1800p, console quality?



Captain_Tom said:


> Oh, and your TFLOP comparisons are hilarious: most people do not seem to understand that Nvidia reports its "TFLOPS" with the card running at its base clock.  Thus an Nvidia card that by default boosts to 1800-2000 MHz is actually comparable in TFLOPS to its AMD counterparts.  *The 1080 Ti is really a 13-15 TFLOPS card*.


You don't seem to understand. I took the Nvidia equivalent of AMD's card for my comparison. Vega 56 is 10.5 TFLOPS; the Xbox X has 6. The 1070 Ti is slightly faster than Vega 56, so it's not in the same range as the Xbox X.


----------



## mtcn77 (Sep 7, 2018)

It is always hilarious between opposing-side fanboys and their blanket assumptions.
The Xbox X is not in the same class as a 1070, and it is not about performance. It is a console, and for new HLSL extensions the early-access path is via one such console. Notice that, with all the bells and whistles that come featured on a PC-class graphics solution, there is an attached compromise that, at best, blurs the entire Z-fighting domain and, at worst, applies no filter to flickering.
We know Moore's Law is ever marching forward and that memory latency will grow ever greater, so the ideal approach is through more complex filtering of displayed pixels.
There are pointers to pass to the pixel shader so it doesn't interpolate non-native texels that belong to different polygons; that should elevate box-filtering efficiency to 4 instead of ¼ when discontinuities are churned together. Also, very ALU-costly bilateral reconstruction filters can harvest seamless gradients from noisy edges. These are some ways in which Moore's Law scales well with visual quality; otherwise, more pixels just mean more interpolation-ridden edges at the cost of 4x the bandwidth.


----------



## JRMBelgium (Sep 20, 2018)

cucker tarlson said:


> Do you understand the concept of time and node size? Nvidia was light years ahead of AMD with Maxwell



Do you understand the concept of budget? The fact that AMD is already matching Intel's CPU performance with such a limited budget, after years of illegal price arrangements by Intel, is already amazing. You want them to keep up with Nvidia as well? I don't understand how people can have such unrealistic expectations.

Just wait for Ryzen 3 next year. If they can beat Intel and release a GPU next year that competes with the GTX 1080 Ti at lower power consumption and a lower price, I will be more than happy to upgrade my Ryzen CPU and move to Navi.


----------



## gamerman (Sep 21, 2018)

I can't believe AMD's endless BS: take and take, and cheat, and I'd say lie...

AMD CAN'T release anything before 2019. Or sure, it can, but it would only be a fourth version of Vega, and IF AMD tries to compete against even the GTX 1000 series, it has to do HUGE work and it also needs a lot of cash.

Most importantly, it needs engineering work.


All of that AMD does NOT have.


A 7 nm GPU? LOLOLOLOL.


If anyone releases a 7 nm GPU, it will surely be Nvidia or Intel. That's one big lie from AMD... or well, it could release one, but in late 2019, when Nvidia releases Ampere or whatever GPU comes AFTER Turing.


Even when AMD releases a smaller-node GPU, it ALWAYS eats more power and is slower anyway.

Hmm, I can't continue anymore... just look at AMD's last three GPUs: the 200 series, the Fury X, and Vega.

All of them eat a LOT of power and are still slower, and the fact that AMD needs to water-cool its fastest cards tells you only one thing: junk!

I hate AMD because they cheat so much and try to sell junk to people.

AMD can never, ever make an RTX 2080-level GPU.


----------



## cucker tarlson (Sep 21, 2018)

Jelle Mees said:


> Do you understand the concept of budget? The fact that AMD is already matching Intel's CPU performance with such a limited budget, after years of illegal price arrangements by Intel, is already amazing. You want them to keep up with Nvidia as well? I don't understand how people can have such unrealistic expectations.
> 
> Just wait for Ryzen 3 next year. If they can beat Intel and release a GPU next year that competes with the GTX 1080 Ti at lower power consumption and a lower price, I will be more than happy to upgrade my Ryzen CPU and move to Navi.


At this point I'm gonna say that if AMD is not paying you for your posts, then you're getting screwed, cause they damn well should.
Yes, I do understand AMD has like 1/10th of Nvidia's budget. So what? No one can tell me to root for AMD if Nvidia has a better card. I'm not brand loyal; I'd buy a VIA CPU and a Matrox Parhelia GPU next week if they dropped something better than what Nvidia and Intel have. AMD reaped what they sowed, if you ask me: they tried to kill ten birds with one stone, while Nvidia went for heavy segmentation, with a very focused approach for each market, and new architectures. AMD tried to scale GCN for everything from small APUs through consoles, mid-range and enthusiast gaming to huge prosumer chips, and to profit from mining at the same time. They still did remarkably well, but high-end gaming proved too much for them; you can't have it all with one architecture.


----------



## Deadstaar (Sep 23, 2018)

TheinsanegamerN said:


> And AMD, once again, leaves an entire segment to nvidia for a third generation in a row.
> 
> AMD sould just sell radeon by this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from nvidia.



Don't think you understand. Radeon hasn't been "enthusiast level" since the ATI days. Recall when ATI was whooping Nvidia's a$$ right up until they sold the Radeon line, back during the Elder Scrolls Oblivion days.

Let's step back and look at what the AMD product line is, CPU- and GPU-wise. They are about the performance-to-dollar ratio, not cramming a bunch of next-gen silicon onto a die and selling it for top dollar. If Nvidia is selling a flagship card for nearly $599, then AMD is selling theirs for $399. It won't be faster, but it will be at a sweeter performance-to-dollar ratio. AMD has always been this, since I've been using them, back in the Turion and Thunderbird days. They were never, ever faster than an Intel chip... not even the legendary Phenom II. They just, every once in a while, produce a legendary chip that rivals the competition at 2/3 the cost. I can't remember a time when one of these "magic silicons" was ever faster than a flagship Intel or Nvidia product. Unless you go back to the ATI GPU days, which always had a habit of trading blows with Nvidia, since before Nvidia was any good... back in the 3DFX days. AMD = budget minded. They steal business away by being attractive at that price point.


----------



## cucker tarlson (Sep 23, 2018)

You know why they're not selling their flagship as high as Nvidia is? You think they planned to release a bigger die with HBM2 at a lower price than the 1080 Ti? Stop selling that story; we've heard it over and over again. Radeon is so budget-friendly that Vega 64 sells at 3000 PLN here while the 1080 is 2200 PLN. I bought my 1080 new in 2016 for less than I'd pay for a V64 now.


----------



## Valantar (Sep 23, 2018)

cucker tarlson said:


> At this point I'm gonna say that if AMD is not paying you for your posts, then you're getting screwed, cause they damn well should.
> Yes, I do understand AMD has like 1/10th of Nvidia's budget. So what? No one can tell me to root for AMD if Nvidia has a better card. I'm not brand loyal; I'd buy a VIA CPU and a Matrox Parhelia GPU next week if they dropped something better than what Nvidia and Intel have. AMD reaped what they sowed, if you ask me: they tried to kill ten birds with one stone, while Nvidia went for heavy segmentation, with a very focused approach for each market, and new architectures. AMD tried to scale GCN for everything from small APUs through consoles, mid-range and enthusiast gaming to huge prosumer chips, and to profit from mining at the same time. They still did remarkably well, but high-end gaming proved too much for them; you can't have it all with one architecture.


I don't quite know what you're talking about here - AMD and Nvidia's GPU arch strategies have been very similar for quite a while. Nvidia uses their architectures for just as wide a range of products as AMD, if not more - from self-driving car tech to the Switch and Shield series (though those are based on old-ass tech, that's not for any other reason than cost and availability) to their entire range of GPUs for both workstation, server, HPC, AI and gaming, they're all based on the same architecture. They don't segment any more or less than AMD, outside of historically disabling more FP64 features in their consumer parts than AMD. The main difference is that Nvidia's R&D budget _far_ outstrips AMD's, and always has (including ATI).

As for your ideological choices, those are yours to make, but arguing that consumers have zero responsibility for the large-scale consequences of our aggregated purchases is a bit naive. At the very least, it entirely strips you of the right to complain when monopolies or near-monopolies drive up prices and make hardware impossible to afford for regular users. If you're only motivated by pure performance numbers, you're by default rooting for the company with the largest development resources, and as such promoting monopolistic market development. Again: this is your right, but you need to be aware of the systems your decisions play a part in. Brand loyalty is, in its pure form, a really dumb concept. We don't owe giant corporations anything. But when a market is dominated by a few large players, rooting for the underdog is good for everyone.



Deadstaar said:


> Don't think you understand. Radeon hasn't been "enthusiast level" since the ATI days. Recall when ATI was whooping Nvidia's a$$ right up until they sold the Radeon line, back during the Elder Scrolls Oblivion days.
> 
> Let's step back and look at what the AMD product line is, CPU- and GPU-wise. They are about the performance-to-dollar ratio, not cramming a bunch of next-gen silicon onto a die and selling it for top dollar. If Nvidia is selling a flagship card for nearly $599, then AMD is selling theirs for $399. It won't be faster, but it will be at a sweeter performance-to-dollar ratio. AMD has always been this, since I've been using them, back in the Turion and Thunderbird days. They were never, ever faster than an Intel chip... not even the legendary Phenom II. They just, every once in a while, produce a legendary chip that rivals the competition at 2/3 the cost. I can't remember a time when one of these "magic silicons" was ever faster than a flagship Intel or Nvidia product. Unless you go back to the ATI GPU days, which always had a habit of trading blows with Nvidia, since before Nvidia was any good... back in the 3DFX days. AMD = budget minded. They steal business away by being attractive at that price point.


This isn't true. While you have a point in taking value into the consideration ("who has the fastest CPU?" is a meaningless question if the winner is $10000), AMD has not only played the value card. My Fury X cost as much as a 980Ti, and roughly matched its performance (though it did include water cooling, which similarly priced 980Tis did not, but that's hardly a value play). Vega cards aren't cheap either. The 7000-series GPUs were perhaps a bit cheaper than Nvidia, but also for a time the fastest on the market, bar none. Polaris was a clear budget market segment play, and a very smart one at that. Ryzen gave users more cores for less money, but wasn't really a "budget" option - remember, the 1800X was $499 when it launched. Even if Intel's cheapest 8-core at the time was far more, that's not a budget CPU by any stretch of the imagination. Their prices have been cut as Intel has responded with more cores and lower prices, but Ryzen is not a budget alternative to Intel Core - it's an alternative. Period. In short: you're oversimplifying things. AMD has in the past 3-ish years executed an excellent strategy in turning around their CPU business with limited resources. This has led to GPU development having a lower priority, and the focus has been on compute and datacenter, where Vega excels, and where margins are far higher than gaming. Now that Zen is here, Zen2 is close to arriving, and AMD is profitable again, it's likely that they've been pouring some of that sweet Zen cash into development of Navi, and I'm hopeful that Navi will come close to catching Nvidia's current efficiency advantage (as that's necessary to reach performance parity - cooling more than 250-ish watts in an AIC isn't really feasible), but we'll have to see. But if AMD can deliver that with Navi, and thus compete in the high end, there's no doubt in my mind that they're going to try. Hopefully they won't rise to Nvidia's recent idiotic price levels ($1200 for a consumer-level GPU? 
Hell no.), but I'm definitely expecting a ~$700 follow-up to the Fury X and Vega 64.


----------



## medi01 (Sep 24, 2018)

cucker tarlson said:


> which rx570 can easily do at console quality.



Oh, console quality? Let me guess what that is exactly. This kind of stuff (*run on a vanilla PS4, not even a Pro; the XBX is 2+ times faster than that*):
















or perhaps you have mistaken regular consoles for Nintendo's handheld with Huang's "Shield" chip in it?






well, yeah, that one is not in the same league, but it's a portable, after all.


----------



## cucker tarlson (Sep 24, 2018)

Yes, the first one. It looks good; 4K medium-high would look the same in a PC game. Also, is this even gameplay footage or a cutscene, and most importantly, what fps is it running at?

This is WD2 running at native 4K, high settings, at 60 fps. The PS4 Pro can't do native 4K at 30; an RX 570 would do that easily, even at more than 30 fps, and the 1080 is not 2x faster.











I'm not saying there aren't gorgeous games for the PS4. I'd like a PS4 Pro myself too, to play GoW 3 or Uncharted, but when you take a well-optimized PC game, it'll run at a higher resolution and framerate than the PS4 version.













----------

