
AMD 7nm "Vega" by December, Not a Die-shrink of "Vega 10"

Still, that is how TR works.
Yes, but that's multiple chips on one package, not a single big chip like Vega 20. That's what I meant; Nvidia has a more efficient way to connect those.
 
Yes, all those special features that AMD fans were screaming would save AMD: DX12, async compute, Mantle, Vulkan, TressFX, the list goes on.

Special features don't matter unless you can get most developers to use them, and only Nvidia's GameWorks has seen such success. For five years "special features" were going to be AMD's ace up their sleeve, and for five years Nvidia has dominated them on sales. AMD needs to focus less on special features they can't support and more on producing fast GPUs.


Performance in one application ≠ performance overall. You could just as easily point to Nvidia's gaming performance and CUDA performance in pro applications and say "You can doubt it, but that shows how they really stack up".

Regardless of how good Vega is (which is highly subjective, depending on the application), Vega was over a year late to market, power hungry, with very little OC capability, and was hampered by minuscule availability and HBM production. The result was Nvidia capturing a huge portion of the market using now two-year-old GPUs, because AMD never bothered to show up. You can't just leave an entire generation behind and expect people to continue supporting your brand.

AMD now considering leaving a second generation to Nvidia does two things. First, it creates an even stronger idea that AMD simply can't compete on the high end, reinforcing the "mindshare" that many AMD fans are convinced exists; in reality, it is people being uncertain about investing in a brand when said brand cannot consistently show up to compete. Second, it gives Nvidia a captive market to milk for $$$, which helps keep them economically ahead of AMD, able to make bigger investments in developing new tech, and perpetually keeps AMD in a position of catching up.
You cannot expect to compete when developers won't utilise your hardware properly. Look at how long it took them to support specific feature sets in AMD hardware: right up until Nvidia launched their next generation. So it is pointless to argue when you are losing on time to market, however much you beat the competition on paper.

Yes, but that's multiple chips on one package, not a single big chip like Vega 20. That's what I meant; Nvidia has a more efficient way to connect those.
Right, because TR and EPYC are small chips?
 
I think they will still use GCN in some revised form, but just glue them together like TR/EPYC
I believe a certain RTG head honcho (don't know the name) stated in an interview that there will be no Infinity Fabric-like solution on GPUs in the near future.
 
it creates an even stronger idea that AMD simply can't compete on the high end, reinforcing the "mindshare" that many AMD fans are convinced exists.

The "mindshare" that AMD fanboys have is that AMD can't compete and this reinforces it ? What the hell ?
 
And AMD, once again, leaves an entire segment to Nvidia for a third generation in a row.

AMD should just sell Radeon at this point. They can do really well with GPUs, or with CPUs, but not both. Sell Radeon to somebody that can actually produce decent GPUs. Vega was a year late and drew far too many watts, and is about to get eclipsed by a new generation of GPUs from Nvidia.
You keep forgetting that AMD makes the chips in both the PlayStation and Xbox consoles - current and next-gen - and that wouldn't be the case if they didn't own ATI.
Also, the cards that focus on the midrange and budget market segments comprise something like 90% of the whole market, so it makes sense for them to ignore the top end.
These GPUs grab headlines, but it's only enthusiasts who actually buy them.
 
The aunty is letting the nephew have his fun. Raja needs to step it up with that Intel R&D budget. 2020 is too far away.
 
I think they will still use GCN in some revised form, but just glue them together like TR/EPYC

If it works, who cares, really? Ryzen is proof of that. It's a "glued together" CPU, but it works and it's super cost efficient. And as it turns out, still very power efficient too.

Looks like they're gonna keep Vega for compute, where it really shines, and focus on making Navi gamer-focused. Really, NVIDIA's way of splitting compute and gaming rendering into two tiers works, whereas AMD trying to combine both just doesn't get the momentum. Maybe it turns out GCN can work well with ray tracing with "minor" changes, but it's still losing steam for classic game rendering. But that's all so far away it's hard to say anything.
 
If it works, who cares, really? Ryzen is proof of that. It's a "glued together" CPU, but it works and it's super cost efficient. And as it turns out, still very power efficient too.

Looks like they're gonna keep Vega for compute, where it really shines, and focus on making Navi gamer-focused. Really, NVIDIA's way of splitting compute and gaming rendering into two tiers works, whereas AMD trying to combine both just doesn't get the momentum. Maybe it turns out GCN can work well with ray tracing with "minor" changes, but it's still losing steam for classic game rendering. But that's all so far away it's hard to say anything.
Afaik, ray tracing got adoption because rasterization hit its limits. This is not so bad for AMD in that sense; they had fewer rasterizers installed.
 
So all of the eggs are in the Navi basket, yet it feels like we're looking across the Sahara desert for a glimmer of light reflecting off of a distant diamond when the sun is just so. Sadness, indeed.
 
AMD is winning on density. They still have features unsupported by DirectX, which could turn the tables. You are free to doubt that balance, but the case of GPU mining should provide pointers on how they really stack up against one another.
Density of what? Transistor density is pretty much the same, as the chips are manufactured in the same place. In general AMD has had larger chips competing with smaller Nvidia chips.
Both vendors have features unsupported by DirectX, mostly because they are not that mainstream or just not that useful.
GPU mining does not have a single "this is good" GPU. There are different algorithms for different coins that favor different architectures or aspects of GPUs, as well as memory systems.

This is going to be a turning point. If the 20 series is successful and people pay up, it will likely mark the end of any effort by AMD to compete in the high-end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want the same margins, then they'll have to ask the same prices; either way the consumer gets screwed.
AMD has had lower margins for a while now. When it comes to the 20 series, Nvidia very likely does not have their usual margins either. These are expensive cards to produce.

IF has twice the bandwidth of NVLink, afaik.
Both are scalable to a very large degree. Neither is inherently slower or faster. The specific implementation of an interconnect is matched to some optimization point for the use case.

Right, because TR and EPYC are small chips?
Yes, they are: two or four Zen/Zen+ dies of 209/213 mm² each.
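To illustrate why building out of small dies is attractive (the "super cost efficient" point above), here is a toy comparison of four ~213 mm² dies against one hypothetical monolithic die of the same total area, using a simple Poisson yield model with a purely illustrative, assumed defect density:

```python
from math import exp

# Toy yield comparison: one big die vs. four small dies of the same total area.
# D0 is an assumed defect density for illustration only (defects per mm^2),
# not a real foundry figure.
D0 = 0.001

def poisson_yield(area_mm2: float, d0: float = D0) -> float:
    """Simple Poisson yield model: fraction of dies with zero defects."""
    return exp(-d0 * area_mm2)

small = 213.0        # roughly a Zen/Zen+ die
mono = 4 * small     # hypothetical monolithic equivalent

print(f"One {mono:.0f} mm2 die:   {poisson_yield(mono):.1%} yield")   # ~42.7%
print(f"Four {small:.0f} mm2 dies: {poisson_yield(small):.1%} each")  # ~80.8%
```

Under those assumed numbers the small dies yield roughly twice as well, which is the usual argument for "gluing" chips together.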
 
I think Navi will be an assembly of 4x enhanced Polaris cores at 7 nm, glued together on a single PCB. There have been hints of this gluing together since Vega emerged, and they have tons of experience from Ryzen. Or maybe smaller Vega cores.
 
AMD has three years to completely capitalise on Intel's failings before Intel can realistically catch up in the CPU space. Furthermore, AMD has an incredibly solid compute architecture in Vega, and a market that's happy to pay *only* a few thousand per card instead of tens of thousands, compared to the gaming market that keeps whinging for 'competition' but, even when something faster, cheaper, more efficient, or all three is available, still buys Nvidia.

AMD GPU enthusiasts, I wouldn't be expecting anything until mid 2019 at the earliest, and in all honesty, I wouldn't expect a remotely competitive architecture until 2020.
 
I think Navi will be an assembly of 4x enhanced Polaris cores at 7 nm, glued together on a single PCB. There have been hints of this gluing together since Vega emerged, and they have tons of experience from Ryzen. Or maybe smaller Vega cores.
It won't. There was a guy high up in AMD's GPU division who specifically said Navi is not MCM.
 
Density of what? Transistor density is pretty much the same, as the chips are manufactured in the same place. In general AMD has had larger chips competing with smaller Nvidia chips.
Both vendors have features unsupported by DirectX, mostly because they are not that mainstream or just not that useful.
GPU mining does not have a single "this is good" GPU. There are different algorithms for different coins that favor different architectures or aspects of GPUs, as well as memory systems.

AMD has had lower margins for a while now. When it comes to the 20 series, Nvidia very likely does not have their usual margins either. These are expensive cards to produce.

Both are scalable to a very large degree. Neither is inherently slower or faster. The specific implementation of an interconnect is matched to some optimization point for the use case.
Can I quote you on that? DirectX is very favourable towards Nvidia. Thread workgroup sizes 'match' Nvidia: two kernels to reach peak size. Need I say more?
 
AMD has three years to completely capitalise on Intel's failings before Intel can realistically catch up in the CPU space. Furthermore, AMD has an incredibly solid compute architecture in Vega, and a market that's happy to pay *only* a few thousand per card instead of tens of thousands, compared to the gaming market that keeps whinging for 'competition' but, even when something faster, cheaper, more efficient, or all three is available, still buys Nvidia.

AMD GPU enthusiasts, I wouldn't be expecting anything until mid 2019 at the earliest, and in all honesty, I wouldn't expect a remotely competitive architecture until 2020.
One year, realistically speaking; AMD needs to win in servers. AMD will eventually gain enough ground in desktops and notebooks, but they need the enterprise market.
 
Can I quote you on that? DirectX is very favourable towards Nvidia. Thread workgroup sizes 'match' Nvidia: two kernels to reach peak size. Need I say more?
Yeah, can you elaborate? Why is that favourable towards Nvidia?
 
AMD is winning on density. They still have features unsupported by DirectX, which could turn the tables. You are free to doubt that balance, but the case of GPU mining should provide pointers on how they really stack up against one another.

That's the problem: AMD likes to push things that are hard for developers to take advantage of. What happened to stuff like primitive shaders? It seems AMD likes to ditch proper support even before developers start using it.
 
That's the problem: AMD likes to push things that are hard for developers to take advantage of. What happened to stuff like primitive shaders? It seems AMD likes to ditch proper support even before developers start using it.
No, hardware journalism, such as Guru3D, took it upon themselves to disclaim the result - mind you, not anybody else's, their own result. We both know what happened to the Nvidia series when it was enabled...

Yeah, can you elaborate? Why is that favourable towards Nvidia?
2048 is two kernels of 1024; 2560 is not. You cannot have a third kernel without finishing the other two. Essentially, you are always at 80% of your peak, whether the code allows it or not.
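Taking the numbers in that post at face value (a 1024-thread DirectX workgroup cap, 2048 resident threads per Nvidia SM, 2560 per GCN CU), a minimal sketch of the arithmetic:

```python
# Back-of-the-envelope occupancy math for the post above.
# Assumptions (from the post, not verified here): DirectX caps a compute
# workgroup at 1024 threads; an Nvidia SM holds 2048 resident threads;
# a GCN CU holds 2560 resident threads (40 wavefronts x 64 lanes).

MAX_WORKGROUP = 1024

def peak_occupancy(resident_threads: int, workgroup: int = MAX_WORKGROUP) -> float:
    """Fraction of resident thread slots fillable with whole workgroups."""
    whole_groups = resident_threads // workgroup
    return whole_groups * workgroup / resident_threads

print(f"Nvidia SM (2048): {peak_occupancy(2048):.0%}")  # 100% - two full groups fit exactly
print(f"GCN CU   (2560): {peak_occupancy(2560):.0%}")   # 80%  - a third group does not fit
```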
 
Four stacks of HBM2 would have about 2TB/s bandwidth, right?
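For reference, a rough sketch of that math, assuming 1024-bit stacks at a 2.0 Gbps per-pin rate (a common HBM2 speed at the time; faster bins change the result), which lands nearer 1 TB/s than 2 TB/s:

```python
# Rough HBM2 aggregate bandwidth estimate.
# Assumptions: 1024-bit interface per stack, 2.0 Gbps per pin
# (2.4 Gbps for faster bins); actual products may differ.

def hbm2_bandwidth_gbs(stacks: int, pin_rate_gbps: float = 2.0, bus_bits: int = 1024) -> float:
    """Aggregate bandwidth in GB/s = stacks * bus width * pin rate / 8."""
    return stacks * bus_bits * pin_rate_gbps / 8

print(hbm2_bandwidth_gbs(4))                      # 1024.0 GB/s, ~1 TB/s at 2.0 Gbps
print(hbm2_bandwidth_gbs(4, pin_rate_gbps=2.4))   # ~1228.8 GB/s at 2.4 Gbps
```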
 
Seeing the specs, I suspect the 2080 will have a little over a 10% improvement over the 1080 Ti in general. That's probably why they defined a new benchmark and are trying to impress you with that. I hate the fact that they didn't go for 7 nm and are asking consumers to pre-pay for their immature technology; wait for the 30 series or Navi.
Turing is a new architecture with a completely different SM structure than Pascal; assuming they would scale similarly would be a mistake.

AMD is winning on density.
If theoretical specs mattered, AMD would be king.

You cannot expect to compete when developers won't utilise your hardware properly. Look at how long it took them to support specific feature sets in AMD hardware: right up until Nvidia launched their next generation. So it is pointless to argue when you are losing on time to market, however much you beat the competition on paper.
For the last 2-3 years there have been far more AMD partner games than Nvidia partner games, primarily due to consoles. The problem is not a lack of "utilization" but a lack of hardware improvements.

I think Navi will be an assembly of 4x enhanced Polaris cores at 7 nm, glued together on a single PCB. There have been hints of this gluing together since Vega emerged, and they have tons of experience from Ryzen. Or maybe smaller Vega cores.
Not for Navi, so not anytime soon.
 
If theoretical specs mattered, AMD would be king.


For the last 2-3 years there have been far more AMD partner games than Nvidia partner games, primarily due to consoles. The problem is not a lack of "utilization" but a lack of hardware improvements.
It is the same hardware since Evergreen (HD 5000) - the DirectX maximum thread count is still 1024.
It is the same hardware since the HD 6900 series - consoles have only just started harnessing EQAA for spatial-domain supersampling, as in Far Cry 4's CLUT rendering. We are still waiting on its integration with checkerboard rendering, which brings...
Checkerboard rendering - the hardware to render a rotated grid has been available since HD 5000. It only became widespread with consoles using the checkerboard pattern in Frostbite games.
 
Proper competition for NVIDIA! Yeah, we can dream...
 