
AMD 7nm "Vega" by December, Not a Die-shrink of "Vega 10"

Turing is a new architecture with a completely different SM structure than Pascal; assuming they would scale similarly would be a mistake.
Based on what we know so far, it should be unchanged from Volta, and Volta's performance scaled linearly enough versus Pascal.
 
If AMD launched Navi today, they could easily knock Nvidia off the charts.
Without Infinity Fabric and 7 nm it's not possible, as die size will be an issue. I could see 4x dies (6144 cores) to challenge the Titanxxx.

But in the end they have to launch something; otherwise, dream on.
Seriously, how lazy are they :( SAD
 
Four stacks of HBM2 would have about 2 TB/s of bandwidth, right?
Probably more like 1.2 TB/s. Vega 10 with two stacks is about 640 GB/s if memory serves. Very doubtful they're changing from HBM2.
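The arithmetic behind those figures is easy to sanity-check. A minimal sketch, assuming the JEDEC 1024-bit interface per HBM2 stack and a hypothetical 2.5 Gbps per-pin rate (real Vega 10 runs its HBM2 slower, around 1.89 Gbps, which is where the ~484 GB/s spec comes from); `hbm2_bandwidth_gbps` is a made-up helper name:

```python
# Back-of-envelope HBM2 bandwidth. Each stack has a 1024-bit interface
# (per JEDEC HBM); per-pin data rate varies by SKU, so 2.5 Gbps here is
# an illustrative round number, not a measured Vega figure.
def hbm2_bandwidth_gbps(stacks, pin_rate_gbps=2.5, bus_width_bits=1024):
    """Aggregate bandwidth in GB/s: stacks * bus width * pin rate / 8 bits per byte."""
    return stacks * bus_width_bits * pin_rate_gbps / 8

print(hbm2_bandwidth_gbps(2))  # two stacks  -> 640.0 GB/s
print(hbm2_bandwidth_gbps(4))  # four stacks -> 1280.0 GB/s, i.e. ~1.3 TB/s
```

So four stacks land near the 1.2-1.3 TB/s ballpark, well short of 2 TB/s at realistic pin rates.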
 
If AMD launched Navi today, they could easily knock Nvidia off the charts.
Even if they did launch it now at 7 nm (not possible yet), it would still barely exceed Vega. Navi is not going to be a high-end chip.
 
Yes, but that's multiple chips on one package, not a single big chip like Vega 20. That's what I meant; Nvidia has a more efficient way to connect those.
You do realize that IF can attach an EPYC to a Vega 20, or even to a Xilinx FPGA with a little collaboration between the companies. Considering that AMD and Xilinx have been sharing booth space at various tech events, the collaboration is well underway.

On the last conference call Lisa Su confirmed that Rome and Vega 20 will be offered as a combined package to customers using the IF interconnect.
 
3 more events to go for AMD this year ... let's see.
 
3 more events to go for AMD this year ... let's see.
Most of those, if not all 3, will probably be about Zen 2 (Epyc), Vega Instinct, AI learning, etc., and maybe some DirectX ray tracing. I doubt AMD will have anything worthwhile for consumers until 7 nm matures. And that's in 2019.
 
If, IF, AMD dares to release a GPU, be it December, January, or later, prepare for an over-400 W reference TDP, even on a 7 nm process.

But I'd even bet that in December 2018 AMD releases nothing but the old Vega on 7 nm. Junk.

In 2020, when Intel enters the GPU market, it's the end of AMD GPUs forever.
 
The biggest problem with Vega 20 is that it doesn't offer much for consumers; it's intended for professional use, with fp64 and massive memory bandwidth. Volumes on 7 nm will be very limited initially, and wasting precious dies on consumer products with minimal gains over Vega 10 sounds like a strange move.
 
I like all the sad comments here. They prove one simple thing: people have money to throw at stuff they don't need. I mean, look at how many people own a fast Pascal card that can play everything at either 1080p or 4K at 60 FPS and still complain about high prices for Turing cards. Why do you even need something better if you can already play everything? To feed the never-ending obsession with having the best/latest hardware.
 
This is going to be a turning point. If the 20 series is successful and people pay up, it will likely mark the end of any effort AMD makes to compete in the high-end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want the same margins, they'll have to ask the same prices. Either way, the consumer gets screwed.

You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.

People need to STOP paying high prices for Nvidia GPUs. Doing so wrecks the PC gaming market.

Consumers should be demanding cheaper prices across the board. And that ain't happening. People seem to be OK with bending over for Nvidia.
 
Many were hoping AMD would produce something good so they could go and buy cheaper Intel and Nvidia hardware. For me, I am glad to see AMD throwing R&D money where it will make money, not where people would end up laughing in its face, saying "Thank you for helping us buy cheaper products from your competitors" and adding that "AMD hardware is for the poor".

It's obvious that we are in a "Bulldozer" era for GPUs, so Nvidia will go unchallenged for the next 2-3 years. You want better GPUs from AMD and better competition? Support Ryzen as an option when people ask you for hardware advice.
 
You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.

As of right now, I have no inclination to purchase the 2080 Ti even though I can afford it. The purchase would make me feel absolutely disgusted with myself. The second half of that principle is that I don't want to be one of the consumers who, by purchasing one, reinforce Nvidia's perception that these price ranges are acceptable.

I'm not sure how I'm going to upgrade from my 980 Ti, but the position Nvidia is putting me in is BS.
 
Most of those, if not all 3, will probably be about Zen 2 (Epyc), Vega Instinct, AI learning, etc., and maybe some DirectX ray tracing. I doubt AMD will have anything worthwhile for consumers until 7 nm matures. And that's in 2019.
True, AMD is such a hopeless competitor; they don't care about regular consumers.

I like all the sad comments here. They prove one simple thing: people have money to throw at stuff they don't need. I mean, look at how many people own a fast Pascal card that can play everything at either 1080p or 4K at 60 FPS and still complain about high prices for Turing cards. Why do you even need something better if you can already play everything? To feed the never-ending obsession with having the best/latest hardware.
True, but not every game, especially at 4K. 120 FPS will be the end point for me.

People need to STOP paying high prices for Nvidia GPUs. Doing so wrecks the PC gaming market.

Consumers should be demanding cheaper prices across the board. And that ain't happening. People seem to be OK with bending over for Nvidia.

Don't worry, the midrange cards still hold the top 4 places in the Steam survey list, especially the likes of the 1050 Ti. And do you think they will have that much quantity to sell? I mean, look at the die size.


If, IF, AMD dares to release a GPU, be it December, January, or later, prepare for an over-400 W reference TDP, even on a 7 nm process.

But I'd even bet that in December 2018 AMD releases nothing but the old Vega on 7 nm. Junk.

In 2020, when Intel enters the GPU market, it's the end of AMD GPUs forever.
Nah, if they dare, which I doubt as well: 7 nm is incredibly efficient, as much as 60% or more, plus the die size will be halved, so they can increase the transistor count. So no 350+ W, unless they nonsensically overclock it like Vega 64.
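A rough back-of-envelope check of that efficiency claim, assuming the oft-quoted ~60% power reduction at iso-performance for 7 nm versus 14 nm (a foundry marketing figure, not a guarantee for a real GPU) and Vega 64's 295 W reference board power; `scaled_power` is a made-up helper name:

```python
# Illustrative power scaling only: foundry figures describe identical logic
# at identical clocks, and real designs spend the savings on more
# transistors or higher clocks instead.
def scaled_power(base_watts, power_reduction):
    """Power at the new node for the same chip at the same performance."""
    return base_watts * (1 - power_reduction)

vega64_board_power = 295  # Vega 64 reference board power in watts

same_chip = scaled_power(vega64_board_power, 0.60)
print(same_chip)          # same chip on 7 nm: ~118 W
print(same_chip * 2)      # even a doubled transistor budget: ~236 W, under 295 W
```

Under those (optimistic) assumptions, a 400 W reference TDP would indeed take deliberate overclocking rather than the node itself.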
 
And AMD, once again, leaves an entire segment to Nvidia for a third generation in a row.

AMD should just sell Radeon at this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody who can actually produce decent GPUs. Vega was a year late and many watts too hot, and it's about to get eclipsed by a new generation of GPUs from Nvidia.

Part of the reason they aren't doing the highest end is that they are using GPUs elsewhere: consoles, segments below the high end, pro GPUs. So why would they sell Radeon? Why on earth would they sell just because they aren't faster than a 1080 Ti?
 
Seems like I am among the rare few who are happy with this news. Going with separate dies for compute and gaming will keep the GPUs good at both jobs, which AMD wasn't able to do earlier because of its economic situation. Now things are different, as they have money again. Hopefully their GPUs from now on will be not just jacks of all trades but masters of the specific fields for which they are made.
 
Seems like I am among the rare few who are happy with this news. Going with separate dies for compute and gaming will keep the GPUs good at both jobs, which AMD wasn't able to do earlier because of its economic situation. Now things are different, as they have money again. Hopefully their GPUs from now on will be not just jacks of all trades but masters of the specific fields for which they are made.

Oh yeah, this is good. I said this months ago but didn't even realize this was the indication. They've needed to split the two for a while so they could address the issues with their chips. All that compute power and still not pushing out the pixels like they should.

Maybe they get both subsidized: Sony and MS pay them to develop the gaming GPUs, and maybe Tesla and others the compute GPUs.
 
Features missing from AMD's arsenal that I know have great potential:
  • Shadow cache and primitive discard acceleration,
  • Rotated grid sampling and a coverage lookup table (virtual supersampling),
  • Rapid Packed Math.
Come on, FP16 throughput got a 30% upgrade in GCN 1.2, and we still haven't seen it used on PC; now it is up by a further 22% (Rapid Packed Math). Trust me, we won't see any highlights unless Microsoft pushes the standard, not the vendors themselves.
 
Many were hoping AMD would produce something good so they could go and buy cheaper Intel and Nvidia hardware. For me, I am glad to see AMD throwing R&D money where it will make money, not where people would end up laughing in its face, saying "Thank you for helping us buy cheaper products from your competitors" and adding that "AMD hardware is for the poor".
If I could give you a +100 for this, I would.
 
Features missing from AMD's arsenal that I know have great potential:
Shadow cache and primitive discard acceleration,
Rotated grid sampling and a coverage lookup table (virtual supersampling),
Rapid Packed Math.

Come on, FP16 throughput got a 30% upgrade in GCN 1.2, and we still haven't seen it used on PC; now it is up by a further 22% (Rapid Packed Math). Trust me, we won't see any highlights unless Microsoft pushes the standard, not the vendors themselves.
fp16, which AMD calls "Rapid Packed Math", should be one of the biggest no-brainers of them all. The theory is simple: supported cards (Vega, GP100, GV100) double their FPU throughput versus fp32. Most of the shader workload in games is what we call "fragment processing", which typically accounts for 60-80% of the workload. And the implementation is rather simple: just adjust the data types in the shaders and the code, and fp16 is still plenty for most games, even in HDR. In theory this could yield boosts of ~20-40% when not otherwise bottlenecked, but therein lies the problem: we all know that Polaris and Vega have plenty of FPU throughput already. When they struggle to saturate the resources they already have, effectively "adding" more resources will only yield marginal gains.

fp16 will eventually become the norm, but it wouldn't save AMD now.
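To make the "just adjust the data types" point concrete, here is a toy numpy sketch of what packed fp16 means at the bit level: two half-precision values fit in the 32 bits one fp32 value would occupy, so one 32-bit lane can carry two fp16 operations per issue on supporting hardware. This is not shader code, and `pack_half2`/`unpack_half2` are made-up helper names; it only demonstrates the packing, not the hardware speedup.

```python
import numpy as np

def pack_half2(a, b):
    """Pack two fp16 values into one 32-bit word (low half = a, high half = b)."""
    lo = np.float16(a).view(np.uint16)   # reinterpret the fp16 bits as an integer
    hi = np.float16(b).view(np.uint16)
    return np.uint32(lo) | (np.uint32(hi) << np.uint32(16))

def unpack_half2(word):
    """Recover the two fp16 values from a packed 32-bit word."""
    lo = np.uint16(word & np.uint32(0xFFFF)).view(np.float16)
    hi = np.uint16((word >> np.uint32(16)) & np.uint32(0xFFFF)).view(np.float16)
    return float(lo), float(hi)

packed = pack_half2(1.5, -2.0)
print(unpack_half2(packed))  # (1.5, -2.0): both values survive the round trip
```

Values like color channels survive fp16 fine; the precision concerns only bite for things like world-space positions, which is why the conversion is a per-shader judgment call rather than a global switch.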
 
fp16, which AMD calls "Rapid Packed Math", should be one of the biggest no-brainers of them all. The theory is simple: supported cards (Vega, GP100, GV100) double their FPU throughput versus fp32. Most of the shader workload in games is what we call "fragment processing", which typically accounts for 60-80% of the workload. And the implementation is rather simple: just adjust the data types in the shaders and the code, and fp16 is still plenty for most games, even in HDR. In theory this could yield boosts of ~20-40% when not otherwise bottlenecked, but therein lies the problem: we all know that Polaris and Vega have plenty of FPU throughput already. When they struggle to saturate the resources they already have, effectively "adding" more resources will only yield marginal gains.

fp16 will eventually become the norm, but it wouldn't save AMD now.
Perhaps you are being taken for a ride by zero-risk bias. It is not as easy as you think. The short shaders are ganged into a longer shader, like VLIW packing, but in the actual pipeline; VLIW doesn't occur on the same lane.
 
If, IF, AMD dares to release a GPU, be it December, January, or later, prepare for an over-400 W reference TDP, even on a 7 nm process.

But I'd even bet that in December 2018 AMD releases nothing but the old Vega on 7 nm. Junk.

In 2020, when Intel enters the GPU market, it's the end of AMD GPUs forever.

Seriously, you sound so ignorant!
 