
NVIDIA Underestimated AMD's Efficiency Gains from Tapping into TSMC 7nm: Report

I was told it was TSMC securing NV's orders, then it was AMD securing its own slots at TSMC, then it was AMD embracing 7 nm and 5 nm, yet NV is somehow still "leapfrogging" something with... 7 and 5 nm.

And if that's not confusing enough, add Samsung into the picture.

Could somebody who has been following the news on TPU decrypt who secured what and who leapfrogged whom? It all sounds very puzzling.


What efficiency? It has basically the same performance as Navi but uses about 30% more power. Not sure what they were going to get out of that.
It is also a 330 mm² chip with HBM2 versus a 250 mm² chip with GDDR6.

Thank god Raja left AMD and is now making Intel great again.
 
So basically the same thing then? In case you don't remember, Intel worked on CNL and ICL for a good 4-5 years; it's just that their 10 nm node development went even worse than the delays at 14 nm (yes, even 22 nm was delayed).
But it wasn't just node issues, it was a perfect storm of:

1) That ungodly Jim Keller doing magic with IPC
2) Chiplet design <= it's hard to overstate how disruptive the approach is
3) And, of course, third-party fabs beating Intel (and it doesn't look like Intel will ever regain the fab advantage)
 
I believe real men used to have fabs, now they are just TSMC customers like everyone else.
 
You can underclock and undervolt Navi 10, saving about 35% of the power consumption while losing only 5% of the performance.
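For what that tradeoff means in efficiency terms, here is a quick back-of-the-envelope check, using only the 35%/5% figures quoted above:

```python
# Perf/watt math for the quoted Navi 10 undervolt:
# lose 5% of the performance, save 35% of the power.
perf_ratio = 0.95   # 95% of stock performance
power_ratio = 0.65  # 65% of stock power

efficiency_gain = perf_ratio / power_ratio - 1
print(f"perf/watt improvement: {efficiency_gain:.0%}")  # roughly +46%
```

In other words, a 5%-for-35% trade is not a 30% efficiency win but closer to 46%, because the two percentages compound as a ratio.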

You can do the same, if not better, on the 2080 Ti. Undervolting is the shiz on just about any card, especially now that they are power-capped anyway.


Generally Nvidia is better at performance per watt and AMD is better at performance per die area...
 
I honestly don't know why there is even a discussion about efficiency between Vega and anything newer than Maxwell. It just wasn't close.

And as for Navi, it is a good product. It delivers comparable performance to Nvidia at $100 less, at the expense of extra power usage. It isn't as fast as Turing, but it is a great product for its purpose.

I truly believe that RDNA2 will be a rude awakening for Nvidia... And I believe AMD is going to put the pressure on even harder over the next few years.

I agree, I think Navi2 will be a pleasant surprise.
 
This is ironic considering you are doing the wrong math yourself!

The RX 5700 XT is around 25% faster than the V64 in games. The 5700 XT indeed has only 62.5% of the V64's CUs (40 vs 64), but it also has around a 76.9% higher clock speed (Strix V64 vs Strix 5700 XT), which can be attributed purely to the node. So overall the 5700 XT's architectural gains are not as big as you make them look.

Furthermore, Navi's efficiency becomes embarrassing when compared against the true competition, which is Nvidia's products. When products such as the RTX 2080 blow the 5700 XT out of the water in performance-per-watt metrics, and even the 1080 Ti on 16 nm outpaces it in those metrics, it's not hard to see why people are not impressed by Navi's efficiency!
There is a difference that should be noted: try running Navi at the same voltage as Turing for a more apples-to-apples comparison. The 5700 vanilla, which runs at around 1000 mV, is on par with or better than most Turing and all Pascal cards in perf/watt (except at 4K, where the big guns shine). The 5700 XT runs at 1200 mV stock; no Nvidia card runs that high to my knowledge, yet it's on par with Pascal and only slightly worse than Turing. Some Turing cards also seem to be overvolted and show much improved efficiency with some undervolting.


Some further testing with my 5700 XT (avg consumption, max temps):
1605 MHz @ 850 mV: 118 fps, 90 W, 1900 rpm, 64/68 °C
1700 MHz @ 900 mV: 122 fps, 104 W, 2000 rpm, 69/74 °C
1800 MHz @ 940 mV: 127 fps, 114 W, 2300 rpm, 67/73 °C
1900 MHz @ 1000 mV: 131 fps, 131 W, 2400 rpm, 70/80 °C
2000 MHz @ 1060 mV: 135 fps, 149 W, 2100 rpm, 79/92 °C
2050 MHz @ 1100 mV: 136 fps, 158 W, 2200 rpm, 75/89 °C
Stock: 2050 MHz @ 1200 mV: 133 fps, 176 W, 2200 rpm, 88/104 °C

Poor binning ends up with a poor stock perf/watt.
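The binning point is easy to see if you turn the measurements above into fps per watt; a minimal script with the numbers copied straight from the table:

```python
# fps/W for each 5700 XT setting listed above; stock comes out worst.
settings = [
    ("1605MHz @ 850mV",        118,  90),
    ("1700MHz @ 900mV",        122, 104),
    ("1800MHz @ 940mV",        127, 114),
    ("1900MHz @ 1000mV",       131, 131),
    ("2000MHz @ 1060mV",       135, 149),
    ("2050MHz @ 1100mV",       136, 158),
    ("stock 2050MHz @ 1200mV", 133, 176),
]
for name, fps, watts in settings:
    print(f"{name:>24}: {fps / watts:.2f} fps/W")
```

Stock lands at about 0.76 fps/W versus roughly 1.31 fps/W at the lowest setting.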

Pascal, in my opinion, was the most impressive; Turing has not improved perf/watt that much, as going from 16 nm to 12 nm did little. I have much higher hopes for Ampere.

Very similar to my findings with the 5700 XT. If I trade 11% performance from stock (2050 MHz @ 1200 mV) and undervolt/underclock to 1610 MHz @ 850 mV, consumption is cut by about 50%, from 176 W to 90 W average in the Heaven benchmark.

I have tested undervolting on several Pascal cards, and they are better binned in terms of voltage vs. frequency. For Navi, the 5700 vanilla is very well optimized; the XT, on the other hand, is heavily overvolted. Several of the high-end Turing cards seem to suffer from this as well, but not as badly as the 5700 XT, since Turing generally maxes out around 1-1.1 V.
 
Why are you talking as if undervolting were a Navi-exclusive feature, though? You can undervolt any GPU. There is nothing "apples to apples", as you put it, in undervolting one GPU and leaving the others stock. Undervolt them all. Turing isn't any better binned than Navi and also runs at way too high a voltage.

For example, a 2080 Ti undervolted to 0.925 V at 1995 MHz will stay slightly above the stock 260 W power limit while being only 5-6% slower than a maximum 1.093 V OC that isn't fully satisfied even with a custom 380 W limit. That max OC is 16% faster than stock, and this particular undervolt is 10% faster. Faster than stock, not slower. These are the real gains from undervolting: either a 10% performance gain while staying close to the reference power limit, or, seen the other way, shaving 100 W (almost a third of the power) off the maximum OC at only a 5-6% performance loss.
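A quick sanity check of those figures; the ~270 W undervolt number below is my reading of "slightly above the stock 260 W limit", not a measured value:

```python
# Relative performance figures from the post, stock = 100.
max_oc_perf = 116  # "16% faster than stock"
uv_perf = 110      # "10% faster than stock"
uv_watts, oc_watts = 270, 380  # undervolt vs. max-OC board power, approx.

print(f"perf lost vs max OC:   {1 - uv_perf / max_oc_perf:.1%}")  # ~5%
print(f"power saved vs max OC: {1 - uv_watts / oc_watts:.1%}")    # ~29%, close to 1/3
```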

What you are showing here is not only going backwards in performance, below stock, but your results are also skewed because you are thermally limited; that's why you gained performance with the first two undervolts compared to stock: you escaped thermal throttling and stabilized the clock. Also, is your fan speed left on auto, since it fluctuates from test to test? You are testing the cooler here, not the GPU.

But even assuming everything was okay, the last reasonable setting from a performance standpoint is 2000 MHz @ 1060 mV, and it saved you about 15% of the power. Any GPU can do that; this is nothing specific to Navi. If you undervolted all GPUs and made the efficiency graph again, it wouldn't change much.

I just don't understand this whole idea of justifying one GPU by tweaking it to its best and then comparing it against the rest at stock. Give the same treatment to all and then compare.
 
Why are you talking like if undervolting was Navi exclusive feature though? You can undervolt any GPU.
I like the rest of your post; good to see Nvidia fans moving past complacency. Yet undervolting's cost is roughly linear while overvolting's is not: dynamic power scales roughly with voltage squared times clock, so the last bit of overvolting costs disproportionately. That is why you get outsized undervolting results from a GPU that ships overvolted at stock.
It is also the case that our good friend has made an honest-to-goodness call.
I don't know the exact formula, but something tells me the voltage inflection point where resistance drops also marks its thermal-conduction low point.
1605 MHz @ 850 mV: 118 fps, 90 W, 1900 rpm, 64/68 °C
1700 MHz @ 900 mV: 122 fps, 104 W, 2000 rpm, 69/74 °C
1800 MHz @ 940 mV: 127 fps, 114 W, 2300 rpm, 67/73 °C
1900 MHz @ 1000 mV: 131 fps, 131 W, 2400 rpm, 70/80 °C
2000 MHz @ 1060 mV: 135 fps, 149 W, 2100 rpm, 79/92 °C
2050 MHz @ 1100 mV: 136 fps, 158 W, 2200 rpm, 75/89 °C
Stock: 2050 MHz @ 1200 mV: 133 fps, 176 W, 2200 rpm, 88/104 °C
See here. Can't we formulate it? I disagree that there is no conclusive answer to this question.
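One way to formulate it, under the usual first-order assumption that dynamic power scales with V² × f on top of a fixed board baseline. The constants below (P0 ≈ 40 W, K ≈ 46) are my rough by-hand fit to the 5700 XT numbers quoted above, not official figures:

```python
# Model: total board power ≈ P0 + K * V^2 * f  (V in volts, f in GHz).
# P0 and K are rough fitted constants, not measured values.
P0, K = 40.0, 46.0

def predicted_power(clock_ghz: float, volts: float) -> float:
    """Estimate board power from core clock and voltage."""
    return P0 + K * volts * volts * clock_ghz

measured = [  # (clock GHz, core V, measured W) from the figures above
    (1.605, 0.850, 90), (1.700, 0.900, 104), (1.800, 0.940, 114),
    (1.900, 1.000, 131), (2.000, 1.060, 149), (2.050, 1.100, 158),
    (2.050, 1.200, 176),
]
for f, v, p in measured:
    print(f"{f:.3f} GHz @ {v:.2f} V: measured {p:3d} W, model {predicted_power(f, v):.0f} W")
```

The model tracks every measurement to within about 6 W, which shows why the 1200 mV stock setting costs so much: the V² term, not the extra 50 MHz, dominates.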

This is what I could find out:
"And it often can be observed with the SEB phenomenon. Along the ion strike path, carriers will increase the local temperature, which would lead to conduction increasing. This would further result in the current and heat increasing."
Why don't we discuss resistance drops as an abstract entity, whether passive or not?
I'm against this oppression of scientific conduct repressing the enrollment of resistivity in passing context. There is silent discrimination against the titular ohm.
 
When I say apples to apples I mean voltages. See my comments about the 5700 vanilla, which is voltage-wise close to Turing and shows better perf/watt than Turing.

As I said in the post, Turing can also be undervolted with impressive results, mostly on the high-end models from what I have read. None of my results were thermally limited or throttling, but at stock it runs into the power limit. All settings except stock use a custom fan curve. I also tested auto; below 1000 mV it barely made a difference compared to my curve. At 1100 mV and up my custom curve had better thermals but the same performance.
 
This is some good BS! As can be seen here https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/29.html, 5700 vs V56 is around a 43% efficiency difference and 5700 XT vs V64 around 37%... nowhere near the claimed 60%!
Percentage comparisons are ratios relative to a base; they aren't absolute numbers you can compare directly the way you think. Friendly advice: learn some simple maths so nobody can make you look ignorant or foolish.
 
LMAO, you don't even understand what you are reading, pal!

Perf/watt charts such as this one https://tpucdn.com/review/amd-radeon-rx-5700-xt/images/performance-per-watt_2560-1440.png are based on a reference GPU and can therefore be directly compared against that reference. For instance, here is the 5700 reference chart https://tpucdn.com/review/amd-radeon-rx-5700/images/performance-per-watt_2560-1440.png, and guess what: the 5700 is 37% more efficient than the V56, same as in the 5700 XT reference chart... nowhere near your ludicrous 60% claim!

What would be the use of making such charts if you couldn't directly compare the numbers? Speaking of friendly advice: start by applying it to yourself, because up until now you are the only one looking like a total ignorant fool!
 
Simple lesson for starters.

When product A costs $200 and product B costs $300, product A is 33% cheaper than product B (1 - 200/300 = 1 - 0.67 = 0.33 = 33%),
and product B is 50% more expensive than product A (300/200 - 1 = 1.5 - 1 = 0.5 = 50%). The base is the reference point against which you compare the other.

So, to get the increase in efficiency from the V56 to the RX 5700, you go through: 100/63 - 1 = 1.59 - 1 = 0.59 = 59%. To get how much less efficient the V56 is vs. the RX 5700, you go: 1 - 63/100 = 1 - 0.63 = 0.37 = 37%.

You're welcome.
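The same two directions of the ratio in code, using the chart value of the V56 at 63% of the RX 5700's perf/watt:

```python
v56, rx5700 = 63, 100  # relative perf/watt from the TPU chart

gain = rx5700 / v56 - 1     # how much MORE efficient the RX 5700 is
deficit = 1 - v56 / rx5700  # how much LESS efficient the V56 is

print(f"RX 5700 vs V56: +{gain:.0%}")     # +59%
print(f"V56 vs RX 5700: -{deficit:.0%}")  # -37%
```

Same underlying ratio, two different baselines, two different percentages.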
 
When I say apples to apples I mean voltages.

Voltages mean nothing across nodes. Is it fair to compare DDR4, which runs at 1.2 or 1.35 V, to DDR3, which runs at 1.5 to 1.65 V?

Is that apples to apples?
 
RAM voltage doesn't behave the same way, since one of the manufacturers' goals is lower voltage. With CPUs and GPUs, neither Intel, AMD, nor Nvidia seems very interested in running at as low a voltage as possible. Since 16, 14, 12, and 7 nm run at basically the same voltages, then yes. AMD has run its voltages at 1-1.2 V on both 14 nm and 7 nm. Nvidia has generally run its cards at 0.9-1.1 V on 16 nm, 14 nm, and 12 nm alike. It's the same story with CPUs and node shrinking as well. What happens when Intel or Nvidia gets to 7 nm, time will show, but I bet it's the same story again.
 

Dude, you do realise that the math you are doing has already been done once to make the perf/watt charts, don't you? Lesson being: you have no clue what you read...

You're welcome!
 
No hard feelings at all. There isn't any point in continuing to discuss this topic with you. Anyone else who lacked the knowledge but was willing to understand has done so by now. :toast:
 