
Sapphire Reps Leak Juicy Details on AMD Radeon Navi

Another leak was made?


2560 SP (64 SP for each CU):
(attached screenshot)
 
The Linux radeonsi driver got updates, so it's still GCN for sure.
As sure as Turing still being CUDA, no shit, Watson.
 
As sure as Turing still being CUDA, no shit, Watson.
Nothing to do with that. Nvidia always keeps compatibility with older archs. Calls are still Tesla or Maxwell even on Turing; that's why Mesa can still update nouveau.
Plus, CUDA is nothing more than compute with nice software support on top of it.
I hope you are not confusing the marketing name for shaders with the architecture of the card.
 
Sad to see that AMD can't bring a worthy opponent for the 2080 Ti.

Sad to see you've been in a cave for a year, when everyone and their mother knows AMD is targeting the high end with their next-gen architecture next year. Oh wait, maybe you didn't read this post, lol.

I don't like the prices. If this is true, AMD should reduce them.

For 4K.
(attached chart)


"I don't like the prices; AMD should reduce prices so I can buy Nvidia" would be the more accurate complaint. AMD wins nothing by going into a price war; they know how that works, lol. So they would rather make more on each one they sell.
 
... only five days to go now. I've got to say this waiting period is rather frustrating. Doesn't help that I'm constantly checking tech forums and bottom-barrel rumor sites (yes, I have sinned and read wccftech, and yes, I am ashamed of myself) instead of grading papers like I really ought to be doing. Oh well, the Computex keynote is much earlier than my deadline XD
 
AMD wins nothing by going into a price war; they know how that works, lol. So they would rather make more on each one they sell
AMD reduced RX 500 series prices due to mining (not as a pricing strategy). I don't believe they know, because they released the Radeon VII for 700-800 dollars (it performs almost the same as the RTX 2080 at 4K, at the same price) and it has 16 GB of HBM VRAM. My opinion is that if it had 8 GB of GDDR6 memory for 500 dollars, it would have a good price/performance ratio.
 
That ... is concerning. If a 4096 SP / 64 CU VII with 16 GB of HBM2 barely beats out the RTX 2070 (and comes close to or beats the 2080 in select titles), a 2560 SP / 40 CU setup with 8 GB of >=10 Gbps GDDR6 will ... not beat the RTX 2070. Unless there's some sort of minor miracle happening. The news post about possibly doubling up on the front end might be promising, but nonetheless ... worrying.

Moving from 16 CUs per SE in the VII / V64 / Fiji to five? That's a drastic change for sure.
(Image: Vega 20 slide from David Wang's Next Horizon presentation; image source)
But if that was "all" that was needed to dramatically improve per-CU performance (not saying this is a minor change), why wasn't this done years ago?
 
Nothing to do with that.
Sure, sure.
That is why even Nvidia refers to its own cards in terms of "number of CUDA cores".

#omgcudais11yearsold
#turingstillcuda

Stop the stupidity fest. GCN is an instruction set, as is CUDA. What is inside the silicon, you have no idea, but it's certainly not what it was 7 years ago.

Even Vega VII, which is supposed to be a mere die shrink, beats Vega 64 at the same clock/mem speed.
(attached screenshot)



It's so easy to manipulate by just picking a different set of games; here a 19% advantage becomes 9%. How come:

(attached screenshots)


And the power consumption:

(attached screenshot)
 
Sure, sure.
That is why even Nvidia refers to its own cards in terms of "number of CUDA cores".

Stop the stupidity fest. GCN is an instruction set, as is CUDA. What is inside the silicon, you have no idea, but it's certainly not what it was 7 years ago.
AMD's equivalent term to "CUDA core" is "stream processor". This is all marketing; both mean the same basic thing.
CUDA is an API. Its counterpart/competitor is OpenCL.

As far as they have publicly said, Nvidia does not have the same kind of stable-ish ISA for their GPUs that AMD has with GCN. The closest thing to a usable ISA for Nvidia GPUs seems to be PTX, which is strictly speaking not an ISA but, in both functionality and nature, more of a middleware (a virtual machine) between the actual ISA and the API.

Even Vega VII, which is supposed to be a mere die shrink, beats Vega 64 at the same clock/mem speed.
It only beats Vega 64 there due to greater memory bandwidth. In your screenshot, the 90 MHz (6%) core frequency difference can be measurable, and the 300 MHz deficit in memory speed does not make up for the twofold difference in memory bus width (as they note, Radeon VII still has 45% more memory bandwidth at these settings). By the way, the difference between 84.4 and 81.5 is 3.5%, which is less than the clock speed difference on the core.

The source for the screenshots is Computerbase's Radeon VII coverage:
 
That ... is concerning. If a 4096 SP / 64 CU VII with 16 GB of HBM2 barely beats out the RTX 2070 (and comes close to or beats the 2080 in select titles), a 2560 SP / 40 CU setup with 8 GB of >=10 Gbps GDDR6 will ... not beat the RTX 2070. Unless there's some sort of minor miracle happening. The news post about possibly doubling up on the front end might be promising, but nonetheless ... worrying.

Moving from 16 CUs per SE in the VII / V64 / Fiji to five? That's a drastic change for sure.
(Image: Vega 20 slide from David Wang's Next Horizon presentation; image source)
But if that was "all" that was needed to dramatically improve per-CU performance (not saying this is a minor change), why wasn't this done years ago?
Remember that Fiji/Vega (well, GCN in general) have a problem with underutilization, so if they finally addressed that problem, they can do a lot more with less. Sony wouldn't expect any less, so it was a priority from day one.
 
AMD's equivalent term to "CUDA core" is "stream processor". This is all marketing; both mean the same basic thing.
CUDA is an API. Its counterpart/competitor is OpenCL.

As far as they have publicly said, Nvidia does not have the same kind of stable-ish ISA for their GPUs that AMD has with GCN. The closest thing to a usable ISA for Nvidia GPUs seems to be PTX, which is strictly speaking not an ISA but, in both functionality and nature, more of a middleware (a virtual machine) between the actual ISA and the API.

It only beats Vega 64 there due to greater memory bandwidth. In your screenshot, the 90 MHz (6%) core frequency difference can be measurable, and the 300 MHz deficit in memory speed does not make up for the twofold difference in memory bus width (as they note, Radeon VII still has 45% more memory bandwidth at these settings). By the way, the difference between 84.4 and 81.5 is 3.5%, which is less than the clock speed difference on the core.

The source for the screenshots is Computerbase's Radeon VII coverage:

They are comparing TFLOPS vs TFLOPS. Radeon VII is not a full chip (60 CU), so 2 × 60 × 64 × 1.49 GHz = 11.443 TFLOPS vs 2 × 64 × 64 × 1.4 GHz = 11.469 TFLOPS.

That said, if Navi really is only a 40 CU chip, it will have to be clocked really high, or perf/TFLOPS will have to improve a lot, to get near the alleged RTX 2070 performance. I have a hard time believing that number is real, which makes the whole rumor a bit baseless.
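For anyone who wants to check these numbers, the arithmetic above is GCN's theoretical FP32 throughput: 2 FLOPs per FMA × CU count × 64 shaders per CU × clock. A minimal sketch (the function name is mine; the CU counts and clocks are the figures quoted in this thread, not confirmed specs):

```python
# Theoretical FP32 throughput for a GCN GPU:
# 2 FLOPs per FMA x CUs x 64 shaders per CU x clock (GHz), in TFLOPS.
def gcn_tflops(cus: int, clock_ghz: float) -> float:
    return 2 * cus * 64 * clock_ghz / 1000

# Radeon VII (60 CU @ ~1.49 GHz) vs Vega 64 (64 CU @ ~1.4 GHz)
print(round(gcn_tflops(60, 1.49), 3))  # 11.443
print(round(gcn_tflops(64, 1.4), 3))   # 11.469
```

So at those clocks the two cards land within 0.3% of each other in raw compute, which is why the comparison is close to TFLOPS-for-TFLOPS.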
 
They are comparing TFLOPS vs TFLOPS. Radeon VII is not a full chip (60 CU), so 2 × 60 × 64 × 1.49 GHz = 11.443 TFLOPS vs 2 × 64 × 64 × 1.4 GHz = 11.469 TFLOPS.
Good catch, I forgot about that. In that case, yes, the clock speed difference should just make up for the fewer execution units on Radeon VII. The higher clock speed probably has minor positive effects on the rest of the GPU, though.
That said, if Navi really is only a 40 CU chip, it will have to be clocked really high, or perf/TFLOPS will have to improve a lot, to get near the alleged RTX 2070 performance. I have a hard time believing that number is real, which makes the whole rumor a bit baseless.
40 CU is still 2560 shaders, 11% more than the RX 580, and the lower power consumption/higher clocks from 7 nm should do the trick. Assuming similar results as Radeon VII, 40 CU at 1800 MHz would bring this GPU to 9.2 TFLOPS, right alongside Vega 56 in terms of performance. I suppose it would match the rumored PlayStation 5 GPU as well. Hoping AMD learned from shrinking Vega: getting this hypothetical 40 CU Navi to 2 GHz would bring theoretical compute power to 10.2 TFLOPS, at Vega 64 levels. Add the improved architecture and it should be doable to put it between the RTX 2060 and RTX 2070.
 
Good catch, I forgot about that. In that case, yes, the clock speed difference should just make up for the fewer execution units on Radeon VII. The higher clock speed probably has minor positive effects on the rest of the GPU, though.

40 CU is still 2560 shaders, 11% more than the RX 580, and the lower power consumption/higher clocks from 7 nm should do the trick. Assuming similar results as Radeon VII, 40 CU at 1800 MHz would bring this GPU to 9.2 TFLOPS, right alongside Vega 56 in terms of performance. I suppose it would match the rumored PlayStation 5 GPU as well. Hoping AMD learned from shrinking Vega: getting this hypothetical 40 CU Navi to 2 GHz would bring theoretical compute power to 10.2 TFLOPS, at Vega 64 levels. Add the improved architecture and it should be doable to put it between the RTX 2060 and RTX 2070.

Well, it has, but GN's balls-to-the-wall OC Vega 56 (2 × 56 × 64 × 1.71 GHz = 12.257 TFLOPS) only roughly equals a stock RTX 2070. For a 40 CU card that would mean 12,257 / (2 × 40 × 64) = 2.39 GHz core clock. Of course, performance between the RTX 2060 and RTX 2070 is very doable with that CU count, but the Sapphire rep speaks of above-RTX 2070 performance.
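The required-clock figure can be reproduced by inverting the same 2 × CUs × 64 × clock throughput formula. A rough sketch (function name is mine; the 12.257 TFLOPS target is the overclocked Vega 56 number quoted above, not an official spec):

```python
# Invert GCN's theoretical throughput formula (2 FLOPs x CUs x 64 shaders x clock)
# to find the core clock (GHz) a given CU count needs for a target TFLOPS figure.
def required_clock_ghz(target_tflops: float, cus: int) -> float:
    return target_tflops * 1000 / (2 * cus * 64)

# Matching an OC Vega 56's ~12.257 TFLOPS with only 40 CUs:
print(round(required_clock_ghz(12.257, 40), 2))  # 2.39
```

That is, matching a heavily overclocked Vega 56 with 40 CUs would need roughly 2.4 GHz at the same perf/TFLOPS, which is well above anything GCN has shipped at.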
 
Putting the best foot forward is common for marketing purposes: Sapphire and its sales rep boost what AMD claims in slides a little, and we get the described result. Hoping I am wrong, but this sounds eerily similar to the situation with the Vega and Radeon VII launch slides. There will undoubtedly be games where that hypothetical 40 CU GPU at 2 GHz will exceed the RTX 2070 in performance, and that will be enough. I can name some right now: World War Z, Resident Evil 2, The Division 2, DiRT 4, Strange Brigade, Forza Horizon 4...
 
Putting the best foot forward is common for marketing purposes: Sapphire and its sales rep boost what AMD claims in slides a little, and we get the described result. Hoping I am wrong, but this sounds eerily similar to the situation with the Vega and Radeon VII launch slides. There will undoubtedly be games where that hypothetical 40 CU GPU at 2 GHz will exceed the RTX 2070 in performance, and that will be enough. I can name some right now: World War Z, Resident Evil 2, The Division 2, DiRT 4, Strange Brigade, Forza Horizon 4...

Well, yeah, that is true. There have always been titles which just run better on one IHV's architecture.
 
Remember that Fiji/Vega (well, GCN in general) have a problem with underutilization, so if they finally addressed that problem, they can do a lot more with less. Sony wouldn't expect any less, so it was a priority from day one.
Oh, absolutely, I tried to make that point clear in my post, but I can see that it got a bit muddled. Still, there's a very good (and as yet unknowable) question remaining: whether rebalancing will actually help utilization, or whether this is a more fundamental architectural issue.
 
Sure, sure.
That is why even Nvidia refers to its own cards in terms of "number of CUDA cores".

#omgcudais11yearsold
#turingstillcuda

Stop the stupidity fest. GCN is an instruction set, as is CUDA. What is inside the silicon, you have no idea, but it's certainly not what it was 7 years ago.

Even Vega VII, which is supposed to be a mere die shrink, beats Vega 64 at the same clock/mem speed.
(attached screenshot)



It's so easy to manipulate by just picking a different set of games; here a 19% advantage becomes 9%. How come:

(attached screenshots)

And the power consumption:

(attached screenshot)
CUDA is a fancy marketing name for compute; it's the same idea as OpenCL, the Open Computing Language. And compute shaders are part of the pipeline of all modern cards.
All I've said are facts from the Mesa driver developers. You don't believe me? Go complain to them.
 
Once again AdoredTV was wrong. How surprised I am. Also, if the PS5/Xbox 2 have an RTX 2070-equivalent GPU and retail for 400€/500€, they could be great products for the price, tbh. Will be interesting to see what happens in the next 12 months.
Except AdoredTV specifically said the prices would likely change and that everyone should be wary of them. Unless the 7 nm process is giving really bad yields (and Zen 2 suggests otherwise), I can only see this as AMD increasing their margins due to the market stagnation.
 
Sad to see you've been in a cave for a year, when everyone and their mother knows AMD is targeting the high end with their next-gen architecture next year. Oh wait, maybe you didn't read this post, lol.

We will see when they release their Navi GPUs :) and whether they can beat the RTX 2080 Ti or not. Somehow there are doubts about that. AMD fanboy detected. And if you hope that they will beat the RTX 2080 Ti only next year... that is just sad

 