# RTX 2080 memory bandwidth issue



## dj-electric (Sep 21, 2018)

Hi w1z, seems like this RTX 2080 turned into an 8800GT


----------



## bug (Sep 21, 2018)

Maybe it deactivates bandwidth when it doesn't need it? 
You know, like engines shut down cylinders?


----------



## mobiuus (Sep 21, 2018)

yeah i noticed it too, very curious... maybe GPU-Z needs an update, or maybe it is like bug said...


----------



## bug (Sep 21, 2018)

mobiuus said:


> yeah i noticed it too, very curious... maybe GPU-Z needs an update, or maybe it is like bug said...


No way it's like I said, I was just poking fun.


----------



## PerfectWave (Sep 21, 2018)

maybe u need to multiply that by 6 cos it uses GDDR6 XD


----------



## Kissamies (Sep 21, 2018)

It just shows the "true" value; it doesn't recognize it as DDR. So multiply that by 4.


----------



## Assimilator (Sep 21, 2018)

The formula for calculating memory bandwidth is (bus width / 8) * (memory clock speed) * (memory type bits per cycle). So for a stock RTX 2080 that's (256 / 8) * 1750 * 8 = 448,000 MB/s = 448 GB/s. My guess is that @W1zzard's calc for memory doesn't know about GDDR6 (or isn't detecting it correctly) and is thus defaulting to standard DDR, which has only 2 bits/cycle versus GDDR6's 8 bits/cycle.

All the GPU-Z screenshots in the RTX 2080/2080 Ti reviews have the same issue... Wiz is gonna have to redo all of those once he fixes this bug!
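The calculation above can be sketched in a few lines of Python. This is a hypothetical illustration of the formula from this post, not GPU-Z's actual code; the per-type transfer rates are the ones discussed here:

```python
# Data transfers per memory clock cycle, per the formula above.
BITS_PER_CYCLE = {
    "DDR": 2,      # what the buggy calc apparently defaults to
    "GDDR5": 4,
    "GDDR5X": 8,
    "GDDR6": 8,
}

def bandwidth_gb_s(bus_width_bits: int, mem_clock_mhz: int, mem_type: str) -> float:
    """(bus width / 8) bytes per transfer * clock in MHz * transfers per cycle, in GB/s."""
    return (bus_width_bits / 8) * mem_clock_mhz * BITS_PER_CYCLE[mem_type] / 1000

print(bandwidth_gb_s(256, 1750, "GDDR6"))  # RTX 2080: 448.0
print(bandwidth_gb_s(256, 1750, "DDR"))    # same card treated as plain DDR: 112.0
```

Treating GDDR6 as plain DDR yields exactly a quarter of the correct figure, which lines up with the "multiply by 4" observation above.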


----------



## Kissamies (Sep 21, 2018)

Assimilator said:


> The formula for calculating memory bandwidth is (bus width / 8) * (memory clock speed) * (memory type bits per cycle). So for a stock RTX 2080 that's (256 / 8) * 1750 * 8 = 448,000 MB/s = 448 GB/s. My guess is that @W1zzard's calc for memory doesn't know about *GDDR6 (or isn't detecting it correctly) and is thus defaulting to standard DDR, which has only 2 bits/cycle versus GDDR6's 8 bits/cycle*.
> 
> All the GPU-Z screenshots in the RTX 2080/2080 Ti reviews have the same issue... Wiz is gonna have to redo all of those once he fixes this bug!


So in a nutshell, this means that it's quadruple like GDDR5?


----------



## dj-electric (Sep 21, 2018)

Memory bandwidth throughput is memory bandwidth throughput. It doesn't need calculations, multipliers or anything else. It never did.
I just put it here so w1zz could fix it. That's all. Basically what @Assimilator said.


----------



## Assimilator (Sep 21, 2018)

Kissamies said:


> So in a nutshell, this means that it's quadruple like GDDR5?



Nope, octuple. GDDR5 is 4 bits/cycle, which is why the GTX 1070 needs to run its 256-bit GDDR5 at 2 GHz to reach "only" 256 GB/s of bandwidth, while the RTX 2080's 256-bit GDDR6 purrs along at a much lower 1.75 GHz yet reaches almost double the 1070's bandwidth at 448 GB/s.

On the other hand, GDDR5*X* (which GDDR6 is essentially the successor of) is also 8 bits/cycle, hence why the GTX 1080 gets 320 GB/s of bandwidth (far higher than the GTX 1070) at only a 1250 MHz memory clock (far lower).
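As a sanity check, all three cards' figures above drop out of the same formula (memory clocks and bits-per-cycle values as given in this post):

```python
# name, bus width (bits), memory clock (MHz), transfers per cycle
cards = [
    ("GTX 1070 (GDDR5)",  256, 2000, 4),
    ("GTX 1080 (GDDR5X)", 256, 1250, 8),
    ("RTX 2080 (GDDR6)",  256, 1750, 8),
]
for name, bus_bits, clock_mhz, bits_per_cycle in cards:
    gb_s = (bus_bits / 8) * clock_mhz * bits_per_cycle / 1000
    print(f"{name}: {gb_s:.0f} GB/s")
# GTX 1070 (GDDR5): 256 GB/s
# GTX 1080 (GDDR5X): 320 GB/s
# RTX 2080 (GDDR6): 448 GB/s
```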


----------



## kastriot (Sep 21, 2018)

O man i feel sorry for you, wasted money eh?


----------



## Kissamies (Sep 21, 2018)

Assimilator said:


> Nope, octuple. GDDR5 is 4 bits/cycle, which is why the GTX 1070 needs to run its 256-bit GDDR5 at 2 GHz to reach "only" 256 GB/s of bandwidth, while the RTX 2080's 256-bit GDDR6 purrs along at a much lower 1.75 GHz yet reaches almost double the 1070's bandwidth at 448 GB/s.
> 
> On the other hand, GDDR5*X* (which GDDR6 is essentially the successor of) is also 8 bits/cycle, hence why the GTX 1080 gets 320 GB/s of bandwidth (far higher than the GTX 1070) at only a 1250 MHz memory clock (far lower).


Ah, ok. It's always nice to be a little wiser.


----------



## W1zzard (Sep 21, 2018)

Just a calculation bug, will be fixed in next release


----------



## bug (Sep 21, 2018)

W1zzard said:


> Just a calculation bug, will be fixed in next release


*next build multiplies all bandwidths 4 times*


----------



## Vayra86 (Sep 21, 2018)

bug said:


> *next build multiplies all bandwidths 4 times*



Only under load!


----------



## dj-electric (Sep 21, 2018)

kastriot said:


> O man i feel sorry for you, wasted money eh?


Not a dime (media), but thanks for the care.


----------



## EarthDog (Oct 4, 2018)

Will the Texture Fillrate GT/s issue on the 2080s also be fixed along with the memory bandwidth correction? The 278.8 GT/s for the 2080 does not match the NVIDIA whitepapers.


----------



## EarthDog (Oct 5, 2018)

BUMP. @W1zzard 

(can't seem to edit this post??????)

Also had this question in another thread since Wednesday. Confirmation, or word that it's just me, would be great!


----------



## dj-electric (Oct 5, 2018)

Also here's one from an RTX 2080 Ti:


----------



## Tomgang (Oct 5, 2018)

Nvidia has done it again. They gimped Turing like they gimped Maxwell (the GTX 970), if anyone remembers. This time it's just more official.


----------



## Sebastian-san (Oct 6, 2018)

Can you guys try running the AIDA64 GPU test and looking at the real-time measurements?


----------



## EarthDog (Oct 6, 2018)

dj-electric said:


> Also here's one from an RTX 2080 Ti:


That's the same texture fillrate I see with a 2080.

https://www.techpowerup.com/forums/threads/techpowerup-gpu-z-v2-11-0-released.247661/#post-3915605


----------



## EarthDog (Oct 9, 2018)

@W1zzard 

..... is this fix on the list as well? 


TIA for the acknowledgement.


----------



## dj-electric (Oct 11, 2018)

W1zz, GPU-Z has to be fixed to suit RTX cards.

The base clock slot became extremely irrelevant this time around.
Having default and current is fine when it is accurate - it isn't, as of 2.11.0.
There's no mention of RT or Tensor cores in any way.

Boost is just the official figure, and while it's in the same realm as Intel's CPU boost clocks, those numbers haven't been relevant since the GTX 900 series four years ago.
Maybe it's time to move those slots to real-time readings? Or at least show real-time vs official in the box below.

No RTX 2080 Ti is going to operate at 1350 MHz core under 3D load.

Stuff has to change.


----------



## bug (Oct 12, 2018)

dj-electric said:


> W1zz, GPU-Z has to be fixed to suit RTX cards.
> 
> The base clock slot became extremely irrelevant this time around.
> Having default and current is fine when it is accurate - it isn't, as of 2.11.0.
> ...


Oh, good. For a moment there I was worried we'd run out of people who can tell others what they need to do.


----------



## dj-electric (Oct 12, 2018)

bug said:


> Oh, good. For a moment there I was worried we ran out of people that can tell others what they need to do



W1zz doesn't have to listen to me, or to us, to do anything. He can keep GPU-Z as is.
These were mere suggestions, and without such features GPU-Z risks irrelevance among media and users.


----------



## W1zzard (Oct 12, 2018)

2.12.0 released with fix for GDDR6 bandwidth. I'll think about the other proposed changes for next version


----------



## bug (Oct 12, 2018)

dj-electric said:


> W1zz doesn't have to listen to me, or to us, to do anything. He can keep GPU-Z as is.
> These were mere suggestions, and without such features GPU-Z risks irrelevance among media and users.


Yeah, ok. I was just thinking that what you posted is so obvious, it's probably on the list already.


----------



## EarthDog (Oct 12, 2018)

W1zzard said:


> 2.12.0 released with fix for GDDR6 bandwidth. I'll think about the other proposed changes for next version


Much appreciated, all around. 

As it stands, the difference between the Texture Fillrate GT/s that NVIDIA reports and what GPU-Z shows is confusing and a bit concerning. I would imagine the software should report what the whitepaper does, unless that uses some odd, dubious method. Considering these cards will hit their boost clock or higher in 99% of situations, it doesn't make sense, to me, to base the figure on the base clock and go against the whitepapers. I don't understand the deviation...
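For what it's worth, the gap looks like a base-vs-boost clock question. A back-of-the-envelope check, assuming the RTX 2080's reference specs of 184 texture units, 1515 MHz base and 1710 MHz boost (those specs are my assumption, not stated in this thread):

```python
# Texture fillrate = texture units * core clock, in GTexels/s.
TMUS = 184        # assumed reference RTX 2080 TMU count
BASE_MHZ = 1515   # assumed reference base clock
BOOST_MHZ = 1710  # assumed reference boost clock

print(TMUS * BASE_MHZ / 1000)   # 278.76 -> rounds to the 278.8 GPU-Z shows
print(TMUS * BOOST_MHZ / 1000)  # 314.64 -> the boost-clock figure
```

If those reference specs hold, GPU-Z's 278.8 GT/s is simply the base-clock fillrate, while the whitepaper would be quoting the boost-clock one.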

RE: Tensor/RT cores, listing those was also suggested when the cards came out. A suggestion on how to implement them without jamming up the first page of GPUz: only show them on RTX cards. The software can load those fields only when it detects an RTX card. If it can figure out NVIDIA vs AMD and put up their logo, it should be able to detect an RTX card and add the fields for those cores and their values. This way, if a GPUz user has an RTX card, they will get the proper fields, but older NVIDIA GPUs and AMD GPUs, which don't have that hardware, won't.

Anyway, thanks for the time you spend supporting the software. Your work is appreciated.


----------



## bug (Oct 12, 2018)

^^^ If space is a concern, those can go into a separate tab.


----------



## EarthDog (Oct 12, 2018)

That's the thing: it was brought up in another thread that maybe it could go in the Advanced tab, but I don't see why it should be buried if it's a standard piece of hardware in the card, which is what the first page of GPUz lists. That said, I do understand the need for a clean interface and not jamming it up when so few people have cards with these features in hand, hence the suggestion to make the software aware and only display those fields for applicable cards. Then, once this takes off and makes its way to AMD cards, it can be a permanent fixture.


----------



## bug (Oct 12, 2018)

Yes, it wouldn't be the most intuitive thing, that much is clear. That's why I suggested it only if it solves the space problem.
It doesn't need to be an "Advanced" tab; maybe a "Graphics Card (continued)" tab instead?


----------

