
Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

Scared of what? Virtually no professional software/SDK uses OpenCL. It's all CUDA, Nvidia has the market covered.
And I've seen AMD OpenCL 2.0 cards beaten by Nvidia OpenCL 1.2 cards in less professional apps.

If Intel really shows up, OpenCL will get a massive boost. Intel went FreeSync, or well, VESA AFR; HDMI uses it too, and NV suddenly stopped requiring port-corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps to use CUDA to support the newer functions. If they weren't scared, they'd enable the support.

As for performance, a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, 290s, 390s, Vegas were god.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
 
Nope, the general consensus seems to be that OpenCL is just bad/poorly designed.

And here's the Radeon VII "smacking around" the 2080 Ti: https://www.phoronix.com/scan.php?page=article&item=radeon-vii-rocm24&num=2
Keep in mind the 2080 Ti is only on OpenCL 1.2.
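For anyone who wants to check what their own card's driver actually reports, here's a minimal sketch (plain OpenCL host API, nothing vendor-specific assumed) that prints the OpenCL version string for every device it can find; on GeForce cards of this era it typically comes back as "OpenCL 1.2 CUDA":

Code:
// Minimal sketch: list every OpenCL platform/device and the version its driver reports.
// Build with an OpenCL ICD loader installed, e.g.:  g++ clversion.cpp -lOpenCL
#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256] = {0}, dver[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(dver), dver, NULL);
            printf("%s | %s | %s\n", pname, dname, dver);  // e.g. "NVIDIA CUDA | ... | OpenCL 1.2 CUDA"
        }
    }
    return 0;
}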
 
The clocks seem low because they are most likely just base clocks, with boost probably around 1500 MHz or so. At a base of about 1.11 GHz it would barely be as powerful as a Quadro 8000, and I think this GPU is a Quadro.
No, Geekbench reads boost clocks as well. The 118 CU GPU is beating the Titan RTX by 40% despite the low clocks.
 
Ahh okay, but I think the clock tables may be different and it's reading the base clock anyway, or even the wrong clocks in general. It happened to Navi as well on launch and before it: literally reading a 1 GHz clock.
 
*cough*engineering sample*cough*
 
Just a guess:
Modern games do quite a lot of compute work on the GPU, aka compute shaders. But building compute pipelines (including OpenCL) is still not as productive as CUDA. I guess it could be an attempt to lure developers into making use of CUDA interoperability, and therefore into more dependence on Nvidia GPUs.
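To make the productivity point concrete, here's a minimal sketch of a complete CUDA vector add (illustrative only, not from any particular SDK sample). The whole thing is one source file; the OpenCL equivalent needs the same kernel as a separate source string plus explicit platform/context/queue/program/kernel-argument setup in host code before anything can run:

Code:
// Minimal CUDA example: kernel and launch live in a single C++ source file (compile with nvcc).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));   // unified memory keeps the sketch short
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, c, n);  // kernel launched directly from host code
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}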


Very few research labs in bioinformatics / biomedical use OpenCL. It is the exact opposite of user friendly. Sloppy documentation of almost everything, lack of active community engagement. As of right now it is almost abandonware, at least for us genetics/genomics researchers.

To put it simply, why would researchers devote their time, energy and resources into OpenCL when nobody will even cite and use their work afterwards?
 
Because it is a lot? Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming.

I'm not sure why someone would think that these are anything but professional cards. "Weird" RAM aside, a GTC reveal alone makes it pretty clear.
 
Okay, two games that I play need more than 16 GB at 4k to play nicely: RE2 Remake and Cities: Skylines. I'd say they are not new by any means; Cities is from 2015 and the RE2 Remake came out half a year ago. For you trolls that don't play at 4k, I can't for the sake of it make you agree with me; you need to play the games and see for yourself. And like I already said, Nvidia works closely with game devs. Also, for professional GPUs, Nvidia and AMD have their own lines of dedicated cards; you are probably referring to workstations and deep learning.
Liar. I run Cities at 4k with all details fully maxed out. It doesn't max out the framebuffer on a Vega 64 GPU with 8 GB of VRAM. And RE2R is just broken when it comes to reporting; what it "uses" is merely allocated, not actively used.

You do not need 16 GB of VRAM to run either of these games at 4k. If you are running a ton of graphical mods on Cities, then you could push past the framebuffer. But that's mods. You could mod the likes of Skyrim to use 2-3x the VRAM of cards at the time, but that was not the native game, and mods are not often optimized like the base game is.
 
How can you call me a liar when your own explanation agrees with my statement? First of all, 8 GB of VRAM at 4k is just not enough for those two games if you want to play them nicely. I'm not saying they will use 16 GB of VRAM; I'm saying you will have enough free VRAM space if those games need it. Nobody wants to play a game with stutters and other problems related to not having enough free VRAM.

About RE2 Remake showing memory usage wrong: I wonder if GPU-Z is also showing it wrong then, because I used GPU-Z the last time I checked, and Windows Task Manager, just to see the usage.
 
Very few research labs in bioinformatics / biomedical use OpenCL. It is the exact opposite of user friendly. Sloppy documentation of almost everything, lack of active community engagement. As of right now it is almost abandonware, at least for us genetics/genomics researchers.

To put it simply, why would researchers devote their time, energy and resources into OpenCL when nobody will even cite and use their work afterwards?
I know, right, it's a bloody pain with little benefit. Nvidia won here with a productive API.

edit: I didn't mean people use OpenCL, if that is what it looks like. What I was saying is that Nvidia exposed CUDA to all gaming GPUs so game developers can use it too and see how much more productive it is than other APIs.
 
How can you call me a liar when your own explanation agrees with my statement?
Because it doesn't agree with your statement, and you are trying to twist other people's statements to support yours. MODS are not part of the stock gameplay experience. They are made by the community. For every talented coder there are just as many mods that are poorly optimized, if optimized at all, and it is trivial to break a game by loading it with mod after mod. That is not the fault of the card, because it doesn't matter how much silicon and memory you throw at a problem, you will always be able to throw more software at it as well.

If you drop a turbo into your car and overheat it because the radiator didn't have enough capacity for the increased load, is that the fault of the radiator? No. You modded the application and ran out of capacity; for its designed use case it works perfectly.

First of all, 8 GB of VRAM at 4k is just not enough for those two games if you want to play them nicely.
Citation needed, something that has been asked of you multiple times and which you refuse to deliver. (Here's a hint: a site with zero benchmarks or proof of what you are claiming just makes you look foolish.)
I'm not saying they will use 16 GB of VRAM; I'm saying you will have enough free VRAM space if those games need it.
Except those games do not need that. That has already been proven to you in this very thread by @bug, and it is readily disproven by casually googling these very games being played at 4k, with reviews and gameplay videos showing them running just fine.

Nobody wants to play a game with stutters and other problems related to not having enough free VRAM.
Good thing that isn't a problem with any game currently on the market; 8 GB is currently sufficient for 4k.

About RE2 Remake showing memory usage wrong: I wonder if GPU-Z is also showing it wrong then, because I used GPU-Z the last time I checked, and Windows Task Manager, just to see the usage.
What did you think we were talking about? RE2R "consumes" large amounts of VRAM because it is reserving way more than it actually needs. Much of that VRAM is unused, as is evident from the fact that lower-VRAM cards run the game fine without stuttering.
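As a rough illustration of the reserved-versus-used distinction, here's a small sketch using the CUDA runtime (chosen only because it's the easiest way to poke at VRAM programmatically; a game does the same kind of thing through D3D/Vulkan allocations). Memory that is merely reserved already lowers the "free" number that monitoring tools report, even though nothing is ever written to it:

Code:
// Sketch: VRAM that is merely reserved shows up as "used", even if it is never touched.
// Compile with nvcc and run on any CUDA-capable card with a few GB of VRAM to spare.
#include <cstdio>
#include <cuda_runtime.h>

static void report(const char* label) {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);   // the same kind of counter GPU-Z / Task Manager expose
    printf("%s: %.0f MiB free of %.0f MiB\n", label, free_b / 1048576.0, total_b / 1048576.0);
}

int main() {
    report("before allocation");

    void* reserved = nullptr;
    cudaMalloc(&reserved, 2ull << 30);   // reserve 2 GiB but never write to it
    report("after reserving 2 GiB (untouched)");

    cudaFree(reserved);
    report("after freeing");
    return 0;
}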

Let me help you here Metroid: you came here making claims that games need more than 8 GB of VRAM to play acceptably at 4k. That has been proven false by information posted by other users. You have yet to post anything that backs up your claims. The burden of proof is on those making the claims. That's you.

Since you seem so sure about this, how about you record video on your computer of the games you are talking about, show the settings you are using, run an FCAT test and an FPS test for us, and use MSI Afterburner to verify VRAM usage and FPS results. Shouldn't take more than 10 minutes to run the benchmarks and a bit of time to post the resulting video to YouTube. It doesn't need to be edited or anything, just as long as it contains proof of what you are claiming.
 
I'm not sure why someone would think that these are anything but professional cards. "Weird" RAM aside, a GTC reveal alone makes it pretty clear.
Somebody did, and this was my answer, bro. Besides, as mentioned, these are samples. We don't know which segment these will end up in, or whether they will keep these RAM capacities. It all may change, you know, depending on the tiers NV goes with. Who knows what will happen? I surely don't. We know new cards from NV are around the corner.
 
Because it is a lot? Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming.
But why did almost everyone here assume this is a desktop gaming card? :) Is it mentioned somewhere in the leak or what?

Both leaked cards look like next-gen top Quadro models. The RTX 6000 and 8000 had 24 and 48 GB of RAM respectively.
 
And nowhere does it say those two alleged products have video outputs.
 
But why did almost everyone here assume this is a desktop gaming card? :) Is it mentioned somewhere in the leak or what?

Both leaked cards look like next-gen top Quadro models. The RTX 6000 and 8000 had 24 and 48 GB of RAM respectively.
I said it is a professional card due to the RAM capacity.
 
If Intel really shows up, OpenCL will get a massive boost. Intel went FreeSync, or well, VESA AFR; HDMI uses it too, and NV suddenly stopped requiring port-corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps to use CUDA to support the newer functions. If they weren't scared, they'd enable the support.

As for performance, a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, 290s, 390s, Vegas were god.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.

Intel uses its own SYCL-based oneAPI DPC++, not OpenCL. While one can run OpenCL/CUDA/SYCL code through a wrapper, it's still better to use direct oneAPI code with Intel hardware. I'm not sure how easy it will be to migrate from oneAPI to SYCL/OpenCL once you have done your coding for Intel. So all in all, Intel showing up won't necessarily give OpenCL any boost; it may rather deprecate it even further (and by deprecating I mean things like Apple moving everything to Metal; Intel might give their OpenCL support the same second-class-citizen status Nvidia does).

And what do you mean by AFR, some multi-card rendering method, or did you mix it up with VRR? VESA VRR and HDMI Forum VRR are different things; HDMI Forum VRR is currently supported by console manufacturers and Nvidia, while AMD's support for it is still pending.

I don't think the CUDA license has any fee, but Nvidia can lock the hardware to themselves with CUDA.
 
NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
Imagine a world where Nvidia haters actually learn something about the products/company they attack. :o

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
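For what it's worth, the "can run OpenCL" part is easy to try for yourself. Here's a minimal, illustrative OpenCL 1.2 vector add (error checking omitted for brevity) that runs unchanged on GeForce, Radeon, or Intel iGPUs; the host-side setup it needs is also a fair picture of the boilerplate people complain about compared to CUDA:

Code:
// Sketch: a complete OpenCL 1.2 vector add. Build with an OpenCL SDK/ICD, e.g.: g++ clvecadd.cpp -lOpenCL
#define CL_TARGET_OPENCL_VERSION 120
#include <cstdio>
#include <CL/cl.h>

static const char* kSrc =
    "__kernel void add(__global const float* a, __global const float* b, __global float* c) {"
    "    size_t i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main() {
    const size_t n = 1024, bytes = n * sizeof(float);
    float a[1024], b[1024], c[1024];
    for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Platform/context/queue/program setup -- the part CUDA's runtime API hides from you.
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add", NULL);

    cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a, NULL);
    cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b, NULL);
    cl_mem C = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &A);
    clSetKernelArg(k, 1, sizeof(cl_mem), &B);
    clSetKernelArg(k, 2, sizeof(cl_mem), &C);

    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, C, CL_TRUE, 0, bytes, c, 0, NULL, NULL);
    printf("c[0] = %f\n", c[0]);   // expect 3.0
    return 0;
}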
 
Intel uses its own SYCL-based oneAPI DPC++, not OpenCL. While one can run OpenCL/CUDA/SYCL code through a wrapper, it's still better to use direct oneAPI code with Intel hardware. I'm not sure how easy it will be to migrate from oneAPI to SYCL/OpenCL once you have done your coding for Intel. So all in all, Intel showing up won't necessarily give OpenCL any boost; it may rather deprecate it even further (and by deprecating I mean things like Apple moving everything to Metal; Intel might give their OpenCL support the same second-class-citizen status Nvidia does).

And what do you mean by AFR, some multi-card rendering method, or did you mix it up with VRR? VESA VRR and HDMI Forum VRR are different things; HDMI Forum VRR is currently supported by console manufacturers and Nvidia, while AMD's support for it is still pending.

I don't think the CUDA license has any fee, but Nvidia can lock the hardware to themselves with CUDA.

According to Intel, all of their GPUs from 2010 on support OpenCL.

I run OpenCL on my 7700K's UHD630 without any translation. I only need to have the drivers enabled. Intel also offers SDK support for their FPGAs to do OpenCL.

CUDA costs a bunch because you pay for the hardware; there's only one supplier of hardware that can run CUDA. I'd actually be really interested in wrapping CUDA and running it on non-NV hardware, but NV has never shied away from locking their software down as hard as possible.

I remember when I could have PhysX on while using a Radeon GPU to do the drawing.

I'm also very sure it would be very easy for NV to turn on OCL 2 support.

Edit: Isn't oneAPI open? I thought it was basically OpenCL 3... I have to look into it more.

Edit 2: oneAPI is basically a unified open standard that offers full cross-platform use. According to Phoronix articles, porting it to AMD will be easy because Intel and AMD both use open-source drivers; NV, on the other hand, locks the good stuff up with closed-source drivers on Linux.

Edit 3: Too many abbreviations in my head, I meant VRR instead of AFR. Somehow it became Adaptive Frame Rendering... LoL. I meant the lovely piece of hardware that NV required for VRR rather than just supporting the DisplayPort spec. I mean, it's kinda awesome that the Xbox One X supports VRR if you plug it into a compatible display.

Imagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.

Free to use on Nvidia hardware. AMD has to jump through hoops just to emulate some small parts.

Nvidia also completely gimps the GPGPU performance of their more affordable GPUs. Want that performance? The cheapest option available to normal folk is the $3,000 USD Titan V.

Yeah, you can use the older, less functional and less capable OpenCL 1.2. Want those newer features... well, it's CUDA only on NV.

Yeah... Free... :rolleyes:

At least I'm a hater who uses GeForces and Quadros. :pimp:
 
When they launch this generation, I think it will be very hard for AMD to climb back up. They are at least 4 years away from competing; that is like 2 generations away.
 
Free to use on Nvidia hardware. AMD has to jump through hoops just to emulate some small parts.
CUDA is a part of the ecosystem you buy into. But it's free to use.
Furthermore, you may or may not use it (since there are alternatives), so you actually have a choice (which you don't get with some other products).
And you can use it even when you don't own the hardware - this is not always the case.

In other words: there are no downsides. I honestly don't understand why people moan so much about CUDA (other than general hostility towards Nvidia).
Nvidia also completely gimps the GPGPU performance of their more affordable GPUs. Want that performance? The cheapest option available to normal folk is the $3,000 USD Titan V.
That's absolutely not true. What you mean is FP64. But some software uses it and some doesn't. It's just an instruction set.
One could say AMD gimps AVX-512 on all of their CPUs.

Many professional/scientific scenarios are fine with FP16.
Phoronix tested some GPUs in PlaidML, which is probably the most popular non-CUDA neural network framework.
Two things to observe here: how multiple Nvidia GPUs perform in FP16, and, as a tasty bonus, how they perform in OpenCL compared to Polaris.
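For reference, using half precision from CUDA is just an ordinary kernel over __half data. Below is a minimal sketch (assuming a GPU of compute capability 5.3 or newer so the half-precision intrinsics exist); frameworks like PlaidML or cuDNN do this under the hood at much larger scale:

Code:
// Sketch: elementwise scale in half precision. Needs compute capability 5.3+ for __hmul.
// Compile with e.g.:  nvcc -arch=sm_70 fp16_demo.cu
#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

__global__ void scale_half(const __half* x, __half s, __half* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = __hmul(x[i], s);   // FP16 multiply: half the storage of FP32 per value
}

int main() {
    const int n = 1024;
    __half *x, *y;
    cudaMallocManaged(&x, n * sizeof(__half));
    cudaMallocManaged(&y, n * sizeof(__half));
    for (int i = 0; i < n; ++i) x[i] = __float2half(1.5f);

    scale_half<<<(n + 255) / 256, 256>>>(x, __float2half(2.0f), y, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", __half2float(y[0]));   // expect 3.0
    cudaFree(x); cudaFree(y);
    return 0;
}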
At least I'm a hater who uses GeForces and Quadros. :pimp:
I don't understand why people raise this argument. It simply makes you a miserable hater.
 
Imagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
Well, I'm not sure if Nvidia drivers do OpenCL 2.0. There was preliminary support like 3 years ago, but I haven't heard anything about it since.

The point is moot though, the world seems to be set on CUDA by now. More precisely, the world seems to be set on anything that's not OpenCL.
 
Wow, with that much GPU RAM, your system RAM should typically be double the GPU RAM. Finally a reason to have more than 16 GB of system RAM.
 
I've never heard of this rule of thumb/correlation between these two.
 
Well, I'm not sure if Nvidia drivers do OpenCL 2.0. There was preliminary support like 3 years ago, but I haven't heard anything about it since.

The point is moot though, the world seems to be set on CUDA by now. More precisely, the world seems to be set on anything that's not OpenCL.

After Apple, the de facto chair of the Khronos Group (the OpenCL committee/standards body), burned OpenCL - which Apple themselves created - in favor of their own proprietary Metal, who is going to have faith in OpenCL's development?
 
I've never heard of this rule of thumb/correlation between these two.
Not sure it is a hard rule, but I remember your GPU will carve out an equal amount of system RAM, if available, for shadow RAM. Might be a myth.

"A quick rule of thumb is that you should have twice as much system memory as your graphics card has VRAM, so a 4GB graphics card means you'd want 8GB or more system memory, and an 8GB card ideally would have 16GB of system memory "

 