Wednesday, March 4th 2020

Three Unknown NVIDIA GPUs' Geekbench Compute Scores Leaked, Possibly Ampere?

(Update, March 4th: Another NVIDIA graphics card has been discovered in the Geekbench database, this one featuring a total of 124 CUs. That could amount to some 7,936 CUDA cores, should NVIDIA keep the same 64 CUDA cores per CU - though this ratio has changed in the past, as when NVIDIA halved the number of CUDA cores per CU going from Pascal to Turing. The 124 CU graphics card is clocked at 1.1 GHz and features 32 GB of HBM2e, delivering a score of 222,377 points in the Geekbench benchmark. We again stress that these could be just engineering samples with conservative clocks, and that final performance could be even higher.)
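For readers who want to sanity-check the arithmetic, the core-count estimate is a simple multiplication. The sketch below is ours; the 64 and 128 cores-per-CU ratios are assumptions borrowed from Turing and Pascal respectively, since Ampere's actual ratio is unknown at this point:

```python
# Back-of-the-envelope CUDA core estimate for the 124 CU Geekbench entry.
# The cores-per-CU ratios are assumptions taken from Turing (64) and
# Pascal (128); NVIDIA could change this again with Ampere.
cus = 124

cores_if_turing_ratio = cus * 64    # 7,936 cores, the figure quoted above
cores_if_pascal_ratio = cus * 128   # 15,872 cores, if NVIDIA reverted

print(f"{cus} CUs -> {cores_if_turing_ratio} cores at 64 per CU, "
      f"{cores_if_pascal_ratio} at 128 per CU")
```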

NVIDIA is expected to launch its next-generation Ampere lineup of GPUs during the GPU Technology Conference (GTC), happening from March 22nd to March 26th. Just a few weeks before the expected debut of these new GPUs, Geekbench 5 compute scores measuring the OpenCL performance of unknown GPUs, which we assume are part of the Ampere lineup, have appeared. Thanks to Twitter user "_rogame" (@_rogame), who found the Geekbench database entries, we have some information about the CUDA core configuration, memory, and performance of the upcoming cards.
In the database, there are two unnamed GPUs. The first is a version with 7,552 CUDA cores running at 1.11 GHz. Equipped with 24 GB of an unknown VRAM type, the GPU is configured with 118 Compute Units (CUs) and achieves an impressive 184,096 points in the OpenCL test. Compared to something like a V100, which scores 142,837 in the same test, that is an almost 30% improvement in performance. Next up, we have a GPU with 6,912 CUDA cores running at 1.01 GHz and featuring 47 GB of VRAM. This is a less powerful model, as it has 108 CUs, and it scores 141,654 in the OpenCL test. One thing to note is the odd memory configuration of both models: 24 GB for the more powerful model and 47 GB (which should presumably be 48 GB) for the weaker one. The results are not recent, dating back to October and November, so these may be engineering samples, and the clock speeds and memory configurations might still change before launch.
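For reference, here is a quick sketch of that comparison using the leaked Geekbench 5 OpenCL scores quoted above (scores are points, and these may be engineering samples at conservative clocks):

```python
# Relative Geekbench 5 OpenCL standing of the two leaked samples
# against the Tesla V100 score quoted above.
scores = {
    "7,552-core sample": 184_096,
    "6,912-core sample": 141_654,
    "Tesla V100":        142_837,
}

baseline = scores["Tesla V100"]
for name, score in scores.items():
    delta = (score / baseline - 1) * 100
    print(f"{name}: {score:,} points ({delta:+.1f}% vs. V100)")
```

The 7,552-core entry works out to roughly +28.9%, which is where the "almost 30%" figure above comes from.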
Sources: @_rogame (Twitter), Geekbench

62 Comments on Three Unknown NVIDIA GPUs' Geekbench Compute Scores Leaked, Possibly Ampere?

#1
ratirt
The frequencies of both cards are pretty low. Maybe these aren't gaming GPUs but workstation cards or something like that? The RAM capacities are weird too. Wonder what RAM it is.
#2
notb
Why is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.
#3
ZoneDymo
notbWhy is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.
24 isn't weird; it's weird that the "more powerful" one is the one with less RAM.
#4
john_
notbWhy is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.
Maybe the GPU reserves 256 MB / 1,280 MB of memory for some reason (the cards show 23.8 GB and 46.8 GB of memory) and that's why it shows less memory available. Like in a PC with an integrated GPU. A co-processor maybe?
ZoneDymo24 isn't weird; it's weird that the "more powerful" one is the one with less RAM.
Not if both GPUs will be available in 24 GB and 48 GB models.
#6
Metroid
24 GB is about time. Old 4K games with good textures require at least 8 GB right now, new games a minimum of 16 GB; 24 GB leaves room for future demanding games.
#7
MagnuTron
Low frequency, sure, I guess. But am I the only one looking at shader counts vs. performance in OpenCL? If the V100 has ~5,100 cores, then why does a card with ~7,550 cores, as in 50% more cores, only score 30% higher? Frequency and more dark silicon, I guess.
#8
ratirt
notbWhy is 24 "weird"? It's even and actually a multiple of 8 as well.
Nvidia has been making cards with 24 GB RAM since Maxwell.

47 is likely a mistake, though.
Because it is a lot. Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming.
#9
ARF
Metroid24 GB is about time. Old 4K games with good textures require at least 8 GB right now, new games a minimum of 16 GB; 24 GB leaves room for future demanding games.
Link to the games that "require minimum 16 GB VRAM"?
#11
bug
Probably Tesla cards. And since they're most likely engineering samples, they only give us some ballpark figures, nothing more.
#12
TheinsanegamerN
ARFLink to the games that "require minimum 16 GB VRAM"?
He's looking at SSD storage usage.
#13
xkm1948
These would be amazing for ML/DL applications. Deploying well-trained CNNs or RNNs at the bench level would be amazing for researchers. Not looking forward to the price, though; it has been creeping up year over year.

Still, I am surprised Nvidia allows gaming-level GPUs access to both TensorFlow and the full CUDA developer toolkit. I am seeing everything from the 1060 all the way to the Titan RTX used in genomics workstations. Maybe it is just to get all the research labs hooked on their butter-smooth computation experience.
#14
Metroid
ARFLink to games that "require minimum 16 GB VRAM" ?
I don't need to link. If you have at least an 8 GB GPU you can do it yourself: set 4K and everything at maximum, see how much video memory is in use, and you will have your answer.
#15
Flanker
xkm1948Still, I am surprised Nvidia allows gaming-level GPUs access to both TensorFlow and the full CUDA developer toolkit. I am seeing everything from the 1060 all the way to the Titan RTX used in genomics workstations. Maybe it is just to get all the research labs hooked on their butter-smooth computation experience.
Just a guess:
Modern games do quite a lot of compute work on the GPU, aka compute shaders, but building compute pipelines (including OpenCL) is still not as productive as CUDA. I guess it could be an attempt to lure developers into making use of CUDA interoperability, and therefore more dependence on Nvidia GPUs.
#16
bug
MetroidI don't need to link. If you have at least an 8 GB GPU you can do it yourself: set 4K and everything at maximum, see how much video memory is in use, and you will have your answer.
Well, we do have our answer: TPU has tested several games and none of them needs more than 8 GB of VRAM. They rarely need more than 4, actually. See: www.techpowerup.com/review/wolcen-benchmark-test-performance-analysis/4.html
You're claiming otherwise, so the burden of proof is on you.
#17
Chomiq
ARFLink to games that "require minimum 16 GB VRAM" ?
8K texture DLCs coming right up.

What the hell, they'd probably do it if they could market the card as "8K optimized".
#18
Metroid
bugWell, we do have our answer: TPU has tested several games and none of them needs more than 8 GB of VRAM. They rarely need more than 4, actually. See: www.techpowerup.com/review/wolcen-benchmark-test-performance-analysis/4.html
You're claiming otherwise, so the burden of proof is on you.
Picking your own set of games to prove your argument is understandable; you can always pick old games, run them at 4K, and say "hey, see, 4K all maximum and only 4 GB of memory". There is a reason Nvidia will launch a 24 GB card, and the minimum at this time is 8 GB of memory for at least 2560x1440. Nvidia and AMD work closely with game developers.

graphicscardhub.com/how-much-vram-for-gaming/

www.gamingscan.com/how-much-vram-do-i-need/

nvidia/comments/bbxic1
Check this quote, and I do agree:

"Funny side note to prove this point; if you play the RE2 Remake it has a VRAM usage display next to the graphic settings. At 1440P with max settings the game will tell you it needs 11GB of VRAM and gives you a big warning, yet running the game itself, I never went over 7GB, and again, that's allocated not actually used. "

"Yeah, to actually run out of vram on re2 on 11gb I had to run 8k with maxed out graphics and at least "6gb high textures" option. At "4gb high textures" it would run fine. Of course fps was low but no stutters. "

I finished the RE2 remake playing at 4K and it used all my VRAM; I had to set it right to stay at 90% by lowering settings.
#19
bug
MetroidPicking your own set of games to prove your argument is understandable; you can always pick old games, run them at 4K, and say "hey, see, 4K all maximum and only 4 GB of memory". There is a reason Nvidia will launch a 24 GB card, and the minimum at this time is 8 GB of memory for at least 2560x1440. Nvidia and AMD work closely with game developers.

graphicscardhub.com/how-much-vram-for-gaming/

www.gamingscan.com/how-much-vram-do-i-need/
But linking two sites that list no games needing more than 8 GB of VRAM is so much better :wtf:
Also, like I said, those 24/48 GB VRAM amounts, if real, are most likely for professional cards.
#20
Metroid
bugBut linking two sites that list no games needing more than 8 GB of VRAM is so much better :wtf:
Also, like I said, those 24/48 GB VRAM amounts, if real, are most likely for professional cards.
Okay, two games that I play that need more than 16 GB at 4K to play nice: the RE2 remake and Cities: Skylines. I'd say they are not new by any means; Cities is from 2015 and the RE2 remake is a year and a half old. For you trolls who don't play at 4K, I can't make you agree with me; you need to play the games and see for yourself. Like I already said, Nvidia works closely with game devs. Also, for professional GPUs, Nvidia and AMD have their own lines of dedicated cards; you are probably referring to workstations and deep learning.
#21
bug
MetroidOkay, two games that I play that need more than 16 GB at 4K to play nice: the RE2 remake and Cities: Skylines. I'd say they are not new by any means; Cities is from 2015 and the RE2 remake is a year and a half old.
I very much doubt that. More likely someone made texture packs for those, leading you to believe they need all that VRAM.

Actually, looking around a bit, it seems RE2 Remake is downright broken when reporting VRAM needs: forums.overclockers.co.uk/threads/resident-evil-2-remake-biggest-use-of-vram-in-an-official-release.18843137/
#22
gamefoo21
Still won't support the latest OpenCL versions because NV is scared.

:laugh:
#23
bug
gamefoo21Still won't support the latest OpenCL versions because NV is scared.

:laugh:
Scared of what? Virtually no professional software/SDK uses OpenCL. It's all CUDA, Nvidia has the market covered.
And I've seen AMD OpenCL 2.0 cards beaten by Nvidia OpenCL 1.2 cards in less professional apps.
#24
T4C Fantasy
CPU & GPU DB Maintainer
The clocks seem low because most likely they are just base clocks, with boost being around 1,500 MHz or so. At a ~1.11 GHz base it would barely be as powerful as a Quadro RTX 8000, and I think this GPU is a Quadro.
#25
gamefoo21
bugScared of what? Virtually no professional software/SDK uses OpenCL. It's all CUDA, Nvidia has the market covered.
And I've seen AMD OpenCL 2.0 cards beaten by Nvidia OpenCL 1.2 cards in less professional apps.
If Intel really shows up, OpenCL will get a massive boost. Intel went FreeSync, or rather VESA Adaptive-Sync (HDMI uses it too), and NV suddenly stopped requiring their proprietary module for adaptive-refresh support.

NV has refused to support OpenCL 2.0 to force apps onto CUDA for the newer functions. If they weren't scared, they'd enable the support.

As for performance, a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, the 290s, 390s, and Vegas were gods.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.