
Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

Joined
Aug 8, 2019
Messages
430 (0.22/day)
System Name R2V2 *In Progress
Processor Ryzen 7 2700
Motherboard Asrock X570 Taichi
Cooling W2A... water to air
Memory G.Skill Trident Z3466 B-die
Video Card(s) Radeon VII repaired and resurrected
Storage Adata and Samsung NVME
Display(s) Samsung LCD
Case Some ThermalTake
Audio Device(s) Asus Strix RAID DLX upgraded op amps
Power Supply Seasonic Prime something or other
Software Windows 10 Pro x64
Scared of what? Virtually no professional software/SDK uses OpenCL. It's all CUDA; Nvidia has the market covered.
And I've seen AMD OpenCL 2.0 cards beaten by Nvidia OpenCL 1.2 cards in less professional apps.

If Intel really shows up, OpenCL will get a massive boost. Intel went FreeSync, or rather VESA AFR (HDMI uses it too), and NV suddenly stopped requiring port-corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps that need the newer functions onto CUDA. If they weren't scared, they'd enable the support.

As for performance, a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, the 290s, 390s, and Vegas were gods.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
If Intel really shows up, OpenCL will get a massive boost. Intel went FreeSync, or rather VESA AFR (HDMI uses it too), and NV suddenly stopped requiring port-corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps that need the newer functions onto CUDA. If they weren't scared, they'd enable the support.

As for performance, a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, the 290s, 390s, and Vegas were gods.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
Nope, the general consensus seems to be that OpenCL is just bad/poorly designed.

And here's the Radeon VII "smacking around" the 2080 Ti: https://www.phoronix.com/scan.php?page=article&item=radeon-vii-rocm24&num=2
Keep in mind the 2080 Ti is only on OpenCL 1.2.
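(For anyone who wants to verify the version claim themselves, here is a minimal sketch, assuming an OpenCL SDK and ICD loader are installed, that prints the OpenCL version string each device reports. NVIDIA drivers of this era report "OpenCL 1.2 CUDA".)

```cpp
// Minimal sketch: list each OpenCL device and the version it reports.
// Assumes OpenCL headers and an ICD loader are installed; link with -lOpenCL.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char name[256] = {};
            char version[256] = {};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_VERSION, sizeof(version), version, nullptr);
            // e.g. "OpenCL 1.2 CUDA" on NVIDIA, "OpenCL 2.0 ..." on AMD/Intel
            std::printf("%s -> %s\n", name, version);
        }
    }
    return 0;
}
```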
 
Joined
Mar 26, 2009
Messages
176 (0.03/day)
The clocks seem low because most likely they are just base clocks, with boost being around ~1500 MHz. At a base of ~1.11 GHz it would barely be as powerful as a Quadro 8000, and I think this GPU is a Quadro.
No, Geekbench reads boost clocks as well. The 118-CU GPU is beating the Titan RTX by 40% despite the low clocks.
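(Back-of-envelope on that claim. Note the assumptions: the architecture of the leaked part is unknown, so the 64 FP32 ALUs per CU and the fused multiply-add rate are guesses borrowed from Volta/Turing SMs, and the Titan RTX figures are public spec-sheet numbers.)

```cpp
// Rough FP32 throughput, assuming 64 ALUs per CU (Volta/Turing-style SMs)
// and 2 FLOPs per ALU per clock (fused multiply-add). Treat as estimates.
#include <cstdio>

double tflops(int cus, double clock_ghz) {
    const int alus_per_cu = 64;  // assumption, architecture unknown
    return 2.0 * cus * alus_per_cu * clock_ghz / 1000.0;
}

int main() {
    std::printf("Leaked 118-CU part @ 1.11 GHz: %.1f TFLOPS\n", tflops(118, 1.11));
    std::printf("Titan RTX (72 SMs) @ ~1.77 GHz: %.1f TFLOPS\n", tflops(72, 1.77));
    // Roughly 16.8 vs 16.3 TFLOPS. If the leaked score is ~40% higher despite
    // similar raw TFLOPS, either the reported clock is wrong (base vs boost)
    // or per-clock efficiency improved substantially.
    return 0;
}
```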
 

T4C Fantasy

CPU & GPU DB Maintainer
Staff member
Joined
May 7, 2012
Messages
2,566 (0.56/day)
Location
Rhode Island
System Name Whaaaat Kiiiiiiid!
Processor Intel Core i9-12900K @ Default
Motherboard Gigabyte Z690 AORUS Elite AX
Cooling Corsair H150i AIO Cooler
Memory Corsair Dominator Platinum 32GB DDR4-3200
Video Card(s) EVGA GeForce RTX 3080 FTW3 ULTRA @ Default
Storage Samsung 970 PRO 512GB + Crucial MX500 2TB x3 + Crucial MX500 4TB + Samsung 980 PRO 1TB
Display(s) 27" LG 27MU67-B 4K, + 27" Acer Predator XB271HU 1440P
Case Thermaltake Core X9 Snow
Audio Device(s) Logitech G935 Headset
Power Supply SeaSonic Platinum 1050W Snow Silent
Mouse Logitech G903 Lightspeed
Keyboard Logitech G915
Software Windows 11 Pro
Benchmark Scores FFXV: 19329
No, Geekbench reads boost clocks as well. The 118-CU GPU is beating the Titan RTX by 40% despite the low clocks.
Ahh okay, but I think the clock tables may be different and it's reading base clocks anyway, or even the wrong clocks in general. It happened to Navi as well at launch and before it: literally reading a 1 GHz clock.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Ahh okay, but I think the clock tables may be different and it's reading base clocks anyway, or even the wrong clocks in general. It happened to Navi as well at launch and before it: literally reading a 1 GHz clock.
*cough*engineering sample*cough*
 
Joined
Mar 18, 2008
Messages
5,717 (0.93/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Just a guess:
Modern games do quite a lot of compute work on the GPU, aka compute shaders. But building compute pipelines (including OpenCL) is still not as productive as CUDA. I guess it could be an attempt to lure developers into making use of CUDA interoperability, and therefore more dependence on Nvidia GPUs.


Very few research labs in bioinformatics/biomedical use OpenCL. It is the exact opposite of user friendly: sloppy documentation of almost everything, and a lack of active community engagement. As of right now it is almost abandonware, at least for us genetics/genomics researchers.

To put it simply, why would researchers devote their time, energy and resources to OpenCL when nobody will even cite and use their work afterwards?
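(To illustrate the productivity gap mentioned above, a hedged sketch of the host-side ceremony even a trivial OpenCL dispatch needs. Error handling and resource cleanup are omitted for brevity; in CUDA the dispatch itself is roughly a one-line kernel launch.)

```cpp
// Sketch of the OpenCL host ceremony for a trivial "add 1 to every element"
// kernel. Error checks and clRelease* cleanup omitted; link with -lOpenCL.
// The CUDA equivalent of the dispatch is roughly: kernel<<<blocks,threads>>>(buf);
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc =
    "__kernel void add_one(__global float* a) {"
    "  size_t i = get_global_id(0);"
    "  a[i] += 1.0f;"
    "}";

int main() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);  // runtime compile
    cl_kernel kernel = clCreateKernel(prog, "add_one", nullptr);

    std::vector<float> host(1024, 0.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                host.size() * sizeof(float), host.data(), nullptr);
    clSetKernelArg(kernel, 0, sizeof(buf), &buf);

    size_t global = host.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr,
                           0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                        host.data(), 0, nullptr, nullptr);
    std::printf("host[0] = %f\n", host[0]);  // expect 1.0
    return 0;
}
```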
 
Joined
Jun 22, 2014
Messages
446 (0.12/day)
System Name Desktop / "Console"
Processor Ryzen 5950X / Ryzen 5800X
Motherboard Asus X570 Hero / Asus X570-i
Cooling EK AIO Elite 280 / Cryorig C1
Memory 32GB Gskill Trident DDR4-3600 CL16 / 16GB Crucial Ballistix DDR4-3600 CL16
Video Card(s) RTX 4090 FE / RTX 2080ti FE
Storage 1TB Samsung 980 Pro, 1TB Sabrent Rocket 4 Plus NVME / 1TB Sabrent Rocket 4 NVME, 1TB Intel 660P
Display(s) Alienware AW3423DW / LG 65CX Oled
Case Lian Li O11 Mini / Sliger CL530 Conswole
Audio Device(s) Sony AVR, SVS speakers & subs / Marantz AVR, SVS speakers & subs
Power Supply ROG Loki 1000 / Silverstone SX800
VR HMD Quest 3
Because it is a lot? Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming.

I'm not sure why someone would think that these are anything but professional cards. "Weird" RAM aside, a GTC reveal alone makes it pretty clear.
 
Joined
Dec 28, 2012
Messages
3,954 (0.90/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabrent Rocket 4.0 2TB, MX500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
Okay, two games that I play need more than 16GB at 4K to play nice: RE2 Remake and Cities: Skylines. I'd say they are not new by any means; Cities is from 2015 and RE2 Remake came out a year and a half ago. For you trolls that don't play at 4K, I can't for the sake of it make you agree with me; you need to play the games and see for yourself. And like I already said, Nvidia works closely with game devs. Also, for professional GPUs, Nvidia and AMD have their own lines of dedicated GPUs. You are probably referring to workstations and deep learning.
Liar. I run Cities at 4K with all details fully maxed out. It doesn't max out the framebuffer on a Vega 64 GPU with 8GB of VRAM. And RE2R is just broken when it comes to reporting; what it "uses" is merely allocated, not actively used.

You do not need 16GB of VRAM to run either of these games at 4K. If you are running a ton of graphical mods on Cities, then you could push past the framebuffer. But that's mods. You could mod the likes of Skyrim to use 2-3x the VRAM of cards at the time, but that was not the native game, and mods are not often optimized like the base game is.
 
Joined
May 8, 2018
Messages
1,571 (0.65/day)
Location
London, UK
Liar. I run Cities at 4K with all details fully maxed out. It doesn't max out the framebuffer on a Vega 64 GPU with 8GB of VRAM. And RE2R is just broken when it comes to reporting; what it "uses" is merely allocated, not actively used.

You do not need 16GB of VRAM to run either of these games at 4K. If you are running a ton of graphical mods on Cities, then you could push past the framebuffer. But that's mods. You could mod the likes of Skyrim to use 2-3x the VRAM of cards at the time, but that was not the native game, and mods are not often optimized like the base game is.


How can you call me a liar when your own explanation agrees with my statement? First of all, 8GB of VRAM at 4K is just not enough for those two games if you want to play them nicely. I'm not saying they will use 16GB of VRAM; I'm saying you will have enough free VRAM space if those games need it. Nobody wants to play a game with stutters and other problems related to not having enough free VRAM.

About RE2 Remake showing memory usage wrong: I wonder if GPU-Z is also showing it wrong then, because I used GPU-Z the last time, just to check, and Windows Task Manager just to see the usage.
 
Joined
Apr 8, 2010
Messages
1,012 (0.19/day)
Processor Intel Core i5 8400
Motherboard Gigabyte Z370N-Wifi
Cooling Silverstone AR05
Memory Micron Crucial 16GB DDR4-2400
Video Card(s) Gigabyte GTX1080 G1 Gaming 8G
Storage Micron Crucial MX300 275GB
Display(s) Dell U2415
Case Silverstone RVZ02B
Power Supply Silverstone SSR-SX550
Keyboard Ducky One Red Switch
Software Windows 10 Pro 1909
Very few research labs in bioinformatics / biomedical use OpenCL. It is the exact opposite of user friendly. Sloppy documentation of almost everything, lack of active community engagement. As of right now it is almost abandonware, at least for us genetics/genomics researchers.

To put it simply, why would researchers devote their time, energy and resources into OpenCL when nobody will even cite and use their work afterwards?
I know, right? It's a bloody pain with little benefit. Nvidia won here with a productive API.

edit: I didn't mean that people use OpenCL, if that is what it looks like. What I was saying is that Nvidia exposed CUDA on all gaming GPUs so game developers can use it too and see how much more productive it is than other APIs.
 
Last edited:
Joined
Dec 28, 2012
Messages
3,954 (0.90/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabrent Rocket 4.0 2TB, MX500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
How can you call me a liar when your own explanation agrees with my statement?
Because it doesn't agree with your statement, and you are trying to twist other people's statements to support yours. MODS are not part of the stock gameplay experience. They are made by the community. For every talented coder there are just as many mods that are poorly optimized, if optimized at all, and it is trivial to break a game by loading it with mod after mod. That is not the fault of the card, because it doesn't matter how much silicon and memory you throw at a problem, you will always be able to throw more software at it as well.

If you drop a turbo into your car and overheat it because the radiator didn't have enough capacity for the increased load, is that the fault of the radiator? No. You modded the application and ran out of capacity; for its designed use case it works perfectly.

First of all, 8GB of VRAM at 4K is just not enough for those two games if you want to play them nicely.
Citation needed, something that has been asked of you multiple times and which you refuse to deliver. (Here's a hint: a site with zero benchmarks or proof of what you are claiming makes you look like a total mong.)
I'm not saying they will use 16GB of VRAM; I'm saying you will have enough free VRAM space if those games need it.
Except those games do not need that; that has been proven to you already in this very thread by @bug, and it is readily disproven by casually googling these very games being played at 4K, with reviews and gameplay videos showing them running just fine.

Nobody wants to play a game with stutters and other problems related to not having enough free VRAM.
Good thing that isn't a problem with any game currently on the market; 8GB is currently sufficient for 4K.

About RE2 Remake showing memory usage wrong: I wonder if GPU-Z is also showing it wrong then, because I used GPU-Z the last time, just to check, and Windows Task Manager just to see the usage.
What did you think we were talking about? RE2R "consumes" large amounts of VRAM because it reserves way more than it actually needs. Much of that VRAM is unused, as is evident from the fact that lower-VRAM cards run the game fine without stuttering.

Let me help you here, Metroid: you came here claiming that games need more than 8GB of VRAM to play sufficiently at 4K. That has been proven false by information posted by other users. You have yet to post anything that backs up your claims. The burden of proof is on those making the claims. That's you.

Since you seem so sure about this, how about you record a video on your computer of the games you are talking about, show the settings you are using, and run an FCAT test and an FPS test for us, using MSI Afterburner to verify VRAM usage and FPS results. It shouldn't take more than 10 minutes to run the benchmarks and a bit of time to post the resulting video to YouTube. It doesn't need to be edited or anything, as long as it contains proof of what you are claiming.
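(Aside on the allocated-vs-used point: on NVIDIA cards, the number most tools display comes from driver counters such as NVML's, which report memory allocated on the card, not what a game actively touches each frame. A minimal sketch, assuming the NVML headers from the CUDA toolkit are available:)

```cpp
// Sketch: query VRAM via NVML (ships with NVIDIA drivers / CUDA toolkit).
// nvmlDeviceGetMemoryInfo reports memory *allocated* on the card, which is
// exactly why in-game "usage" numbers like RE2R's overstate real need.
// Link with -lnvidia-ml.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);  // first GPU in the system

    nvmlMemory_t mem;
    nvmlDeviceGetMemoryInfo(dev, &mem);
    std::printf("VRAM: %llu MiB allocated of %llu MiB total\n",
                (unsigned long long)(mem.used >> 20),
                (unsigned long long)(mem.total >> 20));

    nvmlShutdown();
    return 0;
}
```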
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
I'm not sure why someone would think that these are anything but professional cards. "Weird" RAM aside, a GTC reveal alone makes it pretty clear.
Somebody did, and this was my answer, bro. Besides, as mentioned, these are samples. We don't know which segment these will end up in, or whether they will keep these RAM capacities. It all may change, you know, depending on the tiers NV goes with. Who knows what will happen? I surely don't. We know new cards from NV are around the corner.
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
Because it is a lot? Why would you need 24 or 47 GB in a graphics card for gaming? That is why it is weird, and maybe these cards are workstation cards of some sort, not gaming.
But why did almost everyone here assume this is a desktop gaming card? :) Is it mentioned somewhere in the leak or what?

Both leaked cards look like next-gen top Quadro models. RTX 6000 and 8000 had 24 and 48GB RAM respectively.
 
Joined
Oct 8, 2015
Messages
774 (0.23/day)
Location
Earth's Troposphere
System Name 3 "rigs"-gaming/spare pc/cruncher
Processor R7-5800X3D/i7-7700K/R9-7950X
Motherboard Asus ROG Crosshair VI Extreme/Asus Ranger Z170/Asus ROG Crosshair X670E-GENE
Cooling Bitspower monoblock ,custom open loop,both passive and active/air tower cooler/air tower cooler
Memory 32GB DDR4/32GB DDR4/64GB DDR5
Video Card(s) Gigabyte RX6900XT Alphacooled/AMD RX5700XT 50th Aniv./SOC(onboard)
Storage mix of sata ssds/m.2 ssds/mix of sata ssds+an m.2 ssd
Display(s) Dell UltraSharp U2410 , HP 24x
Case mb box/Silverstone Raven RV-05/CoolerMaster Q300L
Audio Device(s) onboard/onboard/onboard
Power Supply 3 Seasonics, a Delta Electronics, a Fractal Design
Mouse various/various/various
Keyboard various wired and wireless
VR HMD -
Software W10.something or other, all 3
And nowhere does it say those two alleged products have video outputs.
 
Joined
May 31, 2016
Messages
4,440 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
But why did almost everyone here assume this is a desktop gaming card? :) Is it mentioned somewhere in the leak or what?

Both leaked cards look like next-gen top Quadro models. RTX 6000 and 8000 had 24 and 48GB RAM respectively.
I said it is a professional card due to the RAM capacity.
 
Joined
Mar 10, 2014
Messages
1,793 (0.45/day)
If Intel really shows up, OpenCL will get a massive boost. Intel went FreeSync, or rather VESA AFR (HDMI uses it too), and NV suddenly stopped requiring port-corrupting hardware for their AFR support.

NV has refused to support OpenCL 2.0 to force apps that need the newer functions onto CUDA. If they weren't scared, they'd enable the support.

As for performance, a Radeon VII will smack around a 2080 Ti in OpenCL workloads.

For mining on GPUs, the 290s, 390s, and Vegas were gods.

NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.

Intel uses its own SYCL-based oneAPI DPC++, not OpenCL. While one can run OpenCL/CUDA/SYCL code through a wrapper, it's still better to use direct oneAPI code with Intel hardware. I'm not sure how easy it will be to migrate from oneAPI to SYCL/OpenCL once you have done your coding for Intel. So all in all, Intel showing up won't necessarily give OpenCL any boost; rather, it may deprecate it even further (and by deprecating I mean things like Apple moving everything to Metal; Intel might give their OpenCL support Nvidia-like second-class-citizen status).

And what do you mean by AFR, some multi-card rendering method, or did you mix it up with VRR? VESA VRR and HDMI Forum VRR are different things; HDMI Forum VRR is currently supported by console manufacturers and Nvidia, while AMD's support for it is still pending.

I don't think a CUDA license carries any fee, but Nvidia can lock hardware to themselves with CUDA.
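(For context, a minimal sketch of what oneAPI/DPC++ code looks like, assuming a SYCL 2020 compiler such as Intel's icpx. The single-source style, with no separate kernel string, is the main ergonomic difference from raw OpenCL.)

```cpp
// Minimal SYCL 2020 / DPC++ sketch: single-source, no kernel strings, which is
// the productivity argument for oneAPI over raw OpenCL.
// Compile with e.g. Intel's DPC++: icpx -fsycl add_one.cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;  // picks a default device (GPU if available)

    const size_t n = 1024;
    float* data = sycl::malloc_shared<float>(n, q);  // USM: host+device visible
    for (size_t i = 0; i < n; ++i) data[i] = 0.0f;

    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        data[i] += 1.0f;  // runs on the device
    }).wait();

    std::printf("data[0] = %f\n", data[0]);  // expect 1.0
    sycl::free(data, q);
    return 0;
}
```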
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
NV is scared because they pull in loads of cash from CUDA licensing. OpenCL torpedoes that.
Imagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
 
Joined
Aug 8, 2019
Messages
430 (0.22/day)
System Name R2V2 *In Progress
Processor Ryzen 7 2700
Motherboard Asrock X570 Taichi
Cooling W2A... water to air
Memory G.Skill Trident Z3466 B-die
Video Card(s) Radeon VII repaired and resurrected
Storage Adata and Samsung NVME
Display(s) Samsung LCD
Case Some ThermalTake
Audio Device(s) Asus Strix RAID DLX upgraded op amps
Power Supply Seasonic Prime something or other
Software Windows 10 Pro x64
Intel uses its own SYCL-based oneAPI DPC++, not OpenCL. While one can run OpenCL/CUDA/SYCL code through a wrapper, it's still better to use direct oneAPI code with Intel hardware. I'm not sure how easy it will be to migrate from oneAPI to SYCL/OpenCL once you have done your coding for Intel. So all in all, Intel showing up won't necessarily give OpenCL any boost; rather, it may deprecate it even further (and by deprecating I mean things like Apple moving everything to Metal; Intel might give their OpenCL support Nvidia-like second-class-citizen status).

And what do you mean by AFR, some multi-card rendering method, or did you mix it up with VRR? VESA VRR and HDMI Forum VRR are different things; HDMI Forum VRR is currently supported by console manufacturers and Nvidia, while AMD's support for it is still pending.

I don't think a CUDA license carries any fee, but Nvidia can lock hardware to themselves with CUDA.

According to Intel, all of their GPUs from 2010 on support OpenCL.

I run OpenCL on my 7700K's UHD630 without any translation. I only need to have the drivers enabled. Intel also offers SDK support for their FPGAs to do OpenCL.

CUDA costs a bunch because you pay for the hardware; there is only one supplier of hardware that can run CUDA. I'd actually be really interested in wrapping CUDA and running it on non-NV hardware, but NV has never shied away from locking their software down as hard as possible.

I remember when I could have PhysX on while using a Radeon GPU to do the drawing.

I'm also very sure it would be very easy for NV to turn on OCL 2 support.

Edit: Isn't oneAPI open? I thought it was basically OpenCL 3... I have to look into it more.

Edit 2: oneAPI is basically a unified open standard that offers full cross-platform use. According to Phoronix articles, porting it to AMD will be easy because Intel and AMD both use open-source drivers; NV, on the other hand, locks the good stuff up with closed-source drivers on Linux.

Edit 3: Too many abbreviations in my head; I meant VRR instead of AFR. Somehow it became Adaptive Frame Rendering... LoL. I meant the lovely piece of hardware that NV required for VRR rather than just supporting the DisplayPort spec. I mean, it's kind of awesome that the Xbox One X supports VRR if you plug it into a compatible display.

Imagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.

Free to use on Nvidia hardware. AMD has to jump through hoops just to emulate some small parts.

Nvidia also completely gimps the GPGPU performance of their more affordable GPUs. Want that performance? The cheapest option available to normal folk is the $3,000 USD Titan V.

Yeah, you can use the older, less functional and capable OpenCL 1.2. Want those newer features... well, it's CUDA only on NV.

Yeah... Free... :rolleyes:

At least I'm a hater who uses GeForces and Quadros. :pimp:
 
Last edited:
Joined
Oct 10, 2018
Messages
943 (0.42/day)
When they launch this generation, I think it will be very hard for AMD to climb back up. They are at least 4 years away from competing; that is like 2 generations away.
 
Joined
Jun 28, 2016
Messages
3,595 (1.16/day)
Free to use on Nvidia hardware. AMD has to jump through hoops just to emulate some small parts.
CUDA is part of the ecosystem you buy into. But it's free to use.
Furthermore, you may or may not use it (since there are alternatives), so you actually have a choice (which you don't get with some other products).
And you can use it even when you don't own the hardware; this is not always the case.

In other words: there are no downsides. I honestly don't understand why people moan so much about CUDA (other than general hostility towards Nvidia).
Nvidia also completely gimps the GPGPU performance of their more affordable GPUs. Want that performance? The cheapest option available to normal folk is the $3,000 USD Titan V.
That's absolutely not true. What you mean is FP64. But some software uses it and some doesn't. It's just an instruction set.
One could say AMD gimps AVX-512 on all of their CPUs.

Many professional/scientific scenarios are fine with FP16.
Phoronix tested some GPUs in PlaidML, which is probably the most popular non-CUDA neural network framework.
Two things to observe there: how multiple Nvidia GPUs perform in FP16, and, as a tasty bonus, how they perform in OpenCL compared to Polaris.
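(Rough numbers on the FP64 point, pulled from public spec sheets; the ratios matter more than the exact TFLOPS figures.)

```cpp
// Approximate FP64 throughput from public FP32 specs and per-die FP64 ratios.
// Spec-sheet estimates, not measurements.
#include <cstdio>

int main() {
    struct Card { const char* name; double fp32_tflops; double fp64_ratio; };
    const Card cards[] = {
        {"GeForce RTX 2080 Ti", 13.4, 1.0 / 32.0},  // consumer Turing: 1/32 rate
        {"Titan V (GV100)",     14.9, 1.0 / 2.0},   // full FP64 silicon: 1/2 rate
    };
    for (const Card& c : cards)
        std::printf("%-20s FP32 %.1f TFLOPS -> FP64 ~%.2f TFLOPS\n",
                    c.name, c.fp32_tflops, c.fp32_tflops * c.fp64_ratio);
    // ~0.42 vs ~7.45 TFLOPS: the FP64 gap, not FP32, is what costs $3000.
    return 0;
}
```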
At least I'm a hater who uses GeForces and Quadros. :pimp:
I don't understand why people raise this argument. It simply makes you a miserable hater.
 

bug

Joined
May 22, 2015
Messages
13,843 (3.95/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Imagine a world where Nvidia haters actually learn something about the products/company they attack. :eek:

CUDA is free to use (also commercially).
Furthermore, Nvidia cards obviously can run OpenCL programs, so it's not like anyone's forced to use CUDA.
Well, I'm not sure if Nvidia drivers do OpenCL 2.0. There was preliminary support like 3 years ago, but I haven't heard anything about it since.

The point is moot though; the world seems to be set on CUDA by now. More precisely, the world seems to be set on anything that's not OpenCL.
 
Joined
Mar 4, 2005
Messages
3,629 (0.50/day)
System Name TheReactor / HTPC
Processor AMD 7800x3d 5050Mhz / Intel 10700kf (5.1ghz All Core)
Motherboard ASrock x670e Taichi / ROG Strix z490-e gaming
Cooling HeatKiller VI CPU/GPU Block -2xBlackIce GTX 360 Radiators - Swiftech MCP655 Pump
Memory 32GB G.Skill 6000Mhz DDR5 / 32GB G.Skill 3400Mhz DDR4
Video Card(s) Nvidia 3090ti / Nvidia 2080ti
Storage Crucial T700 2TB Gen 5 / Samsung Evo 2Tb
Display(s) Acer Predator xb271hu - 2560x1440 @144hz
Case Corsair 550
Audio Device(s) on board
Power Supply Antec Quattro 1000W
Mouse Logitech G502
Keyboard Corsair Gaming k70
Software Windows 10 Pro 64bit
Wow, with that much GPU RAM, your system RAM should typically be double the GPU RAM. Finally, a reason to have more than 16GB of system RAM.
 
Joined
Jun 29, 2011
Messages
455 (0.09/day)
System Name ---
Processor Ryzen 1600
Motherboard ASRock Taichi X370
Cooling Noctua D15
Memory G.Skill 3200 DDR4 2x8GB
Video Card(s) EVGA 1080 TI SC
Storage 500GB Samsung Evo 970 NVMe + 860 Evo 2TB SSD + 5x 2TB HDDs
Display(s) LG CX 65"
Case Phanteks P600S (white)
Audio Device(s) Onboard
Power Supply Corsair RM850x (white)
Wow, with that much GPU RAM, your system RAM should typically be double the GPU RAM. Finally, a reason to have more than 16GB of system RAM.

I've never heard of this rule of thumb/correlation between these two.
 
Joined
Jan 31, 2011
Messages
238 (0.05/day)
Processor 3700X
Motherboard X570 TUF Plus
Cooling U12
Memory 32GB 3600MHz
Video Card(s) eVGA GTX970
Storage 512GB 970 Pro
Case CM 500L vertical
Well, I'm not sure if Nvidia drivers do OpenCL 2.0. There was preliminary support like 3 years ago, but I haven't heard anything about it since.

The point is moot though; the world seems to be set on CUDA by now. More precisely, the world seems to be set on anything that's not OpenCL.

After Apple, the de facto chair of the Khronos Group (the OpenCL committee/standards body), burned OpenCL, which Apple themselves created, in favor of their own proprietary "Metal," who is going to have faith in OpenCL's development?
 
Joined
Mar 4, 2005
Messages
3,629 (0.50/day)
System Name TheReactor / HTPC
Processor AMD 7800x3d 5050Mhz / Intel 10700kf (5.1ghz All Core)
Motherboard ASrock x670e Taichi / ROG Strix z490-e gaming
Cooling HeatKiller VI CPU/GPU Block -2xBlackIce GTX 360 Radiators - Swiftech MCP655 Pump
Memory 32GB G.Skill 6000Mhz DDR5 / 32GB G.Skill 3400Mhz DDR4
Video Card(s) Nvidia 3090ti / Nvidia 2080ti
Storage Crucial T700 2TB Gen 5 / Samsung Evo 2Tb
Display(s) Acer Predator xb271hu - 2560x1440 @144hz
Case Corsair 550
Audio Device(s) on board
Power Supply Antec Quattro 1000W
Mouse Logitech G502
Keyboard Corsair Gaming k70
Software Windows 10 Pro 64bit
I've never heard of this rule of thumb/correlation between these two.
Not sure it is a hard rule, but I remember that your GPU will carve out an equal amount of system RAM, if available, for shadow RAM. Might be a myth.

"A quick rule of thumb is that you should have twice as much system memory as your graphics card has VRAM, so a 4GB graphics card means you'd want 8GB or more system memory, and an 8GB card ideally would have 16GB of system memory "

 