
NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
A 50% perf/watt uplift from Navi puts a 5700XT class card at 3080 performance for around 300W, based on some back-of-the-envelope math.
It does, but I don't quite think 50% will be an average number, especially since unlike Nvidia AMD doesn't have the benefit of a node shrink. There's also a question of whether AMD will be willing to go big enough on their top end die. Those decisions were likely made two years ago, so it'll be interesting to see where they placed their bets.
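For what it's worth, the back-of-the-envelope math in the quoted post can be sketched out like this (the ~225 W 5700 XT board power and the assumption that a 3080 lands at roughly 2x a 5700 XT are assumptions for illustration, not confirmed figures):

```python
# Rough perf/watt sketch. All inputs are assumed/illustrative, not official specs.
navi10_power_w = 225        # assumed 5700 XT board power
perf_per_watt_uplift = 1.5  # AMD's claimed +50% perf/watt for RDNA2
target_perf_scale = 2.0     # assume a 3080 is roughly 2x a 5700 XT

# Power needed = (performance scale / perf-per-watt scale) * baseline power
projected_power_w = target_perf_scale / perf_per_watt_uplift * navi10_power_w
print(projected_power_w)  # roughly 300 W
```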




On a different topic, after processing those massive CUDA core counts for a couple of hours I'm now wondering if Ampere is the generation where Nvidia's gaming performance/Tflop comes crashing down. No doubt they'll still be powerful, but doubling the ALUs and leaving everything else the same is bound to create heaps of bottlenecks.




DirectStorage aims to reduce IO overhead, not necessarily memory requirements; 1 GB of textures is still going to be 1 GB of textures, it'll just load more efficiently. Just because an engine no longer needs to load as many things ahead of time doesn't mean the memory won't fill up with something else; in fact that's the goal: to allow for an increase in the amount of assets used.
But that's the thing, isn't it - if you load textures more efficiently, i.e. you stop loading ones you don't actually need, you inherently reduce the memory footprint, as you are by default loading fewer textures. Sure, you can then load other things more aggressively, but wouldn't it then make sense to use the same JIT principle for those loads as well? And what other data is supposed to fill several GB of VRAM? Reducing the texture prefetch time from an assumed 1-2s (HDD speed) to 0.1s or even less (NVMe SSD speed) can lead to dramatic drops in the amount of texture data that needs to be in memory. I'm obviously not saying this will necessarily result in dramatic across-the-board drops in VRAM usage, but it's well documented that current VRAM usage is massively bloated and wasteful, and not actually necessary to sustain or even increase performance.
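As a toy illustration of that prefetch argument (every number below is a made-up assumption, not a measurement): the texture data you must keep resident scales roughly with how far ahead you have to prefetch.

```python
# Toy model: resident texture buffer ~ streaming rate x prefetch lookahead.
# All numbers are illustrative assumptions, not measurements.
stream_rate_mb_s = 500   # assumed rate at which a scene consumes new texture data
hdd_lookahead_s = 2.0    # prefetch window needed to hide HDD latency
nvme_lookahead_s = 0.1   # prefetch window needed with a fast NVMe SSD

hdd_buffer_mb = stream_rate_mb_s * hdd_lookahead_s    # data held "just in case"
nvme_buffer_mb = stream_rate_mb_s * nvme_lookahead_s  # far smaller resident set
print(hdd_buffer_mb, nvme_buffer_mb)
```

Under those assumptions the resident buffer shrinks by the same 20x factor as the lookahead window, which is the core of the argument above.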

Second, the 2080 Ti has more memory bandwidth than the 3070. That's why the 3070 needs a lot more CUDA cores.
That is literally the opposite of how this works. More cores necessitate more memory bandwidth for the cores to have data to work on. That would be like compensating for your car having no wheels by giving it a more powerful engine.
 
Joined
Jul 1, 2011
Messages
362 (0.07/day)
System Name Matar Extreme PC.
Processor Intel Core i9-12900KS 5.3GHZ All P-Cores ,4.2GHZ All E-Cores & Ring 4.2GhZ
Motherboard NZXT N5 Z690 Wi-Fi 6E
Cooling CoolerMaster ML240L V2 AIO with MX6
Memory 4x16 64GB DDR4 3600MHZ CL16-19-19-36-55 G.SKILL Trident Z NEO
Video Card(s) Nvidia ZOTAC RTX 3080 Ti Trinity + overclocked 100 core 1000 mem. Re-pasted MX6
Storage WD black 1GB Nvme OS + 1TB 970 Nvme Samsung & 4TB WD Blk 256MB cache 7200RPM
Display(s) Lenovo 34" Ultra Wide 3440x1440 144Hz 1ms G-Sync
Case NZXT H510 Black with Cooler Master RGB Fans
Audio Device(s) Internal , EIFER speakers & EasySMX Wireless Gaming Headset
Power Supply Aurora R9 850Watts 80+ Gold, I Modded cables for it.
Mouse Onn RGB Gaming Mouse & Logitech G923 & shifter & E-Break Sim setup.
Keyboard GOFREETECH RGB Gaming Keyboard, & Xbox 1 X Controller & T-Flight Hotas Joystick
VR HMD Oculus Rift S
Software Windows 10 Home 22H2
Benchmark Scores https://www.youtube.com/user/matttttar/videos
RTX 3080 is the sweet spot.
 
Joined
Jun 28, 2018
Messages
299 (0.13/day)
I think Nvidia is somewhat revising what counts as a CUDA core with the introduction of these new shaders. I don't think it will be directly comparable to the CUDA cores of the previous generation.

Anyway, soon we should have it all dissected.
 
Joined
Jan 24, 2011
Messages
287 (0.06/day)
Processor AMD Ryzen 5900X
Motherboard MSI MAG X570 Tomahawk
Cooling Dual custom loops
Memory 4x8GB G.SKILL Trident Z Neo 3200C14 B-Die
Video Card(s) AMD Radeon RX 6800XT Reference
Storage ADATA SX8200 480GB, Inland Premium 2TB, various HDDs
Display(s) MSI MAG341CQ
Case Meshify 2 XL
Audio Device(s) Schiit Fulla 3
Power Supply Super Flower Leadex Titanium SE 1000W
Mouse Glorious Model D
Keyboard Drop CTRL, lubed and filmed Halo Trues
It does, but I don't quite think 50% will be an average number, especially since unlike Nvidia AMD doesn't have the benefit of a node shrink. There's also a question of whether AMD will be willing to go big enough on their top end die. Those decisions were likely made two years ago, so it'll be interesting to see where they placed their bets.

That's fair, but AMD has been pretty honest about their projected performance under Su. Should be interesting!

On a different topic, after processing those massive CUDA core counts for a couple of hours I'm now wondering if Ampere is the generation where Nvidia's gaming performance/Tflop comes crashing down. No doubt they'll still be powerful, but doubling the ALUs and leaving everything else the same is bound to create heaps of bottlenecks.

There's definitely a big architectural change there that I'm interested to hear about. At a very high, naive level it seems like a move toward a more GCN-like layout, or rather like AMD and NVIDIA are converging a bit in terms of general shader design.
 
Joined
Feb 1, 2013
Messages
1,265 (0.29/day)
System Name Gentoo64 /w Cold Coffee
Processor 9900K 5.2GHz @1.312v
Motherboard MXI APEX
Cooling Raystorm Pro + 1260mm Super Nova
Memory 2x16GB TridentZ 4000-14-14-28-2T @1.6v
Video Card(s) RTX 4090 LiquidX Barrow 3015MHz @1.1v
Storage 660P 1TB, 860 QVO 2TB
Display(s) LG C1 + Predator XB1 QHD
Case Open Benchtable V2
Audio Device(s) SB X-Fi
Power Supply MSI A1000G
Mouse G502
Keyboard G815
Software Gentoo/Windows 10
Benchmark Scores Always only ever very fast
I think Nvidia is somewhat revising what counts as a CUDA core with the introduction of these new shaders. I don't think it will be directly comparable to the CUDA cores of the previous generation.

Anyway, soon we should have it all dissected.
They are taking a page from AMD's Bulldozer and Piledriver days, obviously not to catch up, but to extend their lead even further. As someone already said, it's probably not easy to keep the extra ALUs completely fed, thereby losing some of the scaling.
 
Joined
Jul 5, 2008
Messages
337 (0.06/day)
System Name Roxy
Processor i7 5930K @ 4.5GHz (167x27 1.35V)
Motherboard X99-A/USB3.1
Cooling Barrow Infinity Mirror, EK 45x420mm, EK X-Res w 10W DDC
Memory 2x16GB Patriot Viper 3600 @3333 16-20-20-38
Video Card(s) XFX 5700 XT Thicc III Ultra
Storage Sabrent Rocket 2TB, 4TB WD Mechanical
Display(s) Acer XZ321Q (144Hz Freesync Curved 32" 1080p)
Case Modded Cosmos-S Red, Tempered Glass Window, Full Frontal Mesh, Black interior
Audio Device(s) Soundblaster Z
Power Supply Corsair RM 850x White
Mouse Logitech G403
Keyboard CM Storm QuickFire TK
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/e5uz5f
When he got the 3090 out of the oven!

A nice nod to using an oven to fix the half-baked solder on the 8800 GTX.
 
Joined
Sep 11, 2015
Messages
624 (0.19/day)
You can always use TFLOPS for that: pick any two random GPUs, compare their TFLOPS ratings, then the actual performance. I'll bet you anything that probably 80% of the time the GPU with more TFLOPS will be faster in the real world as well. This time around Nvidia is doing something finicky with the way they count CUDA "cores" - I put that in quote marks because they were never real cores (same with AMD's stream processors); it's the SM/CU that's the real "core" of the GPU. But for some reason they chose to be even more inconsistent about what that means. Probably to make it look more impressive.

Nvidia would sure like you to believe that. Shading languages don't run on specialized hardware, they can't, they need generic all-purpose processors.

The TFLOPS he was referring to were the 20 TFLOPS of the 3070 compared to the 13 TFLOPS of the 2080 Ti. If these cards have equivalent performance, TFLOPS doesn't matter!!!

And of course they are using specialized hardware! Do you actually think they are going to waste general-purpose CPUs just to compute graphics??? And you probably know that CPUs aren't even that good at those kinds of computations; that's the reason we have GPUs in the first place. Your argument doesn't even make any sense for that reason. And just to add to that, a GPU has many thousands of little processing cores that are all the same, all doing pretty much the same matrix computations and manipulations for those graphics. That's a far cry from what a general-purpose CPU does, to say the least. How would Nvidia even hide something like this?

The part about shading language blatantly makes no sense.
 
Last edited:
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Soo, when is NDA up on reviews.

Release Day?
 
Joined
Mar 23, 2016
Messages
4,841 (1.53/day)
Processor Core i7-13700
Motherboard MSI Z790 Gaming Plus WiFi
Cooling Cooler Master RGB something
Memory Corsair DDR5-6000 small OC to 6200
Video Card(s) XFX Speedster SWFT309 AMD Radeon RX 6700 XT CORE Gaming
Storage 970 EVO NVMe M.2 500GB,,WD850N 2TB
Display(s) Samsung 28” 4K monitor
Case Phantek Eclipse P400S
Audio Device(s) EVGA NU Audio
Power Supply EVGA 850 BQ
Mouse Logitech G502 Hero
Keyboard Logitech G G413 Silver
Software Windows 11 Professional v23H2
Joined
Sep 11, 2015
Messages
624 (0.19/day)
There are already a 3070 Super and a 3080 Ti listed on TPU, with 16 GB and 20 GB VRAM respectively! Also probably a significant performance boost. As if Nvidia already knew people would complain about the VRAM. I assume they'll cost a premium compared to these "low"-VRAM versions, unfortunately... A big reason the 3090 costs $1500 is the 24 GB of VRAM. But who knows; NVIDIA hasn't even mentioned them yet.

I wonder where TPU gets this information.

https://www.techpowerup.com/gpu-specs/geforce-rtx-3070-super.c3675
https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-ti.c3581
 
Last edited:
Joined
Jan 31, 2011
Messages
2,210 (0.44/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Rog Strix Impact 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
So the CUDA cores are doing double calculations, and marketing needed them to look good on paper, so they doubled the numbers?
 
Joined
Sep 11, 2015
Messages
624 (0.19/day)
So why then are they saying that the 3070 is for 1440p? It's interesting; reviews will tell all.
Because they are probably just being honest. The 2080 Ti was never truly a 4K card. Especially now that we have something like the 3090 that will really handle 4K easily, I assume. It was even called a 4K/8K card in the presentation, but I'd be very skeptical about the 8K part. Honest on one side, but then dishonest again on the other. Classic marketing.
 
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The TFLOPS he was referring to was 13 TFLOPS of the 3070 compared to 20 TFLOPS of the 2080 ti. If these cards have equivalent performance, TFLOPS doesn't matter!!!

The 2080 Ti has nowhere near 20 TFLOPS; it has about 13 TFLOPS. TFLOPS and performance are highly correlated; it's the most objective measure of performance possible, whether you like it or not. Rarely do you ever come across a counterexample to that general rule.

Do you actually think they are going to waste general-purpose CPUs just to compute graphics???

GPUs are general purpose. Have been since early 2000s, that's why we have programmable shaders.

And to just add to that, a GPU has many thousands little processing cores that are all the same, all doing pretty much the same exact matrix computations and manipulations for those graphics. That's a far cry from what a general-purpose CPU does, to say the least.

First of all, like I said, these things don't really have thousands of cores, but I'm not going to go into that; the point is that the analogue of a core is the SM. They do 4x4 matrix arithmetic if you choose to program that; they might as well do something else, which they often do within shaders, because they're general purpose.


The part about shading language blatantly makes no sense.


No, it makes perfect sense; you think that because you've probably never seen a shader and don't know what I'm talking about.

This is some random GLSL shader I found on the internet:

[attached screenshot: GLSL shader source]


A lot more than matrix multiplication, huh? It's basically C code, and you can't run C on special-purpose hardware; you need a fairly robust ISA and control logic, just like in a typical CPU. A GPU core is very similar to a CPU core; they're just optimized differently.

How would Nvidia even hide something like this?

I don't know what you are on about, you make it sound like it's some sort of conspiracy. It's really funny.
 
Joined
Sep 11, 2015
Messages
624 (0.19/day)
The 2080 Ti has nowhere near 20 TFLOPS; it has about 13 TFLOPS. TFLOPS and performance are highly correlated; it's the most objective measure of performance possible, whether you like it or not. Rarely do you ever come across a counterexample to that general rule.
It's the other way around: the 3070 has 20 TFLOPS and the 2080 Ti has 13 TFLOPS... You could have just read the original post and figured that out by now. Even if I miswrote the order, TFLOPS still don't predict performance if these two cards have very similar performance.

First of all like I said these things don't really have thousands of cores but I'm not going to go into that, the point is that the analogous of a core is the SM. They do 4x4 matrix arithmetic if you chose to program that, they might as well do something else, which they do often within shaders because they're general purpose.

No, it's not. When I (and most people) say cores on a GPU, I mean shading units. The 3080 has 8704 cores in this case. They all work in parallel, because a GPU makes use of parallel computing WAY more than a CPU does. That is the difference that makes the whole GPU very different from a CPU.

And that C code runs purely on the GPU? Are you so sure of that? C is run on the CPU and the CPU eventually just controls the GPU...
 
Last edited:
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
TFLOPS still don't predict performance.

It predicts performance incredibly well, strikingly so. I know people get angry about that but it's the truth. Size matters, or in this case TFLOPS.

It's the other way around. 3070 has 20 TFLOPS and 2080 ti has 13 TFLOPS... You could just read the original post about that and figure that out by now.

You don't get it, even if you go by Nvidia's numbers, the GPU with more TFLOPS is the faster one.


5888*2*1730 = ~20 TFLOPS

Nvidia claims the 3070 is faster than the 2080ti and guess what, the 3070 has more TFLOPS. Tada !
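The figure in that post comes from the standard peak-FP32 estimate: shader count × 2 FLOPs per clock (one FMA) × boost clock. A quick sketch with the published shader counts and reference boost clocks:

```python
def fp32_tflops(shaders, boost_mhz):
    """Peak FP32 throughput: shaders x 2 FLOPs/cycle (fused multiply-add) x clock."""
    return shaders * 2 * boost_mhz * 1e6 / 1e12

rtx_3070 = fp32_tflops(5888, 1730)     # ~20.4 TFLOPS
rtx_2080_ti = fp32_tflops(4352, 1545)  # ~13.4 TFLOPS (reference boost clock)
print(round(rtx_3070, 1), round(rtx_2080_ti, 1))
```

Note this is a theoretical peak, not a performance prediction; whether a given architecture can actually feed all those ALUs is exactly what the thread is arguing about.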
 
Joined
Sep 11, 2015
Messages
624 (0.19/day)
It predicts performance incredibly well, strikingly so. I know people get angry about that but it's the truth. Size matters, or in this case TFLOPS.



You don't get it, even if you go by Nvidia's numbers, the GPU with more TFLOPS is the faster one.


5888*2*1730 = ~20 TFLOPS

Nvidia claims the 3070 is faster than the 2080ti and guess what, the 3070 has more TFLOPS. Tada !
It's not 50% faster, as it should be if you just compared the TFLOPS! Your point still makes no sense. I'm pretty sure it'll be maybe 10% faster if you're lucky. So yeah, if you're off by 40%, that's not predicting performance. Everyone I've heard is saying it's going to be pretty much the same performance.
 
Last edited:
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It's not 50% faster as it should be if you just compare the TFLOPS! Your point still makes no sense.

Did you hear me say it's exactly 50% or whatever? I said higher TFLOPS means higher performance in general, which is true. I don't know why you're so reluctant to accept it.

No, it's not. When I (and most people) say cores on a GPU, I mean Shading Units. The 3080 has 8704 cores in this case.

A core needs to fetch, decode, and execute instructions on its own; CUDA cores, or whatever Nvidia calls them, don't do that - that's just marketing. Functionally speaking, the SM is the core in a GPU. Have you noticed how Nvidia never says "core" but always makes sure to write "CUDA core"? It's because they're not really cores; they're something else. They don't even do any shading; a CUDA core is just an FP32 unit.

And that C code runs purely on the GPU? Are you so sure of that? C is run on the CPU and the CPU eventually just controls the GPU...

Yes, it runs purely on the GPU, instruction by instruction, for each instance of the shader. Look, man, you are clearly not knowledgeable about these things; that's fine. You can either take my word for it or look all of this up on your own.
 
Joined
Sep 11, 2015
Messages
624 (0.19/day)
Did you hear me say it's exactly 50% or whatever? I said higher TFLOPS means higher performance in general, which is true. I don't know why you're so reluctant to accept it.
Because no one even talked about "higher means higher". The poster I was referring to before you interjected was questioning the 20 TFLOPS vs 13 TFLOPS..... You still just don't seem to get it.

A core needs to fetch, decode, and execute instructions on its own; CUDA cores, or whatever Nvidia calls them, don't do that - that's just marketing. Functionally speaking, the SM is the core in a GPU. Have you noticed how Nvidia never says "core" but always makes sure to write "CUDA core"? It's because they're not really cores; they're something else. They don't even do any shading; a CUDA core is just an FP32 unit.
And I never talked about CUDA cores; you're just strawmanning me again. I'm talking about shading units, which do reflect the performance: by having more, you get faster GPUs. SMs aren't even the "cores"; they are just arrays of shading units, which do the actual work.

Yes, it runs purely on the GPU. Look man, you are clearly not knowledgeable about these things, that's fine. You can either take my word for it or look all of this up on your own.
That is garbage. The CPU always works together with the GPU; the CPU instructs the GPU to do things all the time. I think you are way less knowledgeable than you believe.
 
Last edited:
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Because no one even talked about "higher means higher". The poster I was referring to before you interjected was questioning the 20 TFLOPS vs 13 TFLOPS..... You still just don't seem to get it.

I'll lay it out as simply as I can:

You said that you can't predict performance with TFLOPS, except you can: given a value, you can tell with a fairly good degree of accuracy whether it will be faster than an existing GPU. How that doesn't qualify as a prediction, only you know.
 
Joined
Jun 29, 2011
Messages
455 (0.09/day)
System Name ---
Processor Ryzen 1600
Motherboard ASRock Taichi X370
Cooling Noctua D15
Memory G.Skill 3200 DDR4 2x8GB
Video Card(s) EVGA 1080 TI SC
Storage 500GB Samsung Evo 970 NVMe + 860 Evo 2TB SSD + 5x 2TB HDDs
Display(s) LG CX 65"
Case Phanteks P600S (white)
Audio Device(s) Onboard
Power Supply Corsair RM850x (white)
There are already a 3070 Super and a 3080 Ti listed on TPU, with 16 GB and 20 GB VRAM respectively! Also probably a significant performance boost. As if Nvidia already knew people would complain about the VRAM. I assume they'll cost a premium compared to these "low"-VRAM versions, unfortunately... A big reason the 3090 costs $1500 is the 24 GB of VRAM. But who knows; NVIDIA hasn't even mentioned them yet.

I wonder where TPU gets this information.

https://www.techpowerup.com/gpu-specs/geforce-rtx-3070-super.c3675
https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-ti.c3581

Why is stuff like this even on the main page? The more I look at TPU the more its credibility takes a hit with me. Should Reddit speculation be siteworthy?
 

Nkd

Joined
Sep 15, 2007
Messages
364 (0.06/day)
I think Navi is going to struggle catching this 3080 to be honest. AMD has yet to surpass 2070S performance convincingly, and now they're making an 80% jump ahead? Not likely, unless they make something absolutely gargantuan. But let's not dive into the next pond of speculation... my heart... :p

By the by, do we have TDPs for these Ampere releases already? The real numbers?

I am going by pure data that's out there and what's rumored. If big Navi has at minimum double the CUs of the 5700 XT, that will get it right close to 3080 territory. There should also be other tweaks made to increase IPC, and big Navi should reach fairly high clock speeds, given the speeds on the Xbox Series X and how efficient that chip is as an APU.

I suspect them to compete with the 3080 at minimum. Nvidia seems to have done right here by pricing the 3080 at 699.99. That does put AMD in a tough spot; they will have to undercut Nvidia even at the same speed. They would have to be faster to sell close to 699.99-750.
 
Joined
Sep 11, 2015
Messages
624 (0.19/day)
I'll lay it out as simply as I can:

You said that you can't predict performance with TFLOPS, except you can: given a value, you can tell with a fairly good degree of accuracy whether it will be faster than an existing GPU. How that doesn't qualify as a prediction, only you know.
Because that wasn't even the thing in question... This is getting annoying to discuss with you, because you are obviously trying to completely mischaracterize what I was talking about. Just stop; you missed the point. It's okay, move on. The point is that a potential 3060 could also have many more TFLOPS than the 2080 Ti but still be slower. That's the whole point. It happens to work in this case, but it's still a difference of 50% more TFLOPS for pretty much the same performance on the 3070, so TFLOPS, again, don't reflect the actual PERFORMANCE of the GPU, as I have repeated many times to you...

Why is stuff like this even on the main page? The more I look at TPU the more its credibility takes a hit with me. Should Reddit speculation be siteworthy?
It was there for the 3070, 3080, and 3090, with all the details like this, for at least a week now. And I think that was all correct information, too. I thought that was weird as well.
 
Last edited:

Nkd

Joined
Sep 15, 2007
Messages
364 (0.06/day)
It predicts performance incredibly well, strikingly so. I know people get angry about that but it's the truth. Size matters, or in this case TFLOPS.



You don't get it, even if you go by Nvidia's numbers, the GPU with more TFLOPS is the faster one.


5888*2*1730 = ~20 TFLOPS

Nvidia claims the 3070 is faster than the 2080ti and guess what, the 3070 has more TFLOPS. Tada !

I think what he meant is that it's not as fast as the TFLOPS suggest. 7 more TFLOPS is a lot if you're talking Turing TFLOPS. With Ampere you're actually getting less performance per TFLOP, since the 3070 is not 1.7x the performance of the 2080 Ti. It's almost like GCN's TFLOPS, where you got less gaming performance per TFLOP.

So yes, it has higher TFLOPS, but it's not as fast as they suggest.
 
Joined
Feb 13, 2017
Messages
143 (0.05/day)
The power and heat for the 3080/3090 are really bad, but what killed these cards for me is the VRAM size - just pathetic. The 3070 should have 12GB and the 3080 16GB at this point, and everyone can try to ignore the reality as much as they want, but these cards will severely lack memory, both now and even more in the next few years. Consoles are getting 16GB of GDDR6 and should cost 500 USD for the entire thing, and people are going crazy for a 3070 with just 8GB at the same price. People got so used to the 2000 series' ridiculous pricing that they are now blind and just see the price drops. The 2000 series was Nvidia's true colours when AMD couldn't compete; the current pricing is a direct response to RDNA2 - the arch that will power the next 5 years of consoles and receive the most optimizations from developers, because everything is made for consoles, where the bulk of gamers are, and then ported to PC. Nvidia is scared of becoming another Intel, and I'm loving all of this, since the new Radeons will definitely be more power efficient, age better, apparently have more VRAM, and now have their prices capped by Nvidia's!
 
Joined
Aug 6, 2009
Messages
1,162 (0.21/day)
Location
Chicago, Illinois
how do you all like my new sig? never thought i'd see the day... LMAO
It's dumb. Plenty of us with 2080 Tis will just keep them and put them in our other rigs. Plenty of people who bought 2080 Tis can afford another video card that's just as expensive. Guess who will be buying 3090s? A lot of the same people who bought 2080 Tis. Personally, I'll probably exchange my 2080 Ti for a 3080, since I don't think I need a 3090.
 