
NVIDIA GeForce RTX 3070 and RTX 3070 Ti Rumored Specifications Appear

Joined
May 31, 2016
Messages
4,443 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500GB / Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtek 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Not understanding the angst here.
It's basically the same arch going to a smaller, more efficient, and probably faster process node.
So you get 2080 arch -> 3070, + faster memory, +more power efficient, + a few percent better IPC from the new node

I bet the 3070 is at least 110% the performance of a 2080.
You don't? I'm worried about the price due to the high-bandwidth memory, the capacity, and GDDR6X. As for the comparison (3070 equal to or better than 2080 performance), I wouldn't be so sure. You mentioned it's the same arch with just a shrink, but we basically know nothing about these new NV graphics cards, so I'm not sure where you get this information from. A node shrink gives efficiency or frequency, not IPC, btw. If it is the same arch as Turing, the IPC will be exactly the same.
Like I said, I'm worried about the price, because that one will be higher for sure, while the performance may not justify the bump. Do you get it now?
Look at it as a price/performance ratio.
 
Last edited:
Joined
May 15, 2020
Messages
697 (0.41/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Well, Moore's Law Is Dead talked a few months ago about an engineering sample that was tested with 21 Gbps memory and clocked close to 2.5 GHz.
We'll just have to see which one is the 3070 and which is the 3080; that's what's all over the place in the latest rumors. But maybe that's because it's not decided yet: as usual, it depends on the competition, and the pricing is left to the last minute.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
Good news:



This means these chips will be seen at least 6 months from now, most likely Q2 2021.
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Good news:


This means these chips will be seen at least 6 months from now, most likely Q2 2021.
Why is that good news... if this is true...

I want these in 4Q...
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
Why is that good news... if this is true...

I want these in 4Q...


Well, Nvidia's CEO was right :)

The gamers hoping, wishing, and praying for a new generation of GeForce cards to arrive this week got some bad news from the company’s CEO during a Computex press briefing: The hardware won’t show up for a “long time.”

Nvidia CEO: No next-gen GeForce GPUs for a 'long time,' but G-Sync BFGDs are coming soon


At that time I said don't expect RTX 3000 before H2 2020, most likely H1 2021.
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
Say, if stuff is to be released for CP2077, shouldn't RDNA2 cards be in full swing AIB production already? Yet, there have been no leaks so far.

Is it me, or is the gap between the 3070 Ti and the 3080 rather large?


This makes no sense. You get faster cycles (higher clocks); you don't miraculously get circuits capable of doing something in one cycle if it took two.

Faster transistor switching is not the same as higher clocks. And faster transistor switching does translate into potentially faster completion of instructions.

I'm not going to get into this with you people. Just use Google; there are dozens if not hundreds of references to this.

You might start here :

 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Well, Nvidia's CEO was right :)



Nvidia CEO: No next-gen GeForce GPUs for a 'long time,' but G-Sync BFGDs are coming soon


At that time I said don't expect RTX 3000 before H2 2020, most likely H1 2021.
Aims at another goal post........

Ok....... that tells us nothing. What does a long time mean? Stop guessing ARFy...lol
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Faster transistor switching is not the same as higher clocks. And faster transistor switching does translate into potentially faster completion of instructions.

I'm not going to get into this with you people. Just use Google; there are dozens if not hundreds of references to this.

You might start here :

But it can still only switch once per clock cycle, no? So faster switching speeds would help drive up clock speeds (as the time needed for a transistor to complete a cycle is shortened), but otherwise not change anything as a shorter time won't help do anything without a signal to make it do something. No?
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
lol, I didn't catch that was from 2018, lol. What did you intend to convey with that? Nothing that matters for today?
I believe they are trying to say that the cards will launch at some point far from summer 2018. I.e., a statement that, without specific context and explanation, could mean tomorrow or in ten years. Call me an optimist, but personally I'm leaning towards it being a lot closer to tomorrow than ten years from now. More than two years is definitely a long time in the GPU world.
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I believe they are trying to say that the cards will launch at some point far from summer 2018. I.e., a statement that, without specific context and explanation, could mean tomorrow or in ten years. Call me an optimist, but personally I'm leaning towards it being a lot closer to tomorrow than ten years from now. More than two years is definitely a long time in the GPU world.
lol, yeah no clue what ARF's point was with that, lol...

We're talking about now and release dates, and an article from 2018 gets put up, lol.... I've got to log off the forums today, silliness is all around, lol.
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
But it can still only switch once per clock cycle, no? So faster switching speeds would help drive up clock speeds (as the time needed for a transistor to complete a cycle is shortened), but otherwise not change anything as a shorter time won't help do anything without a signal to make it do something. No?

I don't think you're understanding what is happening. Some entire microcode instructions complete in one clock. In some cases, more than one instruction completes in a single clock *on average*, because multiple instructions are being decoded at once (multiple pipelines).

The speed at which that happens all comes down to transistor switching.

Look at this picture of a NAND gate. The 2nd transistor needs a result from the first to produce an output. Now consider that a single microcode instruction can involve thousands of these gates (and other elements like registers, i.e. storage locations, etc.). If you make those gates switch faster for a given power input, you get the result faster. Or you can get the same performance at lower power, because the instructions are completing faster.

Now you can make a transistor switch faster by giving it more power to overcome the impedance. This is why, when overclocking, it's common to hit a point where you have to increase voltage. The transistors need the extra power to keep up with the higher clocks.

This type of improvement is why you'll see TSMC stating things like getting a 20% improvement in performance from one node to another. That's an ideal situation for marketing purposes but the performance improvements are there.
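The cascade-of-gate-delays argument above can be put in toy-model form. This is only an illustrative sketch, not real silicon data: the function name and every number (gate delay, path length) are made up. The point it shows is that the maximum clock is bounded by the critical path, the longest chain of gate delays a signal must cross within one cycle, so faster-switching gates directly raise the attainable clock.

```python
# Toy model of how gate switching speed bounds clock frequency.
# All numbers are hypothetical, for illustration only.

def max_clock_ghz(gate_delay_ps: float, gates_in_critical_path: int) -> float:
    """The clock period can't be shorter than the critical-path delay:
    the longest chain of gates a signal traverses in one cycle."""
    period_ps = gate_delay_ps * gates_in_critical_path
    return 1000.0 / period_ps  # 1000 ps = 1 ns, and 1/ns = GHz

# A node that switches gates 20% faster (25 ps -> 20 ps) raises the
# attainable clock from 2.0 GHz to 2.5 GHz on the same 20-gate path:
print(max_clock_ghz(25.0, 20))  # 2.0
print(max_clock_ghz(20.0, 20))  # 2.5
```

Alternatively, the designer can hold the clock where it is and lower the voltage instead, trading the freed timing margin for power, which is the other half of the claim above.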



 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't think you're understanding what is happening. Some entire microcode instructions complete in one clock. In some cases, more than one instruction completes in a single clock *on average*, because multiple instructions are being decoded at once (multiple pipelines).

The speed at which that happens all comes down to transistor switching.

Look at this picture of a NAND gate. The 2nd transistor needs a result from the first to produce an output. Now consider that a single microcode instruction can involve thousands of these gates (and other elements like registers, i.e. storage locations, etc.). If you make those gates switch faster for a given power input, you get the result faster. Or you can get the same performance at lower power, because the instructions are completing faster.

Now you can make a transistor switch faster by giving it more power to overcome the impedance. This is why, when overclocking, it's common to hit a point where you have to increase voltage. The transistors need the extra power to keep up with the higher clocks.

This type of improvement is why you'll see TSMC stating things like getting a 20% improvement in performance from one node to another. That's an ideal situation for marketing purposes but the performance improvements are there.



But again, all of that comes down to increased clock speeds, both your description of speeding up instruction decoding and the performance increases cited by foundries. When TSMC is talking about a 20% performance increase for a new node, they are talking about a 20% clock speed increase at the same power draw, as that is the only (somewhat) architecture-independent metric possible.
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
But again, all of that comes down to increased clock speeds, both your description of speeding up instruction decoding and the performance increases cited by foundries. When TSMC is talking about a 20% performance increase for a new node, they are talking about a 20% clock speed increase at the same power draw, as that is the only (somewhat) architecture-independent metric possible.

No, that is incorrect. You seem to think all instructions complete in one clock, so everything is based on the clock. They don't. Most instructions take multiple clocks, passing through tens if not hundreds of thousands of gates, and typically *do not* complete within a single clock cycle. If you make your gates switch quicker, you get the result faster; it's as simple as that. You can do your own research; I'm not going to waste more time here.
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
Aims at another goal post........

Ok....... that tells us nothing. What does a long time mean? Stop guessing ARFy...lol
But Nvidia doesn't ever tell you anything. That is what is missing.
Nvidia launches products.
They don't drive industry progress. For instance, the same monitor-interface data-compression method would have improved frame-doubling pipelines had it been implemented in displays, but they don't develop for outside markets.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
But Nvidia doesn't ever tell you anything. That is what is missing.
Nvidia launches products.
They don't drive industry progress. For instance, the same monitor-interface data-compression method would have improved frame-doubling pipelines had it been implemented in displays, but they don't develop for outside markets.


Yup, everything around Nvidia's architectures is strictly hidden from the outside world and from developers; the drivers do the whole job.
Everything around Nvidia is closed and locked.
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
But Nvidia doesn't ever tell you anything. That is what is missing.
Nvidia launches products.
They don't drive industry progress. For instance, the same monitor-interface data-compression method would have improved frame-doubling pipelines had it been implemented in displays, but they don't develop for outside markets.
... I think I missed your point? We all know it's more of a closed ecosystem... but that has nothing to do with this discussion (at least what I'm talking about).

I'm simply wondering why the hell a 2 y.o. article was used to..... I don't even know why tf it was posted........ and now here we are discussing whatever point that has nothing to do with what I said... man I love TPU......... :ohwell:
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
... I think I missed your point? We all know it's more of a closed ecosystem... but that has nothing to do with this discussion (at least what I'm talking about).

I'm simply wondering why the hell a 2 y.o. article was used to..... I don't even know why tf it was posted........ and now here we are discussing whatever point that has nothing to do with what I said... man I love TPU......... :ohwell:


lol You said you want something in Q4, I told you to wait a bit longer. :D
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
I think I missed your point?
You missed the point: the industry's inflection point is Nvidia. Nvidia competes with the monitor-scaler manufacturers. There is no cooperation between them.
Think of it this way: LCD beats OLED in every manner apart from pixel transitions. That is what is important about the convention. With the advent of the VVC codec, this could tap into VRR methods. LCDs overdrive better if they get multiple frame signals. It is due to liquid-crystal alignment; the crystals get jumbled up if the applied voltage is direct current.
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
You missed the point: the industry's inflection point is Nvidia. Nvidia competes with the monitor-scaler manufacturers. There is no cooperation between them.
Think of it this way: LCD beats OLED in every manner apart from pixel transitions. That is what is important about the convention. With the advent of the VVC codec, this could tap into VRR methods. LCDs overdrive better if they get multiple frame signals. It is due to liquid-crystal alignment; the crystals get jumbled up if the applied voltage is direct current.


I wasn't aiming at that goal post either. And even when I pointed at the right goal post... we still start talking hockey sticks.

Anyway, thanks gentlemen for the information. I apologize if it was just me not getting it... but I've read through this multiple times and can't make the connection. Really... this was about the thread title and then I mentioned I wanted the cards in 4Q and then a post from 2018 like that was going to help...

... then some shiza about NV monitor scaling and other things......?????????!!!!!!!!!??????????
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No that is incorrect. You seem to think all instructions complete in one clock so everything is based on clock. They don't. Most instructions take multiple clocks and pass through tens if not hundreds of thousands of gates during that clock cycle, and typically *are not* complete in that clock cycle. If you make your gates switch quicker, you get the result faster, it is simple as that. You can do your own research, not going to waste more time here.
I didn't say instructions complete in a single cycle, just that I would assume that any increase in transistor switching speed is typically absorbed into the margins needed for increased clocks, meaning there is little room left for further utilizing this to lower the amount of cycles needed to finish an instruction. Some, sure, but a few percent isn't enough to allow you to finish in one cycle rather than two unless you were already very, very close to that target or you redesign the hardware to reach this goal, in which case one could argue that the increase in switching speed is less important than the redesign (though it obviously lowers the bar).
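The "absorbed into the margins" point above can be checked with a toy calculation (the function name and latencies are hypothetical): a small switching-speed gain only removes a whole cycle from an instruction when its raw latency was already sitting just above a cycle boundary.

```python
import math

def saves_a_cycle(latency_cycles: float, speedup_pct: float) -> bool:
    """Results are only usable at clock edges, so an instruction's
    effective cost is its latency rounded up to whole cycles.
    Check whether a % reduction in gate delay crosses a boundary."""
    new_latency = latency_cycles * (1 - speedup_pct / 100)
    return math.ceil(new_latency) < math.ceil(latency_cycles)

print(saves_a_cycle(1.5, 5))   # False: 1.425 still rounds up to 2 cycles
print(saves_a_cycle(1.03, 5))  # True: 0.9785 now fits in a single cycle
```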
 
Joined
Jan 27, 2015
Messages
1,746 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
I didn't say instructions complete in a single cycle, just that I would assume that any increase in transistor switching speed is typically absorbed into the margins needed for increased clocks, meaning there is little room left for further utilizing this to lower the amount of cycles needed to finish an instruction. Some, sure, but a few percent isn't enough to allow you to finish in one cycle rather than two unless you were already very, very close to that target or you redesign the hardware to reach this goal, in which case one could argue that the increase in switching speed is less important than the redesign (though it obviously lowers the bar).

Uh no, that is *NOT* what you said........ You are now backtracking.

What you said was (emphasis added) :

But again, all of that comes down to increased clock speeds, both your description of speeding up instruction decoding and the performance increases cited by foundries. When TSMC is talking about a 20% performance increase for a new node, they are talking about a 20% clock speed increase at the same power draw, as that is the only (somewhat) architecture-independent metric possible.

Clock speed has nothing to do with IPC increase from new nodes.

If I have a microcode instruction that completes in 1.1 cycles, it has to wait for the next (2nd) cycle before anything can be done with the result. That essentially means it takes 2 cycles to complete in a useful way. If I improve the time it takes to complete that instruction by 20% (a common claim from TSMC), it now takes ~0.9 cycles, so it went from being a 2-cycle instruction to a 1-cycle instruction. This *directly* impacts IPC.
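That 1.1-cycle example reduces to a one-line rounding rule, sketched here with the post's own hypothetical numbers (the helper name is made up): a result is only consumable at the next clock edge, so effective cost is the ceiling of raw latency in cycles.

```python
import math

def effective_cycles(latency_in_cycles: float) -> int:
    """A result can only be consumed at the next clock edge,
    so raw latency rounds up to a whole number of cycles."""
    return math.ceil(latency_in_cycles)

print(effective_cycles(1.1))        # 2 -- must wait for the 2nd cycle
print(effective_cycles(1.1 * 0.8))  # 1 -- 20% faster, now a 1-cycle op
```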

You are welcome for the education.
 