Monday, December 25th 2023

ASUS GeForce RTX 4070 SUPER Dual OC Snapped—Goodbye 8-pin

Here are some of the first pictures of the ASUS GeForce RTX 4070 SUPER Dual OC, the company's close-to-MSRP custom-design implementation of the upcoming RTX 4070 SUPER, which is expected to be announced on January 8, with reviews and retail availability a week later. The card very closely resembles the design of the RTX 4070 Dual OC, but with one major difference—the single 8-pin PCIe power connector makes way for a 16-pin 12VHPWR. Considering that the ASUS Dual OC series tends to come with a nominal factory OC at power limits matching NVIDIA reference, this is the first sign that the RTX 4070 SUPER in general might have a typical graphics power (TGP) above what a single 8-pin can supply, and so is given a 12VHPWR, just like every RTX 4070 Ti. The cards will include an NVIDIA-designed adapter that converts two 8-pin PCIe connectors to a 12VHPWR, with its sense pins set to tell the graphics card that it can deliver 300 W of continuous power.
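
For those curious how the adapter "tells" the card its capacity: the 12VHPWR connector carries two sideband sense pins whose open/grounded states encode one of four power limits. A minimal sketch of that lookup is below; the exact pairing of pin states to wattages shown here is illustrative rather than quoted from the PCIe CEM specification.

```python
# Sketch: how a 12VHPWR cable/adapter advertises its power capacity.
# Two sideband pins (SENSE0, SENSE1) are either grounded or left open;
# the combination encodes one of four limits (150/300/450/600 W).
# NOTE: the exact pairing of states to wattages below is illustrative.
SENSE_TABLE = {
    ("gnd",  "gnd"):  600,  # full 600 W capability
    ("gnd",  "open"): 450,
    ("open", "gnd"):  300,  # e.g. the 2x 8-pin adapter advertising 300 W
    ("open", "open"): 150,
}

def advertised_power(sense0: str, sense1: str) -> int:
    """Power limit (W) the card should assume from the sense-pin states."""
    return SENSE_TABLE[(sense0, sense1)]

print(advertised_power("open", "gnd"))  # -> 300
```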

The GeForce RTX 4070 SUPER is based on the same AD104 silicon as the RTX 4070 and RTX 4070 Ti, with its ASIC code rumored to be "AD104-350." The SKU allegedly enables 56 out of the 60 streaming multiprocessors (SM) present on the silicon, giving it 7,168 out of 7,680 CUDA cores. This is a big increase from the 5,888 CUDA cores (46 SM) that the vanilla RTX 4070 is configured with. The memory subsystem is expected to be unchanged from the RTX 4070 and RTX 4070 Ti—12 GB of 21 Gbps GDDR6X across a 192-bit memory interface—leaving NVIDIA with one possible lever, the ROP count. While the RTX 4070 Ti has 80 ROPs, the RTX 4070 has 64; it remains to be seen how many the RTX 4070 SUPER gets. Its rumored TGP of 225 W is behind the switch to 12VHPWR connectors.
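
The quoted CUDA core counts follow directly from Ada's 128 cores per SM; a quick arithmetic check:

```python
# Ada Lovelace SMs each carry 128 CUDA cores, so the quoted core
# counts are simply SM count x 128.
CORES_PER_SM = 128

configs = {
    "RTX 4070 (46 SM)": 46,
    "RTX 4070 SUPER, rumored (56 SM)": 56,
    "Full AD104 (60 SM)": 60,
}
for name, sms in configs.items():
    print(f"{name}: {sms * CORES_PER_SM} CUDA cores")
# -> 5888 / 7168 / 7680, matching the figures above.
```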
Sources: momomo_us (Twitter), VideoCardz

54 Comments on ASUS GeForce RTX 4070 SUPER Dual OC Snapped—Goodbye 8-pin

#26
Craptacular
Beginner Micro Device: That's a daisy chain element. Can't be considered a valid option: both fugly and potentially dangerous.
If it works then it is a valid option. Anything that has electricity is potentially dangerous.
Posted on Reply
#27
Macro Device
Craptacular: If it works then it is a valid option.
Shooting pigeons with an RPG technically also works.

The additional stress point, due to it being an adapter rather than a straight cable, is also distasteful.
Posted on Reply
#28
Dimitriman
wolf: Pricing will be key on the SUPER cards. If they move the bar forward at all, mild success; if priced in line with the current price-to-perf of the 40 series, it's a bust. No way they price higher than the launch MSRPs of the cards they replace.
I can almost 100% guarantee you they will raise prices and slot in between existing cards. This is why the 4090 was raised in price recently. My prediction:

4070S = $699
4070TiS = $899
4080S = $1299

Anyone thinking Nvidia actually cares to increase perf/$ has been in hibernation for 2 years.
Posted on Reply
#29
Macro Device
Dimitriman: has been in hibernation forever.
FTFY.
Posted on Reply
#30
nguyen
trsttte: That's a band-aid, not a solution. I'm all for moving forward, but this new power connector was poorly thought out and its rollout was even worse: just a few days ago CableMod had to issue a recall on their adapter, only the most recent example of problems.

Don't like using 2 connectors? Use just one: an 8-pin Molex can easily handle 300 W. The 150 W limit was decided by PCI-SIG because they didn't think more would ever be necessary when they wrote the standard. This would be a simpler, cheaper, and safer solution than the new connector.
Nothing wrong with the included adapter; CableMod makes low-quality cables, and their 8-pin PCIe cables can melt too anyway.


I'm happy that multiple power connectors are going away; having to plug the connector in properly is a small price to pay for a clean-looking PC build.
Posted on Reply
#31
khohada
More CUDA means more cost; do not expect NV to make any exception...
Posted on Reply
#32
Craptacular
Beginner Micro Device: Shooting pigeons with an RPG technically also works.

The additional stress point, due to it being an adapter rather than a straight cable, is also distasteful.
Lol, someone is salty.

Yeah, shooting pigeons with an RPG also works and thus is a valid option. Now it just comes down to the pros and cons of each valid option in the scenario it is being proposed for.

For the purpose of shooting pigeons: will it be used to kill an invasive species (think hogs in the southern USA, Burmese pythons in Florida, lionfish in the Caribbean and Gulf of Mexico), or just for food? Will it be used in a city area, or out in the wild away from populated human settlements?

The issue with the cable was that people were not inserting it all the way in. Gamers Nexus did a video on that; basically, they summed it up that all of the issues with the cable came from people not inserting it all the way in. You may not like using an adapter, but the PCIe cables that come with a PSU designed to power a GeForce 3090, Vega 64, etc. are perfectly fine to use with the adapter.

Have you also noticed that there haven't been any new reports this year, since the Gamers Nexus video, of people running into issues with the adapter?
Posted on Reply
#33
3x0
Craptacular: The issue with the cable was that people were not inserting it all the way in. Gamers Nexus did a video on that; basically, they summed it up that all of the issues with the cable came from people not inserting it all the way in. You may not like using an adapter, but the PCIe cables that come with a PSU designed to power a GeForce 3090, Vega 64, etc. are perfectly fine to use with the adapter.

Have you also noticed that there haven't been any new reports this year, since the Gamers Nexus video, of people running into issues with the adapter?
Plenty of recent issues, check some of his videos
youtube.com/@NorthridgeFix
Posted on Reply
#34
trsttte
Craptacular: The issue with the cable was that people were not inserting it all the way in. Gamers Nexus did a video on that; basically, they summed it up that all of the issues with the cable came from people not inserting it all the way in. You may not like using an adapter, but the PCIe cables that come with a PSU designed to power a GeForce 3090, Vega 64, etc. are perfectly fine to use with the adapter.
A cable that will be operated by the general public (aka idiots) needs to be idiot-proof. The 16-pin 12VHPWR connector failed at that; a high amount of user error still means it's a bad design.
Posted on Reply
#35
Vayra86
Craptacular: If it works then it is a valid option. Anything that has electricity is potentially dangerous.
Ah yes, those old Molex adapters are totally valid for anything you can chain them to as well, right?

'If it works it's valid' might fly in Soviet Russia... as long as it flies. I'll pass though, thanks.
Craptacular: Lol, someone is salty.

Yeah, shooting pigeons with an RPG also works and thus is a valid option. Now it just comes down to the pros and cons of each valid option in the scenario it is being proposed for.

For the purpose of shooting pigeons: will it be used to kill an invasive species (think hogs in the southern USA, Burmese pythons in Florida, lionfish in the Caribbean and Gulf of Mexico), or just for food? Will it be used in a city area, or out in the wild away from populated human settlements?

The issue with the cable was that people were not inserting it all the way in. Gamers Nexus did a video on that; basically, they summed it up that all of the issues with the cable came from people not inserting it all the way in. You may not like using an adapter, but the PCIe cables that come with a PSU designed to power a GeForce 3090, Vega 64, etc. are perfectly fine to use with the adapter.

Have you also noticed that there haven't been any new reports this year, since the Gamers Nexus video, of people running into issues with the adapter?
The point is, obviously, that you don't need another connector for this kind of power target at all. It's quite similar to RPGs for pigeons that way.

The issue with the cable is that it's a shit design, simple. It has flaws 8/6-pin connectors do not, the tolerances are lower, etc.
The other issue is with consumers happily accepting this.
Posted on Reply
#36
Macro Device
Craptacular: Lol, someone is salty.
I'm not just salty, I'm severely depressed by the idiocy. Adapting things so they can't be a threat when operated by idiots has made them unnecessarily expensive, ugly, and inefficient. I want manufacturers to have the legal right to make idiots suffer for non-intended usage; maybe that would activate their neurons for something useful like an actual learning process, though I strongly doubt it. And NVIDIA, despite originally planning to make this plug more foolproof than a stock-standard 8-pin, made it even less foolproof, and in a way I don't like.

And this new NVIDIA plug is just a product that "solves" an issue that never existed while producing new issues as well. This is as idiotic as it gets. I will never support this decision, I will never spend any money on purchasing such GPUs, and I will make as many people as possible aware that it's a horribly wrong design and must be banned.
Posted on Reply
#38
trsttte
3x0
I have a lot of mixed feelings on that video... The main conclusion is obviously right, this connector is garbage, but he made such a weird and complicated argument for it.

He talks about safety margins and arrives at that cable power rating of 288 W by assuming PSU manufacturers use a specific type of wire, which is not totally wrong, but it's a much more complicated answer than necessary: PSU manufacturers ship 8-pin to 2x 8-pin cables, so a single 8-pin can obviously handle at least 300 W, and so can whatever wire they use. The entire 150 W figure has nothing to do with power ratings or flimsy older power supplies; it's simply a value PCI-SIG thought was enough and would never be exceeded.

There's also the simple point that if a 12VHPWR can carry 600 W over 6 wires (100 W/wire), there's no reason an 8-pin can't carry 300 W over 3 wires (100 W/wire).

His last suggestion of using 2 12VHPWR connectors is also completely idiotic; the solution is to go back to the drawing board, not to use more of the same flawed connector :kookoo:

Thank god at least AMD is smartly avoiding all this nonsense.
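
To put rough numbers on that 100 W/wire point, here is a quick back-of-the-envelope sketch. It assumes an even current split across the 12 V wires; actual limits depend on terminal and wire gauge, so treat it as illustrative:

```python
# Back-of-the-envelope per-wire loading, assuming an even split
# across the 12 V wires (real limits depend on terminal/wire gauge).
RAIL_VOLTAGE = 12.0

def per_wire(total_watts: float, live_wires: int) -> tuple[float, float]:
    """Return (watts, amps) each 12 V wire carries under an even split."""
    watts = total_watts / live_wires
    return watts, watts / RAIL_VOLTAGE

print(per_wire(600, 6))  # 12VHPWR at 600 W over 6 wires -> 100 W, ~8.3 A
print(per_wire(150, 3))  # 8-pin at its 150 W spec, 3 wires -> 50 W, ~4.2 A
print(per_wire(300, 3))  # the same 8-pin at 300 W -> 100 W, ~8.3 A,
                         # i.e. the same per-wire load as 12VHPWR at 600 W
```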
Posted on Reply
#39
Craptacular
3x0: Plenty of recent issues, check some of his videos
youtube.com/@NorthridgeFix
The most recent video about the cable is from seven months ago, and he doesn't disprove that the melting was caused by the user not putting the cable all the way in. No one here is disputing, including Gamers Nexus, that a design which results in a lot of user errors is a bad design. The point still stands: if you insert the cable all the way in, it works just fine and the odds of a melted cable go way, way down; the primary reason the cables melt is users not inserting them all the way in.
Vayra86: Ah yes, those old Molex adapters are totally valid for anything you can chain them to as well, right?

'If it works it's valid' might fly in Soviet Russia... as long as it flies. I'll pass though, thanks.


The point is, obviously, that you don't need another connector for this kind of power target at all. It's quite similar to RPGs for pigeons that way.

The issue with the cable is that it's a shit design, simple. It has flaws 8/6-pin connectors do not, the tolerances are lower, etc.
The other issue is with consumers happily accepting this.
It depends on how you are using the Molex connectors: if you are daisy-chaining them, then you will for sure overload them, just as you would if you daisy-chained the PCIe 6- and 8-pin connectors.

Most things in life don't need a switch to a new standard, but we switch because we believe the new standard's advantages outweigh its cons.

Besides, you will be happy to know that the 12VHPWR cable is already on the way out; a new revision has been approved and will be replacing it, called the 12V-2x6.
Posted on Reply
#40
3x0
Craptacular: The most recent video about the cable is from seven months ago
No, it's from 11 days ago

The one before that is from a month ago
Craptacular: and he doesn't disprove that the melting was caused by the user not putting the cable all the way in.
No one can prove or disprove whether the user plugged it all the way in; that's a moot point.

The point is, this is still happening. Whether it's a design fault or the connector not being idiot-proof, it doesn't excuse NVIDIA from not acting quicker to fix a potential fire hazard.
Posted on Reply
#41
Craptacular
3x0: No, it's from 11 days ago

The one before that is from a month ago


No one can prove or disprove whether the user plugged it all the way in; that's a moot point.

The point is, this is still happening. Whether it's a design fault or the connector not being idiot-proof, it doesn't excuse NVIDIA from not acting quicker to fix a potential fire hazard.
My bad for missing those.

You can prove or disprove whether the user plugged the cord all the way in if you have the cord, as GN shows.

Agreed on that about Nvidia. But if you know ahead of time, as the customer, what is causing the issue, and that it is due to simply not plugging the cable all the way in, then it isn't really that big of a deal: the fix is just making sure the cable is in all the way before powering on, and quite frankly you should be doing that with any cable.

This is really only an issue for those who are unaware that the issue exists and/or what the fix is, because they haven't watched GN's video.
Posted on Reply
#42
3x0
Craptacular: You can prove or disprove whether the user plugged the cord all the way in if you have the cord, as GN shows.
That's only showing a way to reproduce the results; it doesn't say anything about whether all of the problems are caused by improper plug-in. See the der8auer video where he shows the power connector completely plugged in, and just slightly moving the cable makes the GPU's power light shine red, indicating a problem.
Posted on Reply
#43
Vayra86
Craptacular: The most recent video about the cable is from seven months ago, and he doesn't disprove that the melting was caused by the user not putting the cable all the way in. No one here is disputing, including Gamers Nexus, that a design which results in a lot of user errors is a bad design. The point still stands: if you insert the cable all the way in, it works just fine and the odds of a melted cable go way, way down; the primary reason the cables melt is users not inserting them all the way in.


It depends on how you are using the Molex connectors: if you are daisy-chaining them, then you will for sure overload them, just as you would if you daisy-chained the PCIe 6- and 8-pin connectors.

Most things in life don't need a switch to a new standard, but we switch because we believe the new standard's advantages outweigh its cons.

Besides, you will be happy to know that the 12VHPWR cable is already on the way out; a new revision has been approved and will be replacing it, called the 12V-2x6.
Equally unnecessary and pointless. I will postpone it as long as possible; I consider it e-waste.

Large cards are big, so they have space for more connectors. There is no need whatsoever and no advantage, only cons: you need adapters or a new PSU.
Posted on Reply
#44
Craptacular
3x0: That's only showing a way to reproduce the results; it doesn't say anything about whether all of the problems are caused by improper plug-in. See the der8auer video where he shows the power connector completely plugged in, and just slightly moving the cable makes the GPU's power light shine red, indicating a problem.
Neither GN nor I have said that all of the problems are caused by improper plug-in, only that the vast majority are. They pointed out that there are indeed multiple causes, including foreign object debris.

At the time of GN's video, talking to the AIB partners, the issue was impacting 0.1% of all 4090s. If the design were that bad, a lot more than 0.1% of all 4090s should be melting. So you either have foreign object debris from manufacturing the cables, or foreign object debris introduced by repeatedly unplugging and replugging the cord, and the final reason is users simply not fully inserting the cable.

I did see the der8auer video; it would be interesting to see how common that is across multiple instances of the same GPU and/or cables.
Vayra86: Equally unnecessary and pointless. I will postpone it as long as possible; I consider it e-waste.

Large cards are big, so they have space for more connectors. There is no need whatsoever and no advantage, only cons: you need adapters or a new PSU.
You do realize that the whole reason the PCIe power connector was created was to reduce the size of the cables, as well as the number of ports/cables needed to power a GPU, instead of using Molex connectors for GPUs such as the Radeon 9800 Pro. By your logic it was unnecessary and pointless to switch to PCIe connectors for power instead of just using multiple Molex connectors, and it would have been e-waste to switch to the PCIe cable instead of sticking with Molex and adding a whole bunch of Molex connectors to GPUs, because you would have had to buy a new adapter or a new PSU.

The truth of the matter is that top-of-the-line GPUs use a hell of a lot more power than top-of-the-line GPUs of the past, and they also have issues with transient spikes. The connector is more efficient for power delivery and provides more stable power under stress loads, and on top of that you have sense pins that help communicate the safe maximum load between the components and the PSU, reducing system instability.

The truth of the matter is that at the high end there is a need; it may not be the 12V-2x6, but a new standard is coming.
Posted on Reply
#45
trsttte
3x0: it doesn't excuse NVIDIA from not acting quicker to fix a potential fire hazard.
This is a PCIe standard; nvidia just happens to be at the forefront of adoption.
Craptacular: You do realize that the whole reason the PCIe power connector was created was to reduce the size of the cables, as well as the number of ports/cables needed to power a GPU, instead of using Molex connectors for GPUs such as the Radeon 9800 Pro. By your logic it was unnecessary and pointless to switch to PCIe connectors for power instead of just using multiple Molex connectors, and it would have been e-waste to switch to the PCIe cable instead of sticking with Molex and adding a whole bunch of Molex connectors to GPUs, because you would have had to buy a new adapter or a new PSU.
Completely different comparison; there are very obvious advantages to a 6- or 8-pin connector vs a Molex: more power available, a bigger footprint to dissipate heat, no redundant voltages the GPU doesn't need, etc. That doesn't happen with the new 12VHPWR, or whatever they will call the next (4th, by my count) iteration of trying to fix this stupid connector.
Craptacular: The truth of the matter is that top-of-the-line GPUs use a hell of a lot more power than top-of-the-line GPUs of the past, and they also have issues with transient spikes. The connector is more efficient for power delivery and provides more stable power under stress loads, and on top of that you have sense pins that help communicate the safe maximum load between the components and the PSU, reducing system instability.

The truth of the matter is that at the high end there is a need; it may not be the 12V-2x6, but a new standard is coming.
Transient spikes have nothing to do with the connector; if anything, a 6/8-pin is better equipped to handle them because it has more area to dissipate the heat. The connector is also not more efficient, quite the opposite: where before you had 2 or more bigger connectors to spread the current being delivered, you now have to make do with a single smaller one. Power stability also doesn't depend on the connector; it's all on the PSU spec, and the new ATX 3.0 transient requirements apply to both the new 12VHPWR and the older 6/8-pin connectors.

The sense pins also don't communicate shit at the moment; they're just used to encode the power available, in the same exact way 6- and 8-pin connectors do it. No one is implementing that part of the standard yet, and I don't even see any motivation to on desktop computers. I only see it mattering in servers, where redundant power supplies might need to tell the GPU to hold its horses because only half the power is available; but then again, servers are not bothering with any of this connector stupidity and often simply use CPU power connectors, not even following the PCIe spec.

There's absolutely no need for this: an 8-pin connector by design can carry more than 300 W, it just doesn't because they decided to set the spec at a lower power. 2x 8-pin connectors have the same six 12 V wires the 12VHPWR uses, but instead of being crammed together into a smaller connector they have a reasonable footprint for the power being handled, in an application where space is not a concern.
Posted on Reply
#46
3x0
trsttte: This is a PCIe standard; nvidia just happens to be at the forefront of adoption.
NVIDIA and Dell are working with PCI-SIG on defining that PCIe 12VHPWR standard; one would expect them to act on any failures, no?
Posted on Reply
#47
trsttte
3x0: NVIDIA and Dell are working with PCI-SIG on defining that PCIe 12VHPWR standard; one would expect them to act on any failures, no?
Sure, but there are more companies involved as well. I don't quite get how the same guys who ran countless fluid dynamics simulations to optimize the cooling solutions of the current Founders Edition cards (kind of screwing their partners in the process) didn't do simple stress and thermal tests before shrinking a power connector. The biggest thing I'd blame nvidia for, though, is forcing the other board partners to use this connector as well.
Posted on Reply
#48
Vayra86
Craptacular: Neither GN nor I have said that all of the problems are caused by improper plug-in, only that the vast majority are. They pointed out that there are indeed multiple causes, including foreign object debris.

At the time of GN's video, talking to the AIB partners, the issue was impacting 0.1% of all 4090s. If the design were that bad, a lot more than 0.1% of all 4090s should be melting. So you either have foreign object debris from manufacturing the cables, or foreign object debris introduced by repeatedly unplugging and replugging the cord, and the final reason is users simply not fully inserting the cable.

I did see the der8auer video; it would be interesting to see how common that is across multiple instances of the same GPU and/or cables.


You do realize that the whole reason the PCIe power connector was created was to reduce the size of the cables, as well as the number of ports/cables needed to power a GPU, instead of using Molex connectors for GPUs such as the Radeon 9800 Pro. By your logic it was unnecessary and pointless to switch to PCIe connectors for power instead of just using multiple Molex connectors, and it would have been e-waste to switch to the PCIe cable instead of sticking with Molex and adding a whole bunch of Molex connectors to GPUs, because you would have had to buy a new adapter or a new PSU.

The truth of the matter is that top-of-the-line GPUs use a hell of a lot more power than top-of-the-line GPUs of the past, and they also have issues with transient spikes. The connector is more efficient for power delivery and provides more stable power under stress loads, and on top of that you have sense pins that help communicate the safe maximum load between the components and the PSU, reducing system instability.

The truth of the matter is that at the high end there is a need; it may not be the 12V-2x6, but a new standard is coming.
Just no, as @trsttte explained very well. You are comparing things that don't compare, and they're simply not true.
Posted on Reply
#49
wolf
Better Than Native
Dimitriman: I can almost 100% guarantee you they will raise prices and slot in between existing cards. This is why the 4090 was raised in price recently. My prediction:

4070S = $699
4070TiS = $899
4080S = $1299

Anyone thinking Nvidia actually cares to increase perf/$ has been in hibernation for 2 years.
I called it: launching at or below the cards they replace / add to the lineup. Hibernation unnecessary.
Posted on Reply
#50
chrcoluk
Wow, the industry sure is being stubborn.

What would happen if reviewers collectively refused to review any GPU with the 16pin connector?

Also, why wasn't this bumped to 16 gigs? It's just rude at this point.
Posted on Reply