Monday, December 25th 2023
ASUS GeForce RTX 4070 SUPER Dual OC Snapped—Goodbye 8-pin
Here are some of the first pictures of the ASUS GeForce RTX 4070 SUPER Dual OC, the company's close-to-MSRP custom-design implementation of the upcoming RTX 4070 SUPER, which is expected to be announced on January 8, with reviews and retail availability a week later. The card very closely resembles the design of the RTX 4070 Dual OC, but with one major difference: the single 8-pin PCIe power connector makes way for a 16-pin 12VHPWR. Considering that the ASUS Dual OC series tends to come with a nominal factory OC at power limits matching NVIDIA reference, this is the first sign that the RTX 4070 SUPER in general might have a total graphics power (TGP) above what a single 8-pin can deliver, and so it gets a 12VHPWR, just like every RTX 4070 Ti. The cards will include an NVIDIA-designed adapter that converts two 8-pin PCIe connectors to a 12VHPWR, with its signal pins set to tell the graphics card that it can deliver 300 W of continuous power.
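For reference, that 300 W figure is advertised through the 12VHPWR sideband contacts: the SENSE0/SENSE1 pins are left open or tied to ground to encode the sustained power on offer. Below is a minimal Python sketch of the encoding as it is commonly reported for ATX 3.0/PCIe CEM 5.0; treat the exact table as illustrative rather than authoritative.

# Commonly reported SENSE0/SENSE1 encoding for 12VHPWR sustained power limits.
# "gnd" means the pin is tied to ground, "open" means it is left floating.
SENSE_TABLE = {
    ("gnd", "gnd"): 600,   # watts
    ("gnd", "open"): 450,
    ("open", "gnd"): 300,  # what a dual 8-pin adapter would advertise
    ("open", "open"): 150,
}

def max_sustained_power(sense0: str, sense1: str) -> int:
    """Return the sustained power limit, in watts, advertised by the sideband pins."""
    return SENSE_TABLE[(sense0, sense1)]

print(max_sustained_power("open", "gnd"))  # 300 -> the card limits itself to 300 W from this cable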
The GeForce RTX 4070 SUPER is based on the same AD104 silicon as the RTX 4070 and RTX 4070 Ti, with its ASIC code rumored to be "AD104-350." The SKU allegedly enables 56 out of 60 streaming multiprocessors (SM) present on the silicon, giving it 7,168 out of 7,680 CUDA cores. This is a big increase over the 5,888 CUDA cores (46 SM) that the vanilla RTX 4070 is configured with. The memory subsystem is expected to be unchanged from the RTX 4070 and RTX 4070 Ti: 12 GB of 21 Gbps GDDR6X across a 192-bit memory interface, leaving NVIDIA with one other possible lever, the ROP count. While the RTX 4070 Ti has 80 ROPs, the RTX 4070 has 64; it remains to be seen how many the RTX 4070 SUPER gets. Its rumored TGP of 225 W is what's behind the switch to the 12VHPWR connector.
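As a quick sanity check on those numbers: Ada Lovelace SMs carry 128 FP32 CUDA cores each, and memory bandwidth follows directly from data rate and bus width. A short Python snippet, just to show the arithmetic:

CORES_PER_SM = 128  # FP32 CUDA cores per SM on Ada Lovelace

print(60 * CORES_PER_SM)  # 7680 - fully enabled AD104
print(56 * CORES_PER_SM)  # 7168 - rumored RTX 4070 SUPER
print(46 * CORES_PER_SM)  # 5888 - RTX 4070

# Bandwidth = data rate (Gbps per pin) x bus width (bits) / 8 bits per byte
print(21 * 192 / 8)  # 504.0 GB/s, unchanged from the RTX 4070 and RTX 4070 Ti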
Sources:
momomo_us (Twitter), VideoCardz
54 Comments on ASUS GeForce RTX 4070 SUPER Dual OC Snapped—Goodbye 8-pin
The additional stress point, due to the fact that it's not a straight cable but rather an adapter, is distasteful.
4070S = $699
4070TiS = $899
4080S = $1299
Anyone thinking Nvidia actually cares to increase perf/$ has been in hibernation for 2 years.
I'm happy that multiple power connectors are going away; having to plug the connector in properly is a small price to pay for a clean-looking PC build.
Yeah, shooting pigeons with an RPG also works and thus is a valid option. Now it just comes down to cons vs pros with each valid option in the scenario it is being proposed for.
For the purpose of shooting pigeons: will it be used to kill an invasive species (think hogs in the southern USA, Burmese pythons in Florida, lionfish in the Caribbean and Gulf of Mexico), or just for food? Will it be used in a city area, or out in the wild away from populated human settlements?
The issue with the cable was that people were not inserting it all the way in. Gamers Nexus did a video on that; basically, they were able to sum it up as all of the issues with the cable coming down to people not inserting the cables all the way in. You may not like using an adapter, but the PCIe cables that come with a PSU designed to power a GeForce 3090, Vega 64, etc. are perfectly fine to use with the adapter.
Have you also noticed that there haven't been any new reports this year, since the Gamers Nexus video, of people running into issues with the adapter?
youtube.com/@NorthridgeFix
'If it works, it's valid' might fly in Soviet Russia... as long as it flies. I'll pass though, thanks. The point is, obviously, that you don't need another connector for this kind of power target at all. It's quite similar to RPGs for pigeons that way.
The issue with the cable is that it's a shit design, simple. It has flaws that 8/6-pin connectors do not, the tolerances are lower, etc.
The other issue is with consumers happily accepting this.
And NVIDIA's new plug is just a product that "solves" an issue that never existed, while producing new issues as well. This is as idiotic as it gets. I will never support this decision, I will never spend any money on such GPUs, and I will make as many people as possible aware that it's a horribly wrong design and must be banned.
He's talking about safety margins, and arrived at that cable power rating of 288 W by assuming PSU manufacturers use a specific type of wire. That's not totally wrong, but it's a much more complicated answer than necessary: PSU manufacturers are shipping 8-pin to 2x 8-pin cables, so a single 8-pin obviously can handle at least 300 W, and so can whatever wire they use. The 150 W figure has nothing to do with power ratings or flimsy older power supplies; it's simply a value the PCI-SIG thought was enough and would never be exceeded.
There's also the simple point that if a 12VHPWR can carry 600 W over 6 wires (100 W/wire), there's no reason an 8-pin couldn't carry 300 W over 3 wires (100 W/wire).
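A rough back-of-the-envelope version of that comparison in Python (the 8 A per-terminal figure used to reproduce the 288 W number is an assumption about typical Mini-Fit Jr-class contacts, not a quoted spec value):

# 8-pin PCIe has 3 x 12 V supply wires; 12VHPWR has 6 x 12 V supply wires.
def watts_per_wire(total_watts: float, supply_wires: int) -> float:
    return total_watts / supply_wires

print(watts_per_wire(600, 6))  # 100.0 W/wire - 12VHPWR at its full rating
print(watts_per_wire(300, 3))  # 100.0 W/wire - an 8-pin pushed to 300 W
print(watts_per_wire(150, 3))  # 50.0 W/wire - the official 150 W 8-pin spec

# One way a 288 W ceiling can be derived: 3 supply contacts x 8 A each x 12 V.
print(3 * 8 * 12)  # 288 W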
His last suggestion of using two 12VHPWR connectors is also completely idiotic; the solution is to go back to the drawing board, not to use more of the same flawed connector :kookoo:
Thank god at least AMD is smartly avoiding all this nonsense.
For most things in life you don't need to switch to a new standard, but we do because we believe the new standard's advantages outweigh its cons.
Besides, you will be happy to know that the 12VHPWR cable is already on the way out; a new revision has been approved and will be replacing it, called the 12V-2x6.
The one before that is from a month ago
The point is, this is still happening. Whether it's a design fault or the connector not being idiot-proof, it doesn't excuse NVIDIA from not acting quicker and fixing a potential fire hazard.
You can prove or disprove whether the user plugged the cord in all the way if you have the cord, as GN shows.
Agreed on that with NVIDIA, but if you know ahead of time, as the customer, what is causing the issue and that it comes down to simply not plugging the cable in all the way... that isn't really that big of a deal. It's a very easy fix: make sure the cable is in all the way before powering on, and quite frankly you should be doing that with any cable.
This is really only an issue for those who are unaware that the issue exists and/or what the fix is, because they haven't watched GN's video.
Large cards are big, so they have space for more connectors. There is no need for this whatsoever, and no advantage, only cons: you need adapters or a new PSU.
At the time of GN's video, talking to the AIB partners, the issue was impacting 0.1% of all 4090s. If the design were that bad, the rate should be a lot higher than 0.1% of all 4090s melting. So you either have foreign object debris introduced when manufacturing the cables, foreign object debris introduced by repeatedly unplugging and re-plugging the cord, or, the final reason, users simply not fully inserting the cable.
I did see the der8auer video; it would be interesting to see how common that is across multiple instances of the same GPU and/or cables. You do realize that the whole reason the PCIe power connector was created was to reduce the size of the cables, as well as the number of ports/cables needed to power a GPU, instead of using Molex for GPUs such as the Radeon 9800 Pro? By your logic it was unnecessary and pointless to switch to PCIe connectors for power instead of just using multiple Molex connectors, and it would have been e-waste to switch over to the PCIe cable instead of sticking with Molex and just adding a whole bunch of Molex connectors to GPUs, because you would have to buy a new adapter or a new PSU.
The truth of the matter is that top-of-the-line GPUs use a hell of a lot more power than top-of-the-line GPUs of the past, and they also have issues with transient spikes. The connector is more efficient for power delivery and provides more stable power under load, and on top of that you have sense pins that help communicate the safe maximum load between the components and the PSU, reducing system instability.
The truth of the matter is that at the high end there is a need; it may not be the 12V-2x6, but a new standard is coming.
The sense pins also don't communicate shit at the moment; they're just used to encode the power available in the same exact way 6- and 8-pin connectors do it. No one is implementing that part of the standard yet, and I don't even see any motivation to on desktop computers. I only see that mattering in servers, where redundant power supplies might need to tell the GPU to slow down because only half the power is available; but then again, servers are not bothering with any of this connector stupidity and often simply use CPU power connectors, not even following the PCIe spec.
There's absolutely no need for this; an 8-pin connector by design can carry more than 300 W, it just doesn't because the spec was set at a lower power. 2x 8-pin connectors have the same six 12 V wires the 12VHPWR uses, but instead of being crammed together into a smaller connector, they have a reasonable footprint for the power being handled, in an application where space is not a concern.
What would happen if reviewers collectively refused to review any GPU with the 16pin connector?
Also, why wasn't this bumped to 16 gigs? It's just rude at this point.