
GeForce RTX 4070 with Slower GDDR6 Memory Priced on-par with Regular RTX 4070 Cards


Disregarding workstation and mining, these are all the same Ellesmere die (the most common Polaris 10/20 variant), sold as Radeon RX gaming GPUs. The only differences between them are minor revisions, such as memory speed or core configuration:

RX 470
RX 470D
RX 480
RX 570
RX 570X
RX 580G
RX 580X
RX 580 2048SP
RX 580
RX 590 GME

Let's see, that's 2x5, aka 10. It's perfectly normal for high-volume, middle-segment products to have subvariants and different SKUs. Not the first time NVIDIA has done it, not the first time AMD has done it - both will continue to do so in the future.

This is utter BS. Even if the performance is on par with GDDR6X, the roughly 5% lower bandwidth spec should manifest as a roughly 5% lower price. Period.
The sheer fact that "regular" GDDR6 has a much broader supply from more RAM makers should already have driven the price down significantly. I'm fairly sure the savings from this transition alone would have freed up a huge pile of money. Not to mention, it should show up in smaller coolers, due to the lower heat output.
Unless the VRAM in these cards comes from a similarly corrupt "exclusive" deal or contract with Micron, there's no way this is the real price.
But yeah, this is targeted at non-savvy/unaware people, who will just see the giant "4070" lettering on the fancy green box and mistakenly part with more money than they should.
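For what it's worth, here's a back-of-the-envelope check on where that roughly 5% figure comes from. This is just my own sketch, assuming the commonly quoted 21 Gbps GDDR6X and 20 Gbps GDDR6 speeds on the 4070's 192-bit bus:

```python
# Rough peak-bandwidth comparison of the two RTX 4070 memory configurations.
# Assumed figures: 192-bit bus, 21 Gbps GDDR6X vs. 20 Gbps GDDR6.

BUS_WIDTH_BITS = 192

def peak_bandwidth_gb_per_s(data_rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

gddr6x = peak_bandwidth_gb_per_s(21.0)  # ~504 GB/s
gddr6 = peak_bandwidth_gb_per_s(20.0)   # ~480 GB/s

print(f"GDDR6X: {gddr6x:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s")
print(f"Deficit: {(1 - gddr6 / gddr6x) * 100:.1f}%")  # ~4.8%, i.e. the "roughly 5%" above
```

So the gap is closer to 4.8% than a flat 5%, but the point stands.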

The RTX 4070 uses the same memory ICs that are in demand for building the high-margin RTX 4090. That's why this GDDR6 variant was even built: so they can reallocate the chip supply to the RTX 4090 assembly lines. If the performance is in the same ballpark, most users won't mind, so in that regard it makes no business sense to reduce the price. Demand is as high as it has ever been, after all.
 
The other thing is that we saw GDDR6 temperatures were moderately lower than GDDR6X's, and the card's power consumption was also a bit lower.

Yep. It's like I was saying all along. Net zero performance impact, a slight increase in power efficiency and lower thermals on the memory area. Nothing to see here.
 
Yep. It's like I was saying all along. Net zero performance impact, a slight increase in power efficiency and lower thermals on the memory area. Nothing to see here.
The text doesn't match the power consumption graphs. All in all, it looks like GDDR6X is now slightly more efficient than GDDR6.

View attachment 363015

View attachment 363014
 
Yep. It's like I was saying all along. Net zero performance impact, a slight increase in power efficiency and lower thermals on the memory area. Nothing to see here.
For the common none-the-wiser buyer, it may be just this simple.
But for us, tech-savvy enthusiasts who know what's going on, there's no way we won't ask for a price reduction when we know the BOM was lowered and NVIDIA's profit increased.
 
The text doesn't match the power consumption graphs. All in all, it looks like GDDR6X is now slightly more efficient than GDDR6.

View attachment 363015

View attachment 363014

Could be swapped; there are more than a few errors in that review. They need a proofreader badly, since most of it seems to refer to the card as a Super. It's such a minute change anyway; normal chip-to-chip variance, as well as the probably slightly higher clocks of the "OC" variant, could also account for the less-than-one-percent gap there. It doesn't invalidate that they are functionally equal: the G6 variant sits at 99% of the performance and 101% of the efficiency.

Hopefully W1zz will get a sample as well.

For the common none-the-wiser buyer, it may be just this simple.
But for us, tech-savvy enthusiasts who know what's going on, there's no way we won't ask for a price reduction when we know the BOM was lowered and NVIDIA's profit increased.

I agree but that's where they'll tell us to go pound sand :oops:
 
For the common none-the-wiser buyer, it may be just this simple.
But for us, tech-savvy enthusiasts who know what's going on, there's no way we won't ask for a price reduction when we know the BOM was lowered and NVIDIA's profit increased.

We don't know that Nvidia's profit increased. They only sell the GPU. The card manufacturers factor the cost of whichever VRAM they buy into their bottom line. Shouldn't they be the ones to adjust the price?
 
@evernessince
I haven’t mentioned pricing in this thread at all. Of course, hypothetically, it would be nice for the 4070 GDDR6 version to be cheaper. Hell, the 4070 itself has always been relatively overpriced for what it was. But here’s the rub - it won’t be cheaper. You know it, I know it. There’s no reason for it to be. It will still sell. Because that’s what a market in which one of the players holds an 80%+ share looks like. You saying it’s an “objectively cheaper card to build” kind of ignores the fact that there are A LOT of objectively inexpensive-to-make products that are still priced with insane margins, because that’s how branding and mind share work. What, you think an additional 128 gigs on an iPhone costs Apple 150 dollars?

I see no reason to tilt at windmills. Saying that something should be cheaper in a comment on an enthusiast forum won’t make NV reconsider their pricing policy. What would is real competition. And we’re fairly barren on that nowadays.

Can't argue with any of this. I just wish the market wasn't the way it is...
 
Man, the persecution complex of some people who have entangled their whole identity with AMD is real; to show that level of support I'd expect a paycheck, or free products.

This is looking like pretty much a nothingburger, as predicted. I do think Nvidia should at least make it transparent which memory variant you're buying, and with a smidge of luck price cuts will start in 3-6 months as the 50 series and RDNA4 hit the market.
 
Well, some people are still buying Radeon 6000 and RTX 3000 series. I can't imagine that either, yet it happens.

Check out the guy who's chased the performance dragon so far up the high-horse's arse that he lost sight of what the real world does. Or pretends to be clueless anyway - maybe you're one of those fake elitists. So, the real world buys what's available where they are at, at what they can afford. You prolly know this, your own online ego has been pressuring you to lie to yourself to gain clout points you'll never gain from people you'll never see. There's a lot of guys acting like you in this forum atm smh.

Has nothing to do with brand loyalty, most people don't really care - If AMD GPUs were competitive, they would sell.

Now you think you speak for "most people" lol. You're one of those. When it comes to die-hard Radeon users, there _is_ brand loyalty (I for example have not bought nV since G92), and there are many gamers using exclusively Radeons because they ARE competitive and they DO sell. As most of us know, most consoles use Radeons, and there are more console users than there are PC gamers. BTW, Radeons work just fine in PC games, too, even if it's onboard.
 
Man, the persecution complex of some people who have entangled their whole identity with AMD is real; to show that level of support I'd expect a paycheck, or free products.

This is looking like pretty much a nothingburger, as predicted. I do think Nvidia should at least make it transparent which memory variant you're buying, and with a smidge of luck price cuts will start in 3-6 months as the 50 series and RDNA4 hit the market.
Not having so much brand loyalty to Nvidia that you'd defend a product being priced the same despite no longer having the same specs isn't a persecution complex.
Is the leather jacket man paying well? When Nvidia launches a different version of the same card it's nothing, but when AMD does it people are upset over it.
 
Check out the guy who's chased the performance dragon so far up the high-horse's arse that he lost sight of what the real world does. Or pretends to be clueless anyway - maybe you're one of those fake elitists. So, the real world buys what's available where they are at, at what they can afford. You prolly know this, your own online ego has been pressuring you to lie to yourself to gain clout points you'll never gain from people you'll never see. There's a lot of guys acting like you in this forum atm smh.



Now you think you speak for "most people" lol. You're one of those. When it comes to die-hard Radeon users, there _is_ brand loyalty (I for example have not bought nV since G92), and there are many gamers using exclusively Radeons because they ARE competitive and they DO sell. As most of us know, most consoles use Radeons, and there are more console users than there are PC gamers. BTW, Radeons work just fine in PC games, too, even if it's onboard.

Yeah AMD is doing really well in the GPU space:

- Barely any AMD dGPUs in the top 25

This is a hardware forum; some of us care about hardware.
 
Is the leather jacket man paying well? When Nvidia launches a different version of the same card it's nothing, but when AMD does it people are upset over it.
When it performs within the margin of error, it's effectively the same product; plus, you can quote me saying they should tell buyers what version they're getting and that changing the specs is scummy.
Not having so much brand loyalty to Nvidia that you'd defend a product being priced the same despite no longer having the same specs isn't a persecution complex.
Zero brand loyalty, literally, zero. From what's been tested, the spec difference *checks notes* doesn't seem to make a difference to the product's performance. How about we see when and if that happens to an AMD product rather than having a pre-emptive cry about a made-up argument? We've got people in here with AMD tattooed on their brain complaining about something that hasn't happened; if it ever does, mark this post so you can quote me on how I feel about that [currently theoretical] situation. Because yes, having a pre-emptive cry that if AMD did the same they'd be treated worse is tantamount to a persecution complex. I hold everyone to the same standard, unlike some who wear their bias on their sleeve.
 
Not having so much brand loyalty to Nvidia that you'd defend a product being priced the same despite no longer having the same specs isn't a persecution complex.
Happens all the time. Sometimes manufacturers will replace resistors or capacitors because there are cheaper alternatives or because they can't source the original version anymore*. You were getting a 4070 before, you're still getting a 4070 now. If you were after a specific GDDR version, that's on the box, too.

*And then there's Kingston, who will only guarantee certain parameters for an SSD model while reserving the right to replace any and all components inside. That's a valid business practice, too, even if it will give tech-savvy users some pause.
 
@Dr. Dro
Right. I don’t think anything indicated that the 4070 was ever bandwidth starved (complaints were about the AMOUNT of VRAM) and 5% is a negligible decrease. And the regular GDDR6 should be less power hungry and cooler than 6X, so it might be an arguable improvement.

Just a follow-up to this: we can confirm at this point that the GDDR6 model is 4% slower on average and is less power efficient:

The 4070 GDDR6 model consumes 1.52% less power while being 4% slower. It also consumes more power at idle. So overall a small reduction in efficiency.

View attachment 363392

I believe the video describes it perfectly: it's a small, sneaky downgrade. It's baffling why Nvidia does things like this when it would be so easy for them to avoid.
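To put a rough number on "a small reduction in efficiency", here's a toy performance-per-watt calculation using the two figures above (4% slower, 1.52% less power). It's my own arithmetic, not something from the review:

```python
# Toy efficiency comparison: performance per watt of the GDDR6 card relative to GDDR6X.
# Inputs are the figures quoted above: 4% slower on average, 1.52% lower power draw.

perf_ratio = 0.96     # GDDR6 delivers ~96% of the GDDR6X card's average framerate
power_ratio = 0.9848  # GDDR6 draws ~98.48% of the GDDR6X card's power

efficiency_ratio = perf_ratio / power_ratio  # performance per watt, GDDR6 vs. GDDR6X

print(f"Relative efficiency: {efficiency_ratio:.3f}")           # ~0.975
print(f"Efficiency loss: {(1 - efficiency_ratio) * 100:.1f}%")  # ~2.5% worse perf/W
```

So roughly 2.5% worse performance per watt, which fits the "small reduction" framing.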
 
Just a follow-up to this: we can confirm at this point that the GDDR6 model is 4% slower on average and is less power efficient:

The 4070 GDDR6 model consumes 1.52% less power while being 4% slower. It also consumes more power at idle. So overall a small reduction in efficiency.

View attachment 363392


I believe the video describes it perfectly: it's a small, sneaky downgrade. It's baffling why Nvidia does things like this when it would be so easy for them to avoid.
First of all, I don't see how a 5% bandwidth reduction results in a 4% performance penalty*.
Second of all, 4% is within the margin of error; you can get that if you benchmark the same video card model from different batches.

*Notice how you can go from 8200 all the way down to 6800 to see that kind of difference. Even then, it's not across the board.
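One way to sanity-check that intuition is a crude two-part frame-time model: only the bandwidth-bound portion of each frame slows down with the bandwidth cut, so the average workload would have to be almost entirely bandwidth-limited for a ~5% cut to cost ~4% in framerate. The fractions below are made up purely for illustration:

```python
# Crude frame-time model: a fraction f of each frame is memory-bandwidth-bound,
# the rest is compute/latency-bound and unaffected by the bandwidth cut.
# The bandwidth-bound fractions are made-up illustrative values, not measurements.

def perf_loss(bandwidth_cut: float, bandwidth_bound_fraction: float) -> float:
    """Relative performance loss if only the bandwidth-bound part of the frame
    stretches in proportion to the bandwidth deficit."""
    slowed = bandwidth_bound_fraction / (1.0 - bandwidth_cut)  # that part takes longer
    new_frame_time = (1.0 - bandwidth_bound_fraction) + slowed
    return 1.0 - 1.0 / new_frame_time

for f in (0.25, 0.5, 0.8):
    print(f"{f:.0%} bandwidth-bound -> {perf_loss(0.048, f):.1%} slower")
# ~1.2%, ~2.5%, ~3.9%: only a heavily bandwidth-bound workload gets close to the full ~5%
```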
 
First of all, I don't see how a 5% bandwidth reduction results in a 4% performance penalty*.
Second of all, 4% is within the margin of error; you can get that if you benchmark the same video card model from different batches.

*Notice how you can go from 8200 all the way down to 6800 to see that kind of difference. Even then, it's not across the board.

GPU memory bandwidth is generally more taxed with higher framerates, thus showing a greater effect at lower resolutions. The 4% drop is at 1080p, whereas it's 3% at 1440p where this GPU is more likely to be used. 2% at 4K FWIW to further show the trend. If it's repeatable, then it's not within the margin of error but it's well within the margin of not noticing it when you're playing if there's no fps counter active. However with no reduction in price it's also lower value and frankly, I'd like that 3% price reduction to $534 list.
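As a quick sanity check on that figure, here's a hypothetical "deficit-matched" price calculation, assuming a roughly $550 list price (these are obviously not real prices):

```python
# Hypothetical list prices matching the measured performance deficit at each resolution.
# Assumes a ~$550 base list price; purely illustrative, nothing official.

LIST_PRICE = 550.0
deficits = {"1080p": 0.04, "1440p": 0.03, "4K": 0.02}  # performance gaps quoted above

for res, d in deficits.items():
    print(f"{res}: ${LIST_PRICE * (1 - d):.0f}")
# 1080p: $528, 1440p: $534, 4K: $539 -> the $534 above is the 1440p-matched figure
```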
 
First of all, I don't see how a 5% bandwidth reduction results in a 4% performance penalty*.
Second of all, 4% is within the margin of error; you can get that if you benchmark the same video card model from different batches.

*Notice how you can go from 8200 all the way down to 6800 to see that kind of difference. Even then, it's not across the board.

Most card variants are within the margin of error of one another. That doesn't mean the data on their performance delta versus the reference is irrelevant. If the data can be repeated, it's valid.

Mind you IMO that's only part of the problem. Nvidia is selling customers a materially cheaper product without letting customers know (like for example adding something to the model name). Some SSD manufacturers do this and it ruins trust in their brand because people don't know the exact specs of what they are getting. Customers are required to be vigilant because in a capitalist market with no laws preventing this sort of thing, that's the only protection we have against it becoming even worse.
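To illustrate the repeatability point, here's a small sketch with made-up run-to-run numbers (none of this comes from the actual reviews): averaging repeated runs shrinks the uncertainty, so a gap that keeps showing up is a real difference even if a single run's noise is of similar size.

```python
# Sketch: why a repeatable small delta isn't just "margin of error".
# The FPS values below are hypothetical, purely for illustration.

import statistics

runs_a = [100.2, 99.1, 100.8, 99.9, 100.5]  # hypothetical FPS, card A
runs_b = [96.3, 95.8, 96.9, 96.1, 96.6]     # hypothetical FPS, card B

mean_a, mean_b = statistics.mean(runs_a), statistics.mean(runs_b)
# The standard error of the mean shrinks as more runs are averaged together.
sem_a = statistics.stdev(runs_a) / len(runs_a) ** 0.5
sem_b = statistics.stdev(runs_b) / len(runs_b) ** 0.5

delta = (1 - mean_b / mean_a) * 100
print(f"A: {mean_a:.1f} ± {sem_a:.1f} FPS, B: {mean_b:.1f} ± {sem_b:.1f} FPS, delta ≈ {delta:.1f}%")
# If the ~4% gap keeps reproducing across runs, it's a real difference,
# even though any single run sits within typical run-to-run variance.
```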
 
Mind you IMO that's only part of the problem. Nvidia is selling customers a materially cheaper product without letting customers know (like for example adding something to the model name).
Are you insane? It says GDDR6 on the box, that's not different enough for you?
Do you also happen to think car manufacturers should release a different model for each trim?
 
Are you insane? It says GDDR6 on the box, that's not different enough for you?
Do you also happen to think car manufacturers should release a different model for each trim?

Look at the graphics on a typical GPU box. There is a single-letter difference between these 2 models. One letter, and it's not very large, and that's only if you have 2 GPUs side by side to compare and you know there's supposed to be a difference. It's the GPU version of Where's Waldo.

Yes, by the time you're spending $550 on a GPU you should be doing a lot of homework, but the difference in labeling between the 2 is very subtle. There was a thread in the forums here last week with someone who wanted a 3060-class GPU and thought the 3060 8GB was the standard configuration rather than a cut-down version of the 12GB card. Same problem with a single different number, and an experienced forum member didn't know this product existed and assumed it was new when it's been out for almost 2 years now.

These changes are deceptive when the major model name doesn't change.
 
GPU memory bandwidth is generally more taxed with higher framerates, thus showing a greater effect at lower resolutions. The 4% drop is at 1080p, whereas it's 3% at 1440p where this GPU is more likely to be used. 2% at 4K FWIW to further show the trend. If it's repeatable, then it's not within the margin of error but it's well within the margin of not noticing it when you're playing if there's no fps counter active.
This is really strange. Memory bandwidth usually plays a larger part when resolution increases. The difference percentages would make much more sense the other way around.
Will TPU make a full review of a 4070 GDDR6? I am interested in how the power consumption and clock speeds turn out.
 
This is really strange. Memory bandwidth usually plays a larger part when resolution increases. The difference percentages would make much more sense the other way around.
Will TPU make a full review of a 4070 GDDR6? I am interested in how the power consumption and clock speeds turn out.

I'm just going on what I observe in GPU-Z when testing with the same GPU at different resolutions. As the fps goes up at lower resolutions, while overall GPU power usage stays the same, you'll see the Memory Controller Load % readout go up. Maybe it only works this way in some games or benchmarks and not others? I certainly haven't done an exhaustive look at this, but now I'm interested. Generally I'm testing this with 3DMark Time Spy's second Graphics Test, as it's picky about both memory speed and GPU core overclocking, though I also use their Speed Way and more recently Steel Nomad tests.
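Here's a toy illustration of why that can happen (completely made-up numbers, just to show the shape of the argument): total memory traffic per second is per-frame traffic times framerate, so if the fps at the lower resolution rises faster than the per-frame traffic falls, the memory controller ends up busier at the lower resolution.

```python
# Toy model: memory traffic per second = traffic per frame * frames per second.
# All numbers are made up purely to illustrate the argument; they are not measurements.

scenarios = {
    # resolution: (GB of memory traffic per frame, achieved FPS)
    "1080p": (2.0, 180),  # lighter frames, but far more of them per second
    "4K": (5.5, 60),      # heavier frames, far fewer per second
}

for res, (gb_per_frame, fps) in scenarios.items():
    demand = gb_per_frame * fps  # GB/s the memory subsystem is asked to deliver
    print(f"{res}: {demand:.0f} GB/s demanded")
# 1080p: 360 GB/s vs. 4K: 330 GB/s -> the lower resolution can lean harder on bandwidth
```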
 