Tuesday, August 23rd 2022

NVIDIA GeForce RTX 4080 Could Get 23 Gbps GDDR6X Memory with 340 Watt Total Board Power

NVIDIA's upcoming GeForce RTX 40 series graphics cards are less than two months from their official launch. As we near the final specification draft, we are constantly getting updates from hardware leakers claiming that the specification is ever-changing. Today, @kopite7kimi has updated his GeForce RTX 4080 GPU predictions with some exciting changes. First off, the GPU memory gets an upgrade over the previously believed specification. Before, we thought that the SKU used GDDR6X running at 21 Gbps; now, it is assumed to use a 23 Gbps variant. Faster memory should translate into better overall performance, and we have yet to see what it can achieve with overclocking.

Next, another update for the NVIDIA GeForce RTX 4080 concerns the SKU's total board power (TBP). Previously, we believed it came with a 420 Watt TBP; however, kopite7kimi's sources claim that it has a 340 Watt TBP. This 80 Watt reduction is rather significant and could be attributed to NVIDIA optimizing for the most efficient design possible.
Sources: kopite7kimi (Twitter), via VideoCardz

82 Comments on NVIDIA GeForce RTX 4080 Could Get 23 Gbps GDDR6X Memory with 340 Watt Total Board Power

#51
Xaled
Crackong: 700->600->450->340

I think we could see a '150W TDP' rumor in the middle of September.
The only reason they pulled it from 700w to 340w is the drop in cryptocurrency values: they figured that under current circumstances even a 700w card won't make a good mining card, so they pulled the values back to more "humane" ones. All the rumors about optimizations etc. are just BS.
Posted on Reply
#52
tpu7887
trsttte: People need to stop rationalizing these senseless increases in power draw; it's absurd, pure and simple.
3080 is about 40% faster than the 2080Ti.
3080 has about 30% higher TDP than 2080Ti.

3080 was built on 8nm compared to 12nm of 2080Ti...

That's not progress!

What are these CLOWNS up to?
Seriously
WHAT THE F**

I had a write up about CPUs done up, essentially reaming out Intel. But as I wrote what's above,...

WHAT THE F*


Anyone?
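Setting the tone aside, the perf-per-watt implied by those two figures is worth actually computing. A quick illustrative check, using only the rough percentages quoted above (not measured data):

```python
# Rough generational math from the figures above:
# RTX 3080 ~40% faster than the 2080 Ti, at ~30% higher TDP.
speedup = 1.40     # relative performance
tdp_ratio = 1.30   # relative board power

perf_per_watt_gain = speedup / tdp_ratio
print(f"perf/W improvement: {(perf_per_watt_gain - 1) * 100:.1f}%")
# about 7.7% better performance per watt, despite the node shrink
```

So by these numbers the efficiency gain from 12nm to 8nm was in the single digits, which is the point being made.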
#53
nguyen
tpu7887: 3080 is about 40% faster than the 2080Ti.
3080 has about 30% higher TDP than 2080Ti.

3080 was built on 8nm compared to 12nm of 2080Ti...

That's not progress!

What are these CLOWNS up to?
Seriously
WHAT THE F**

I had a write up about CPUs done up, essentially reaming out Intel. But as I wrote what's above,...

WHAT THE F*


Anyone?
Just drag the power slider on the 3080 down to 250W and it would still outperform 2080Ti by 30%.
If people don't know how to do that, perhaps just buy an RX 6400 and call it a day
#54
beedoo
Vayra86: Isn't the reality, without cognitive dissonance, that we should ALL show some restraint to provide future generations a habitable planet?
I hope, while you're fretting about the fate of the planet, you stop farting, don't breed and don't have pets.

I also hope that if one day it's discovered that this is one of the greatest cons of our time, those driving the climate change narrative get jail time.

Also, I hear the earth is actually flat.
#55
trsttte
wolf: They're not the only ones ;)

No 'founders/reference model' did, but AIBs certainly did, and the high end of the stack still had 250w+ parts, like dual-GPU cards; same for AMD.

Also AMD single GPU between Fermi and current generation:
  • R9 290/390 - 275w
  • R9 290X/390X - 290w/275W
  • R9 Fury X - 275w
  • Vega 64 - 295w
  • Radeon VII - 295w
I'm not accounting for factory OC of course, and the higher power from the AMD side (which still couldn't compete with NVIDIA at the time) goes to show the point I made in a different comment: this power level increase with Ampere happened because they needed it to retain the gaming/benchmark crown.
TheinsanegamerN: If you don't like it….. don't buy it. Nobody is forcing you to buy a 4080. You can buy a 4070 or 4060 with the power draw you desire, and better performance, and save money! Win win!
It's not as simple or clear-cut; this is an overarching trend that extends to the entire lineup, and it will only get worse if not scrutinized. My original response, which many interpreted as just climate-conscious or whatever (not false, but not really the point I was going for), was specifically aimed at the "normal in this day and age" comment. What in this day and age makes it OK to raise the power target they kept self-imposed for more than a decade? A day and age with all the current problems, no less, and with more software solutions to bridge the gap, like DLSS (which NVIDIA even pioneered!)

I'm not telling anyone what to buy or not, enjoy whatever you enjoy, but not that long ago there was pretty much only one player in the GPU space (AMD wasn't really competitive), and right now there are still only two (Intel has yet to deliver anything other than teasers), and they're both just using cheap tricks like moving the power needle to beat the other instead of actually innovating (I'd say NVIDIA more so, since AMD is closer to their usual higher target, which should be viewed as a disadvantage, as it was for years)
#56
kiakk
Seems the Green company has a fckN DARK future planned with high-power-consuming products. :slap: I can't see this trend as a good thing.
1. At the very least, don't raise the TDP; better to lower it bit by bit with newer architectures, and
2. software devs should work harder on better optimization.
These 2 things should be the trend of this time period. Not just because of energy prices, but much more because of humanity's energy consumption.

The USA administration should regulate these companies along these 2 directives.
#57
Vayra86
beedoo: I hope, while you're fretting about the fate of the planet, you stop farting, don't breed and don't have pets.

I also hope that if one day it's discovered that this is one of the greatest cons of our time, those driving the climate change narrative get jail time.

Also, I hear the earth is actually flat.
Everybody farts! I'm on a real farting spree the last couple days tbh. I kid you not
tpu7887: 3080 is about 40% faster than the 2080Ti.
3080 has about 30% higher TDP than 2080Ti.

3080 was built on 8nm compared to 12nm of 2080Ti...

That's not progress!

What are these CLOWNS up to?
Seriously
WHAT THE F**

I had a write up about CPUs done up, essentially reaming out Intel. But as I wrote what's above,...

WHAT THE F*


Anyone?
The price of RT.
#58
Sisyphus
Every new GPU generation has higher computing power per watt. End of rational discussion.
What a person believes they need for gaming (high-end GPUs . . . ) is irrational.
Suggested solution:
Only the non-overclockable or underclocked GPUs and CPUs are sold in the USA.

The free versions in the rest of the world. :clap:
#59
MentalAcetylide
wolf: As far as I'm concerned, unless you're gaming on a Steam Deck, Switch or equivalent, you're bloody irresponsible with your outrageous and wasteful power usage in the name of a better gaming experience.

Your 125w GFX card is a disgrace to future generations, when power is only going to cost more and the planet is heating up, we're in a recession and next summer is set to be the hottest on record!

Responsible gamers should all limit themselves to sub 30w draw for the entire device, or you'll be answering to my sweaty, poor grandchildren dammit.
I think the most I draw is around 340 watts when gaming, but it usually stays between 240-285. Even running Frontline 30 vs. 30 in World of Tanks with max settings doesn't seem to go above 300 watts much. Rendering draws a lot more, since iray just goes full blast on whatever devices you have set up to render, which could be the CPU, 1 or more graphics cards, or both CPU & graphics card(s). The one card gets up to 85C when rendering since it's a workstation card that uses a blower.
#60
TheinsanegamerN
wolf: I mean, I outlined the many ways in which I do

That's pretty personal dude, I wouldn't call running an undervolted 3080 or say 4080 an utter waste, but hey, you do you. I told you all I can really do is rationalise it myself, the whataboutism is a fun little extra.

Again man, you do you. I don't think my 3080 is bullshit, and yeah, RT so far has been a very fun path to walk with this class card. Metro EE, DOOM Eternal, Control, Dying Light 2, Spiderman... The experience with all the dials up is right up my alley, because it's not up yours doesn't automatically make it bullshit to everyone else.

Well, just keep buying into that segment then, easy. Both RDNA3 and Ada will have products that want for 1x8pin / <225w, and they'll be faster than Ampere or RDNA2 (at equal power draw), vote with that wallet.

They're not the only ones ;)

No 'founders/reference model' did, but AIB's certainly did and the high end of the stack still had 250w+ parts, like Dual GPU cards, same for AMD.

Also AMD single GPU between Fermi and current generation:
  • R9 290/390 - 275w
  • R9 290X/390X - 290w/275W
  • R9 Fury X - 275w
  • Vega 64 - 295w
  • Radeon VII - 295w
Winning Logic. I also consistently see a lot of complaints from what amounts to people who would've never bought the card anyway, even if it fit their power budget, because of things like brand and price.
What always gets me about the complaints is the new xx8x cards drawing so much more power than the old xx8x cards, while ignoring the sheer performance difference.
Like, if you want a 225watt card, not only can you still buy them, but now they are the cheaper xx7x or xx6x tier cards. So you don’t need the flagship anymore!

Do people just want the top-tier cards to be limited in power so they feel justified in spending $700 on an RTX 4060? Doesn't make any sense to me.
#61
kiakk
Sisyphus: Every new GPU generation has higher computing power per watt. End of rational discussion.
Efficiency and overall power consumption are 2 different things. Especially given the paradox where growing efficiency leads to more power consumption through a growing standard of living:
Jevons Paradox. Read about it.
That is one of the reasons overall global energy consumption keeps growing without stopping.

Seems a bit off-topic under an NVIDIA thread. BUT actually it is not, because I play games and I, and ever more people, care about the environment and consuming less energy.

And I am not an idiot fan of Greta. I also have allergic reactions if I just see her. We simply have to do things a little better and live a little more nature-conscious. That's it. That is why I wrote: at least do not raise the TDP. But they keep pushing it further over time. Not so far in the past, the GTX 1080 was only 180W TDP.
(source: TPU GTX 1080) If the technology is developing and efficiency is growing, it is enough to limit the new card's TDP, because overall performance will still increase. But per the historical data, the Jevons paradox works...
#62
20mmrain
Dirt Chip: Another 'leak', another tweet, another thread, another 'inside info'.
No matter the content, if you engaged in any way then it worked.
The PR is very effective nowadays; see how much talk about watts is being generated.
So easy to make people participate in the PR loop.

I'm doing my part too, of course: free entertainment that keeps me busy for a while, a sort of geek HW escapism, and the PR machine gets its profit.
Win-win situation :)
This is the trend in ALL Journalism. Report the first thing you hear, damn the facts, don't check references, and never retract anything.

All journalism has turned into the Jerry Springer rumor mill. (Politics, Tech, Economics, Crime, all of it!)

TPU at least used to say things like "Take this with a grain of salt" but that is becoming more rare here as well.
That said, when it comes to tech I do like to read some rumors :)~
#63
maxfly
My grandchildren game on phones or pathetic little tablets (friggin blasphemy). They hate using the computers I built them, for anything (equally blasphemous). Touchscreen or death for them.

So don't worry, GPUs aren't going to kill the world.


This is clearly stated to be a prediction. TPU apparently needs to stop assuming its forum members can read AND comprehend now?
#64
Vader
Lew Zealand: You can do both with good design, but you can't get those chart-topping scores that everyone wants to brag about without over-the-top power consumption.

Edit: Added Horizon: Zero Dawn

I did some tests recently with my 6600XT, an efficient card to begin with, but one that still ships overclocked out of its efficiency range out of the box.

Default core clock: 2668 MHz, ~2600 MHz in-game, 16Gbps Mem, 132-150W (Power +20%), 1.15v
Overclocked: 2750 MHz, ~2690 MHz in-game, 17.6Gbps Mem, 145-163W (Power +20%, hits power limits), 1.15v
Underclocked: 2050 MHz, ~2000 MHz in-game, 16Gbps Mem, 70-75W, 0.862v

But what of performance? Canned game benchmark runs, as I'm not a consistent in-game benchmarker:

R5 5600, 3200 MHz CL16, 1440p, PCIe 3.0 (B450 board), no Motion Blur

CP2077 - 18% lower fps at High settings
SotTR - 12% lower fps at HUB rec settings (~High but better visuals)
Forza Horizon 4 - 17% lower fps at Ultra settings
Horizon: Zero Dawn - 16% lower fps at Favor Quality settings (High)

1% lows were about the same in CP2077 (need to do more runs at 2690 to confirm), -13% in SotTR, -14% in FH4, and -11% in H:ZD.

So about a -15% FPS tradeoff for half the power usage. Comparing runs directly to each other, the 2000MHz test used 51% of the power of the 2690MHz tests in the same games (148W in CP2077 and SotTR vs 75W, 118W in FH4 vs 60W, 139W in H:ZD vs 71W).

15% fewer frames is a lot and it's not a lot, depending on what you're looking for.
Thank you, this is the type of discussion we need right now. Anyone who's worried about power consumption can do their own tweaking. The tools are there, have been for years now, and are really easy to use with no risk whatsoever, and without breaking the warranty even.
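For what it's worth, Lew Zealand's tradeoff above can be put in frames-per-watt terms. A rough sketch using only the averages quoted in the post (~15% fewer frames at ~51% of the power):

```python
# Underclocked 6600XT per the post: ~15% fewer frames at ~51% of the power.
fps_ratio = 0.85    # relative frame rate after underclocking
power_ratio = 0.51  # relative power draw (e.g. ~75 W vs ~148 W)

frames_per_watt_gain = fps_ratio / power_ratio
print(f"frames per watt vs stock: {frames_per_watt_gain:.2f}x")
# about 1.67x the frames per watt at the lower clocks
```

In other words, giving up roughly a sixth of the frames buys back two thirds more efficiency.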
#65
Sisyphus
kiakk: Efficiency and overall power consumption are 2 different things. Especially given the paradox where growing efficiency leads to more power consumption through a growing standard of living:
Jevons Paradox. Read about it.
That is one of the reasons overall global energy consumption keeps growing without stopping.
Overall global energy consumption is growing because large parts of Asia were underdeveloped. About 2 billion people left poverty during industrialization, and the world population is rising. In the Western world, energy consumption has been going down for about the last 10-15 years. This trend will continue.
Seems a bit off-topic under an NVIDIA thread. BUT actually it is not, because I play games and I, and ever more people, care about the environment and consuming less energy.

And I am not an idiot fan of Greta. I also have allergic reactions if I just see her. We simply have to do things a little better and live a little more nature-conscious. That's it. That is why I wrote: at least do not raise the TDP. But they keep pushing it further over time. Not so far in the past, the GTX 1080 was only 180W TDP.
(source: TPU GTX 1080) If the technology is developing and efficiency is growing, it is enough to limit the new card's TDP, because overall performance will still increase. But per the historical data, the Jevons paradox works...
They raise the TDP because there is competition. There are consumers who want the fastest and consumers who want the most economical solution. I prefer the economical solution, use undervolting anyway, have my 2070 at around 100-150W, and will buy a 4070, which I will bring down to about 150W as well, more than doubling the computing power. It depends on the consumer what he wants.

From my life experience, I would say you can't stop climate change; there is no global dictatorship able to bring all people's consumption down to a scientifically calculated, sustainable level, stop population growth, and bring all people to the same level. So what will happen: production costs in the West rise, and production moves to countries like China, which are much worse in environmental protection and ideology. Keep technological leadership, big economic resources, and free markets; then you will be able to react adequately to climate change and protect your population. You can see this everywhere. Poor countries aren't able to protect their people and are destroying their nature more rapidly. If you interfere, you get a war. NVIDIA or AMD will change nothing here; these people have other problems to discuss than the power consumption of high-end GPUs.
#66
Lew Zealand
Vader: Thank you, this is the type of discussion we need right now. Anyone who's worried about power consumption can do their own tweaking. The tools are there, have been for years now, and are really easy to use with no risk whatsoever, and without breaking the warranty even.
Most people don't care and I do only because I came into PC gaming using NUCs and old laptops where squeezing every last bit of efficiency with undervolting and controlling clocks is needed to eke out the optimal gaming and cooling experience from power and thermally-limited hardware.

Getting a Gaming X 1060 PC was cool-running luxury but soon I was back to the efficiency game with a cheap PNY 1080 with a crap cooler. Running that at its efficiency peak (1911-1923 MHz @0.9v, 135W max) taught me where current GPUs do their best and allowed it to run at 74°C in warm summer rooms instead of thermal throttling at 83°C. Plus, electronics are likely to last longer if not run at their very edge of capability. IMO 5 or even 10% down is an OK tradeoff for longer equipment life.

It does seem to me, based on this 6600XT, that using an Infinity Cache with a narrower memory bus allows for a greater reduction in minimum power usage, or at least a closer-to-linear reduction in power usage when underclocking into the efficiency zone. In other words, I expect the RTX 3000 series not to be able to lower their power as much as the RX 6000 series because of their wider buses. Also, using GDDR6X memory will kill these types of efficiency improvements. I'd love to see how a 6700XT or 6800/XT does at these lower clocks, to see if they also benefit as much from running in their efficiency zone.

It would be interesting to see if there's a sweet spot of perhaps even larger cache and relatively narrow but fastest GDDR6 non-X memory bus which allows for greater FPS per watt when run in the efficiency zone. Like a 6800XT core count with a 192-bit bus but 256MB IC that performs like a 6800 but with 6700XT or even lower power requirements when run around 2000MHz cores? It'll never happen but I wonder if that would even be a viable performer or if instead it runs up against another performance brick wall that I'm not considering.
#67
wolf
Better Than Native
MentalAcetylide: I think the most I draw is around 340 watts when gaming
For 99.9% of my games I run my 3080 at 1725mhz @ 775mv; most games are 200-230w, and absolute ball-tearers that use every last ounce can have it hit 255w. Stock is about 5% faster than the UV and uses the full 320w; unlimited power and an overclock is 7-8% faster than the UV and uses 370w. Bear in mind it's at 4k targeting 120fps too, so it's really pushing the Ampere arch.

No matter the TDP of the next product I buy, and it could be from either brand, I'd certainly undervolt it to increase efficiency.
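wolf's figures above make the undervolting case concrete. A rough sketch: the 215 W midpoint of the quoted 200-230 W range is my assumption, purely for illustration:

```python
# wolf's reported undervolt: stock is ~5% faster but draws the full 320 W,
# while the UV profile sits around 200-230 W. The 215 W midpoint here is
# an assumed figure for illustration, not a measurement from the post.
uv_power = 215        # watts, midpoint of the quoted 200-230 W range
stock_power = 320     # watts, stock power limit
stock_speedup = 1.05  # stock performance relative to the undervolt

perf_per_watt_gain = stock_power / (uv_power * stock_speedup)
print(f"undervolt perf/W advantage: {(perf_per_watt_gain - 1) * 100:.0f}%")
# roughly 42% better performance per watt from the undervolt
```

That is, giving up ~5% performance buys back on the order of 40% efficiency, which is why the last few percent of stock clocks are so expensive.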
#68
Lei
Xaled: The only reason they pulled it from 700w to 340w is the drop in cryptocurrency values: they figured that under current circumstances even a 700w card won't make a good mining card, so they pulled the values back to more "humane" ones. All the rumors about optimizations etc. are just BS.
Hello, Ethereum's going proof-of-stake, not Bitcoin.
Satoshi Nakamoto is dead, you know; no one can make Bitcoin PoS. It will be mined for another 140 years

#69
mb194dc
Nvidia earnings were a bloodbath on the consumer side, and this is before any real recession has gotten going. This will be why the 4080 specs are changing; it'll need to be cheap with reasonable power consumption.
"Nvidia's gaming department revenue was down 33% year-over-year to $2.04 billion, which was a sharper decline than the company anticipated. Nvidia said that the miss was because of lower sales of its gaming products, which are primarily graphics cards for PCs."
#70
Dirt Chip
mb194dc: Nvidia earnings were a bloodbath on the consumer side, and this is before any real recession has gotten going. This will be why the 4080 specs are changing; it'll need to be cheap with reasonable power consumption.
Very naive to think this kind of GPU spec changes according to post responses..

Much more reasonable is that many people profit from the 'leak' prior to launch, to generate traffic and to get your 'unbelievable and amazed' reaction. All PR stunts.

You may see many power consumption numbers and all are true, given a specific test. Each test looks into a specific parameter, and some of the tests need to loosen all TDP constraints, hence 900w+ for the 4090 and so on.
So the company gets endless, free exposure for its upcoming product by letting you see and talk about each test's power consumption and the 'fluctuation' between tests. Power consumption, TDP and global warming buzzwords together, all in the service of a very intentional marketing campaign. Nothing more.

And if they, as a bonus, made you think you have any degree of influence on the process before launch, well then, it's a pure win on their side, and you will probably earn them even more free PR in the future because you posted and, for sure, made a change.
;)
#71
Sisyphus
mb194dc: Nvidia earnings were a bloodbath on the consumer side, and this is before any real recession has gotten going. This will be why the 4080 specs are changing; it'll need to be cheap with reasonable power consumption.
There will be no cheap and reasonable power consumption for high end GPUs. Wrong peer group.
#72
MentalAcetylide
Sisyphus: There will be no cheap and reasonable power consumption for high end GPUs. Wrong peer group.
Exactly. Unless there's some new earth-shattering technological breakthrough in energy, high performance + cheap will never be possible. Otherwise, we would all be driving sports cars and using high-end computers that can get 120+ FPS on a 16k display at the cost of pennies per day for the energy consumed. This equates to doing WORK requiring the energy of small-scale nuclear fusion/fission in your own house or car without the need to put the energy in. Solar is about as good as it gets, but the costs become astronomical when you consider the hardware and recurring costs. The laws of physics have us by the balls. :(
#73
mahirzukic2
MentalAcetylide: Exactly. Unless there's some new earth-shattering technological breakthrough in energy, high performance + cheap will never be possible. Otherwise, we would all be driving sports cars and using high-end computers that can get 120+ FPS on a 16k display at the cost of pennies per day for the energy consumed. This equates to doing WORK requiring the energy of small-scale nuclear fusion/fission in your own house or car without the need to put the energy in. Solar is about as good as it gets, but the costs become astronomical when you consider the hardware and recurring costs. The laws of physics have us by the balls. :(
It's not necessarily that the laws of physics have us by the balls, but rather that we have insatiable wants. We always want more and can never be fully content.
It's a never-ending battle of ever-moving goal posts.
#74
tpu7887
nguyen: Just drag the power slider on the 3080 down to 250W and it would still outperform 2080Ti by 30%.
If people don't know how to do that, perhaps just buy an RX 6400 and call it a day
The 1030 in the HTPC I made for my parents runs at 1733MHz with 0.900V (instead of ~1.05V)
I don't have a 2080Ti, but I bet... if you dragged its slider down too...

Edit: every device should have a button that switches clocks and voltages to "most for least".

One thing needs to be taken into account to make this button work its best, though: so that things don't become unstable during their warranty period (become "broken" to most people), extra voltage over the minimum required at manufacture is needed as time passes.

Up to this point what has been done by AMD/Intel/nVidia/all semiconductor companies, is the use of a voltage offset. They ask: What's required for stability of the chip while running at its peak temperature before throttling (usually 100deg C)? 1.000V. So what's its voltage set to? 1.100V. This ensures that at month 6, when 1.007V is required, an RMA isn't happening.

Instead of doing this, there is no reason why voltage can't be optimized to increase over time depending on hours powered on, temperature, and utilization. To keep things REALLY simple, they could even just go by time since manufacture and be really conservative with the ramp-up. There would still be a span of about two years where they could ramp from 1.000V to 1.100V; TONNES of power would be saved worldwide, even from that.

My 2500K: I ran it at 4200MHz, undervolted by more than 0.25V below its VID for 3700MHz (the stock max turbo boost speed), so this isn't a new problem.
Today, because it's old and still in use (parents' HTPC), it's getting 0.02V less than specified by its VID and running (Prime-stable) at 4800MHz with DDR3-2133 C10 at 1.675V. VCCIO was increased to 1.200V to keep the minimum 0.5V differential that prevents damage to the IMC (people running Nehalem/Sandy/Ivy/Haswell/Broadwell with DDR3 at 1.65V should have done this, but I don't think the majority did; I guess it didn't end up being too important lol). But back to my point: so much more voltage is used than is needed, and so much power is wasted because of it.
#75
agent_x007
tpu7887: One thing needs to be taken into account to make this button work its best, though: so that things don't become unstable during their warranty period (become "broken" to most people), extra voltage over the minimum required at manufacture is needed as time passes.

Up to this point, what has been done by AMD/Intel/nVidia/all semiconductor companies is the use of a voltage offset. They ask: What's required for stability of the chip while running at its peak temperature before throttling (usually 100deg C)? 1.000V. So what's its voltage set to? 1.100V. This ensures that at month 6, when 1.007V is required, an RMA isn't happening.

Instead of doing this, there is no reason why voltage can't be optimized to increase over time depending on hours powered on, temperature, and utilization. To keep things REALLY simple, they could even just go by time since manufacture and be really conservative with the ramp-up. There would still be a span of about two years where they could ramp from 1.000V to 1.100V; TONNES of power would be saved worldwide, even from that.
There is no "work best", because each card/GPU WILL perform differently based on the quality of the die it got.
Manufacturers would be sued for not delivering on marketing promises, because clock speeds/performance wouldn't be at the level claimed for all cards of the same name.

The offset is only partially true: they simply use the highest stable frequencies tested at the highest "rated" (by TSMC/Samsung/etc.) voltage numbers (because the guys in marketing like bigger numbers).

My solution:
Make cards with max boost limited to a lower frequency (1.5 - 2GHz) and a voltage of 0.8-0.85V, with additional V/F options available through an "OC" mode that can be enabled via driver_option/AIB_program/3rd-party_OCtool. Maybe display an appropriate warning about the implications (for longevity/thermals/noise/long-term performance/etc.) BEFORE using those higher-rated modes (or just do an "Uber" vBIOS switch on the PCB?).

BTW: ALL NV Boost 3.0 cards are capable of this (i.e. since Pascal), but NV simply likes to "push things to 11", because... reasons.
Proof?
ALL GPUs mentioned contain stable, manufacturer-tested lower frequency/voltage combinations a particular card can use. It's in the V/F table, and you can "lock" the GPU to any of them via Afterburner (for example).