Tuesday, August 23rd 2022
NVIDIA GeForce RTX 4080 Could Get 23 Gbps GDDR6X Memory with 340 Watt Total Board Power
NVIDIA's upcoming GeForce RTX 40-series graphics cards are less than two months from their official launch. As we near the final specification, we keep getting updates from hardware leakers showing that it is still in flux. Today, @kopite7kimi has updated his GeForce RTX 4080 predictions with some exciting changes. First off, the GPU memory gets an upgrade over the previously believed specification: earlier, the SKU was thought to use GDDR6X running at 21 Gbps, but it is now assumed to use a 23 Gbps variant. Faster memory should translate into better overall performance, and we have yet to see what it can achieve with overclocking.
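To put the memory bump in perspective, here is a quick back-of-the-envelope sketch in Python; the 256-bit bus width is purely an assumption for illustration (it is the width rumored for this SKU, not anything NVIDIA has confirmed):

# Rough peak-bandwidth math, assuming (for illustration only) a 256-bit memory bus.
BUS_WIDTH_BITS = 256

def peak_bandwidth_gbs(gbps_per_pin: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak bandwidth in GB/s: per-pin data rate (Gb/s) * bus width (bits) / 8 bits per byte."""
    return gbps_per_pin * bus_bits / 8

print(peak_bandwidth_gbs(21))  # 672.0 GB/s at the previously rumored 21 Gbps
print(peak_bandwidth_gbs(23))  # 736.0 GB/s at 23 Gbps, roughly 9.5% more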
Next, another update for the NVIDIA GeForce RTX 4080 concerns the SKU's total board power (TBP). Previously, we believed it came with a 420 Watt TBP; however, kopite7kimi's sources now claim it has a 340 Watt TBP. This 80 Watt reduction is rather significant and could be attributed to NVIDIA optimizing the design to be as efficient as possible.
Sources:
kopite7kimi (Twitter), via VideoCardz
82 Comments on NVIDIA GeForce RTX 4080 Could Get 23 Gbps GDDR6X Memory with 340 Watt Total Board Power
3080 has about 30% higher TDP than 2080Ti.
3080 was built on 8nm compared to 12nm of 2080Ti...
That's not progress!
What are these CLOWNS up to?
Seriously
WHAT THE F**
I had a write-up about CPUs done, essentially reaming out Intel. But as I wrote what's above,...
WHAT THE F*
Anyone?
If people don't know how to do that, perhaps just buy an rx6400 and call it a day
I also hope that, if one day it's discovered that this is one of the greatest cons of our time, those driving the climate change narrative get jail time.
Also, I hear the earth is actually flat.
I'm not telling anyone what to buy or not; enjoy whatever you enjoy. But not that long ago there was pretty much only one player in the GPU space (AMD wasn't really competitive), and right now there are still only 2 (Intel has yet to deliver anything other than teasers), and they're both just using cheap tricks like moving the power needle to beat the other instead of actually innovating (I'd say NVIDIA more so, since AMD is closer to their usual higher target, which should be viewed as a disadvantage like it was for years).
1. At least do not raise the TDP; better yet, lower it bit by bit with newer architectures, and
2. software devs should work harder on better optimization.
These 2 things should be the trend of this time period. Not just because of energy prices, but much more due to humanity's energy consumption.
The US administration should regulate these companies along these 2 directives.
What a person believes they need for gaming (high-end GPUs...) is irrational.
Suggested solution:
Only the non-overclockable or underclocked GPUs and CPUs are sold in the USA.
The free versions in the rest of the world. :clap:
Like, if you want a 225-watt card, not only can you still buy them, but now they are the cheaper xx7x or xx6x tier cards. So you don't need the flagship anymore!
Do people just want the top-tier cards to be limited in power so they feel justified in spending $700 on an RTX 4060? Doesn't make any sense to me.
Jevons Paradox. Read about it.
That is one of the reasons overall global energy consumption keeps growing without stopping.
Seems a bit off-topic under an NVIDIA thread, BUT actually it is not, because I play games and I, and plenty of other people, care about the environment and about consuming less energy.
And I am not some idiot fan of Greta; I also get allergic reactions just from seeing her. We simply have to do things a little better and live a little more nature-conscious life. That's it. That is why I wrote at least do not raise the TDP. But they keep pushing it further over time. Not so far in the past, the GTX 1080 was only 180 W TDP.
(source: TPU GTX 1080) If the technology is developing and efficiency is growing, it is enough to cap the new card's TDP, because overall performance will still increase. But going by the historical data, Jevons paradox works...
All journalism has turned into the Jerry Springer rumor mill. (Politics, Tech, Economics, Crime, all of it!)
TPU at least used to say things like "Take this with a grain of salt" but that is becoming more rare here as well.
That said, when it comes to tech I do like to read some rumors :)~
So don't worry, GPUs aren't going to kill the world.
This is clearly stated to be a prediction. TPU apparently needs to stop assuming its forum members can read AND comprehend now?
From my life experience, I would say you can't stop climate change. There is no global dictatorship able to bring all people's consumption down to a scientifically calculated, sustainable level, stop population growth, and bring everyone to the same level. So what will happen: production costs in the West rise, and production moves to countries like China, which are much worse in environmental protection and ideology. Keep technological leadership, big economic resources, and free markets; then you will be able to react adequately to climate change and protect your population. You can see this everywhere: poor countries aren't able to protect their people and are destroying their nature more rapidly. If you interfere, you get a war. NVIDIA or AMD will change nothing here; those people have other problems to discuss than the power consumption of high-end GPUs.
Getting a Gaming X 1060 PC was a cool-running luxury, but soon I was back to the efficiency game with a cheap PNY 1080 with a crap cooler. Running that at its efficiency peak (1911-1923 MHz @ 0.9 V, 135 W max) taught me where current GPUs do their best and allowed it to run at 74°C in warm summer rooms instead of thermal throttling at 83°C. Plus, electronics are likely to last longer if not run at their very edge of capability. IMO giving up 5 or even 10% is an OK tradeoff for longer equipment life.
It does seem to me based on this 6600XT that using an Infinity Cache with a narrower memory bus allows for greater reduction in minimum power usage, or at least a closer to linear reduction in power usage when underclocking into the efficiency zone. In other words, I expect the RTX 3000 series not to be able to lower their power as much as the RX 6000 series because of their wider busses. Also using GDDR6X memory will kill these types of efficiency improvements. I'd love to see how a 6700XT or 6800/XT does at these lower clocks to see if they also benefit as much from running in their efficiency zone.
It would be interesting to see if there's a sweet spot of perhaps even larger cache and relatively narrow but fastest GDDR6 non-X memory bus which allows for greater FPS per watt when run in the efficiency zone. Like a 6800XT core count with a 192-bit bus but 256MB IC that performs like a 6800 but with 6700XT or even lower power requirements when run around 2000MHz cores? It'll never happen but I wonder if that would even be a viable performer or if instead it runs up against another performance brick wall that I'm not considering.
No matter the TDP of the next product I buy, and it could be from either brand, I'd certainly undervolt it to increase efficiency.
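For anyone who wants to try this kind of power capping themselves, here is a minimal sketch using NVIDIA's NVML Python bindings; it assumes the nvidia-ml-py package is installed and you have admin rights, and the 135 W target is only the illustrative figure from the post above, not a recommendation:

import pynvml  # pip install nvidia-ml-py

TARGET_WATTS = 135  # illustrative value from the post above, not a recommendation

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    # NVML reports power limits in milliwatts.
    lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    current = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
    print(f"Current limit: {current / 1000:.0f} W (allowed range {lo / 1000:.0f}-{hi / 1000:.0f} W)")

    # Clamp the target to what the vBIOS allows, then apply it (needs admin rights).
    target_mw = max(lo, min(hi, TARGET_WATTS * 1000))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    print(f"New limit: {target_mw / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()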
Satoshi Nakamoto is dead, you know; no one can make Bitcoin PoS. It will be mined for another 140 years.
Much more reasonable is that many people profit from the 'leaks' prior to launch, to generate traffic and to get your 'unbelievable and amazed' reaction. All PR stunts.
You may see many power consumption numbers and all are true, given a specific test. Each test looks at a specific parameter, and some of the tests need to lift all TDP constraints, hence 900 W+ for the 4090 and so on.
So the company gets endless, free exposure for its upcoming product by letting you see and talk about each test's power consumption and the 'fluctuation' between tests. Power consumption, TDP and global warming buzzwords all together, in the service of a very intentional marketing campaign. Nothing more.
And if they, as a bonus, made you think you have any degree of influence on the process before launch, well then, it's a pure win on their side, and you will probably earn them even more free PR in the future because you posted and, for sure, made a change.
;)
It's a never ending battle of the ever moving goal posts.
I don't have a 2080Ti, but I bet... if you dragged its slider down too...
Edit: every device should have a button that switches clocks and voltages to "most for least".
One thing needs to be taken into account to make this button work its best, though: so that things don't become unstable during their warranty periods (i.e., "broken" to most people), extra voltage over the minimum required at manufacture is needed as time passes.
Up to this point what has been done by AMD/Intel/nVidia/all semiconductor companies, is the use of a voltage offset. They ask: What's required for stability of the chip while running at its peak temperature before throttling (usually 100deg C)? 1.000V. So what's its voltage set to? 1.100V. This ensures that at month 6, when 1.007V is required, an RMA isn't happening.
Instead of doing this, there is no reason why voltage can't be optimized to increase over time depending on hours powered on, temperature, and utilization. To keep things REALLY simple they could even just go by time since manufacture and be really conservative with their ramp up - There would still be a span of like two years where they could ramp up from 1.000V to 1.100V - TONNES of power would be saved worldwide, even from that.
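As a toy illustration of that "ramp by time since manufacture" idea, here is a sketch of the interpolation; the 1.000 V floor, 1.100 V ceiling and two-year window are just the numbers from the post, not anything a vendor actually ships:

# Toy sketch of the time-based voltage ramp described above; values are illustrative only.
RAMP_HOURS = 2 * 365 * 24   # conservative two-year ramp window
V_MIN = 1.000               # voltage known stable at manufacture
V_MAX = 1.100               # worst-case end-of-warranty offset used today

def vcore_setpoint(hours_powered_on: float) -> float:
    """Linearly interpolate the core voltage between V_MIN and V_MAX over the ramp window."""
    progress = min(max(hours_powered_on / RAMP_HOURS, 0.0), 1.0)
    return V_MIN + (V_MAX - V_MIN) * progress

# After six months of 24/7 use the chip would get ~1.025 V instead of a flat 1.100 V.
print(f"{vcore_setpoint(6 * 30 * 24):.3f} V")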
My 2500K: I ran it at 4200 MHz undervolted by more than 0.25 V below its VID for 3700 MHz (the stock max turbo boost speed), so this isn't a new problem.
Today, because it's old and still in use (parents' HTPC), it gets 0.02 V less than specified by its VID and runs (Prime stable) at 4800 MHz with DDR3-2133 C10 at 1.675 V. VCCIO was increased to 1.200 V to keep the minimum 0.5 V differential that prevents damage to the IMC (people running Nehalem/Sandy/Ivy/Haswell/Broadwell with DDR3 at 1.65 V should have done this, but I don't think the majority did; I guess it didn't end up being too important lol). But back to my point: so much more voltage is used than is needed, and so much power is wasted because of it.
Manufacturer(s) will be sued for not delivering on marketing promises, because clock speeds/performance aren't at the level they were claimed to be for all cards of the same name.
The offset is partially true; they simply use the highest stable frequencies tested at the highest "rated" (by TSMC/Samsung/etc.) voltage numbers (because the guys in marketing like bigger numbers).
My solution:
Make cards with max boost limited to a lower frequency (1.5-2 GHz) and a voltage of 0.8-0.85 V, with additional V/F options available through an "OC" mode that can be enabled via driver_option/AIB_program/3rd-party_OCtool. Maybe display an appropriate warning about the implications (for longevity/thermals/noise/long-term performance/etc.) BEFORE using those higher-rated modes (or just do an "Uber" vBIOS switch on the PCB?).
BTW: ALL NV Boost 3.0 cards are capable of this (i.e. since Pascal), but NV simply likes to "push things to 11", because... reasons.
Proof?
ALL the GPUs mentioned contain stable, manufacturer-tested lower frequency/voltage combinations a particular card can use. They're right there in the V/F table, and you can "lock" the GPU to any of them via Afterburner (for example).
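Afterburner does this through its V/F curve editor; if you would rather script it, NVML exposes a clock-lock call that achieves a similar effect. This is a rough sketch, assuming a GPU/driver combination recent enough to support locked clocks and admin rights; the 1800 MHz ceiling is just an example, not a tested value:

import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    # Constrain the core clock to 300-1800 MHz; the card then runs the
    # lower voltage its factory V/F table pairs with that clock range.
    pynvml.nvmlDeviceSetGpuLockedClocks(handle, 300, 1800)
    input("Clocks locked - run your game/benchmark, then press Enter to restore stock behaviour...")
    pynvml.nvmlDeviceResetGpuLockedClocks(handle)
finally:
    pynvml.nvmlShutdown()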