Saturday, January 13th 2024

NVIDIA Corrects L2 Cache Spec for GeForce RTX 4070 SUPER

NVIDIA has recently revised the specification sheet for its upcoming GeForce RTX 4070 SUPER GPU, after a small mistake made its way into the review guide and marketing material. Team Green workers were likely in a rush to get everything ready for the RTX 40xx SUPER range's official unveiling at CES 2024, so a typo here and there is not unexpected. The RTX 4070 SUPER's AD104 GPU configuration was advertised as offering a 20% core count upgrade over the vanilla RTX 4070 (non-SUPER), but detail-sensitive sleuths were puzzled by the SUPER's listed L2 cache of 36 MB. Various 2023 leaks had suggested that 48 MB was the correct value, representing a 33% jump over the standard 4070's L2 pool. We must note that TPU's GPU database has had the correct spec entry since day one.
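The corrected figure is easy to sanity-check; a quick sketch in Python, using only the two cache sizes quoted above:

```python
# L2 cache sizes (MB) as given in the article
rtx_4070_l2 = 36       # vanilla RTX 4070
rtx_4070s_l2 = 48      # corrected RTX 4070 SUPER figure

# percentage increase of the SUPER's L2 pool over the vanilla card
increase = (rtx_4070s_l2 / rtx_4070_l2 - 1) * 100
print(f"{increase:.0f}% larger L2 cache")  # -> 33% larger L2 cache
```

The ~33% uplift matches what the 2023 leaks pointed to, versus the 36 MB (0%) figure the erroneous material implied.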
Sources: NVIDIA News, Wccftech, VideoCardz, Tom's Hardware

24 Comments on NVIDIA Corrects L2 Cache Spec for GeForce RTX 4070 SUPER

#1
Dr. Dro
I think the only SKU where NVIDIA has intentionally disabled L2 cache is the RTX 4090 (96 MB on the full AD102 vs. 72 MB on the 4090). The other models need the "boost" to be competitive with AMD's parts that feature wider and/or faster memory buses.
Posted on Reply
#2
bug
May I add it's very sad if that's indeed "best of CES 2024"?
Posted on Reply
#3
KrazyT
T0@st: We must note that TPU's GPU database had the correct spec entry since day one.
Posted on Reply
#4
Dr. Dro
bug: May I add it's very sad if that's indeed "best of CES 2024"?
I've spent my day fixing up a mid-2010 Mac mini that I got recently. It only really needed a new SSD and plugging the thermal probe cables that the previous owner made a mess of (which I've done to the best of my ability, I don't think it's 100%).

If anything I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle, consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
Posted on Reply
#5
Selaya
almost true.
however, i've usually found pre-skylake intels to be rather dragged down by their (lack of) igp performance these days. some of them even struggle to play back 1080p.
now, on an actual desktop you could just jam in a 1030 or something, but on a mac mini that's unfortunately not possible
Posted on Reply
#6
kondamin
Dr. Dro: I've spent my day fixing up a mid-2010 Mac mini that I got recently. It only really needed a new SSD and plugging the thermal probe cables that the previous owner made a mess of (which I've done to the best of my ability, I don't think it's 100%).

If anything I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle, consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
You're not wrong; for basic day-to-day stuff, a 12-year-old CPU and 8 GB of RAM are enough for 1080p.
Don't know about modern codecs, though.
Posted on Reply
#7
mb194dc
Dr. Dro: I've spent my day fixing up a mid-2010 Mac mini that I got recently. It only really needed a new SSD and plugging the thermal probe cables that the previous owner made a mess of (which I've done to the best of my ability, I don't think it's 100%).

If anything I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle, consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
Interesting you mention that; I still have one rig with a 4790K in it, 32 GB of DDR3-1800 and a 6800 XT. The CPU will be 10 years old on May 1st, and for my use case, OC'd at 4.7, I still don't see any reason to change it.

That gen has AVX2, and I haven't found anything that won't run on it.

If you go back 10 years before that, you're still in the single-core era with the FX-55...
Posted on Reply
#8
Macro Device
For AMD GPUs, the similarly functioning Infinity Cache contributes almost nothing to performance at 4K or higher resolutions, where the extremely low memory bandwidth hurts the most. Is it the same for NVIDIA's L2?
mb194dc: still don't see any reason to change it.
Any 2020s AAA game is a reason. You will see at least a 40 percent (about 120 percent median) performance boost in 1440p gaming if you upgrade to a current-gen i7.
Productivity workloads are also heavily dependent on CPU performance.

Wallet is still yours though.
Posted on Reply
#9
N/A
This is what NVIDIA tried to push as a 4080 last year, the only difference being 512 more CUDA cores, and it would have performed the same. In reality, a glorified 4060.
Posted on Reply
#10
RayneYoruka
Still seems to be a good gpu for many, finally
Posted on Reply
#11
dgianstefani
TPU Proofreader
Beginner Micro Device: For AMD GPUs, the similarly functioning Infinity Cache doesn't contribute almost anything to the 4K or higher resolution performance where extremely low memory bandwidth feels the worst. Is that the same for nVidia's L2?


Any 2020s AAA game is a reason. You will notice at least 40 percent (about 120 percent median) performance boost at 1440p gaming if you upgrade to the current-gen i7.
Productivity workloads are also heavily dependent on CPU performance.

Wallet is still yours though.
NV cards generally have somewhat better memory compression algorithms and memory subsystems.

These aren't really 4K cards though.
Posted on Reply
#12
Macro Device
dgianstefani: These aren't really 4K cards though.
This doesn't matter. What I'm asking is whether NVIDIA's L2 improves performance where it's most needed.
Posted on Reply
#13
R0H1T
Dr. Dro: I think the only SKU where NVIDIA has intentionally disabled L2 cache is the RTX 4090
No, that's the GTX 970 & probably the actual reason why they lost that class action(?) lawsuit!
Posted on Reply
#14
Dr. Dro
Beginner Micro Device: This doesn't matter. What I'm asking is does this nVidia's L2 improve performance when it's most needed.
Functionally, it's the same as AMD's Infinity Cache.
R0H1T: No that's GTX970 & probably the actual reason why they lost that class action(?) lawsuit!
I was referring to Ada, but GTX 970's issue wasn't its cache, it was the internal partitioning of the processor and how it was unable to address the full memory in a single segment due to the crossbar configuration.

www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation/2

GTX 970 is a 4 GB GPU, and all advertised cache and specifications are present and enabled. Inefficient, but it is one. AMD also lost the class action lawsuit regarding the FX processor's status as an 8-core processor. Which it indisputably is - sometimes it's much cheaper to settle than litigate. ;)
Posted on Reply
#15
Macro Device
Dr. Dro: Functionally the same as AMD's infinity cache
So it's basically a gimmick for people like me (target: 4K60 with best IQ possible).
Posted on Reply
#16
Dr. Dro
Beginner Micro Device: So it's basically a gimmick for people like me (target: 4K60 with best IQ possible).
I mean, it's what makes it possible for these cards, with their relatively insufficient memory bandwidth, to do what they do. Otherwise we'd need HBM's bus width with G6X's throughput just to break even at this point.
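To put rough numbers on that bandwidth point: peak memory bandwidth is simply bus width times per-pin data rate. The configurations below are illustrative assumptions on my part, not figures from the thread:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# Illustrative (assumed) configurations:
print(bandwidth_gbs(192, 21.0))   # a narrow 192-bit bus at G6X-class 21 Gbps -> 504.0
print(bandwidth_gbs(4096, 3.2))   # a wide 4096-bit HBM-class stack at 3.2 Gbps -> 1638.4
```

A large on-die L2 lets the narrow-bus card behave as if it had far more of that raw bandwidth, which is the trade-off being described.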

Cache size is super important on many architectures, and not just for GPUs. For example, Core 2's entire segmentation was based strictly on L2 cache size, and two identically clocked CPUs with the same core count would exhibit major performance differences. Take the E8400 and the E7600: they're both 3 GHz CPUs, the difference being that the 8400 has 6 MB of L2 and a 333 MHz FSB while the 7600 has 3 MB and a 266 MHz FSB (with a higher multiplier to match); the 8400 will walk on the 7600. Pentium at the time further reduced the cache to 2 MB, and Celeron to just 1 MB. On AMD's side, the Athlon II, for example, was just a Phenom II with the L3 entirely disabled. Cache has always been a significant step up in performance, but one that is costly in die area, heavily affects thermals, and is particularly sensitive to fabrication imperfections, so it's always been a very expensive addition to any processor design.
Selaya: almost true.
however, i've usually found pre-skylake intels to be rather dragged down by their (lack of) igp performance these days. some of them even struggle to playback 1080p.
now, on an actual desktop you could just jam in a 1030 or something, but on a mac mini that's unfortunately not possible
Just saw this, and I'd have to say this vintage Mac mini doesn't exactly qualify for that (the idea behind it was to get something that could run Snow Leopard), but... I expect people to keep their M1 Minis, for example, for a very long time. There's just little point in an upgrade if these things can play 4K video, browse the internet and even run light games nowadays, really.

Seems the only reason we need ever faster machines is to keep up with gaming demand, unless you're doing actual work with your PC.
Posted on Reply
#17
Macro Device
Dr. Dro: the 8400 will walk on the 7600
I owned both, and I can assure you it's a complete non-issue for the 7600 to reach a 333 MHz FSB, effectively running at 3.83 GHz, or +28% frequency. This way the E7600 walks on the E8400, slowly but surely. Temperature-wise it's almost a draw, with the E7600 running about 5 to 10 Watts hotter than the 6 MB L2 variant.
Dr. Dro: HBM's bus width and G6X's throughput
I wouldn't protest one bit against such real bandwidth, that's for sure. xD
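For anyone checking the clock math above: assuming the E7600's stock 11.5× multiplier (my assumption; the post only gives the FSB), the numbers line up, with the +28% reading as relative to the E8400's 3.0 GHz:

```python
multiplier = 11.5               # E7600 stock multiplier (assumed, not stated in the post)
fsb_mhz = 333                   # overclocked FSB from the post
oc_ghz = multiplier * fsb_mhz / 1000
gain = oc_ghz / 3.0 - 1         # relative to the E8400's stock 3.0 GHz
print(f"{oc_ghz:.2f} GHz, +{gain:.0%} over a stock E8400")  # -> 3.83 GHz, +28%
```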
Posted on Reply
#18
Dr. Dro
Beginner Micro Device: I owned both and I can assure you it's a complete non-issue for the 7600 to reach 333 MHz FSB, effectively running at 3.83 GHz, or +28% frequency. This way E7600 walks on the E8400 slowly but surely. Temperature wise it's almost a draw with E7600 being about 5 to 10 Watts hotter than the 6 MB L2 cached variant.

I wouldn't protest one bit against such real bandwidth, that's for sure. xD
It is, but you can't make up for the doubled cache. It was an interesting thing with Core 2 specifically: if your application wasn't sensitive to cache or memory bandwidth, your go-to CPU could be the Celeron E3200 and you'd pretty much be getting it all ;)
Posted on Reply
#19
bug
Dr. Dro: I've spent my day fixing up a mid-2010 Mac mini that I got recently. It only really needed a new SSD and plugging the thermal probe cables that the previous owner made a mess of (which I've done to the best of my ability, I don't think it's 100%).

If anything I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle, consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
You're not wrong. My last upgrades were 2500k->6600k and 6600k->12600k. Each time it felt like the biggest upgrade was actually the updated connectivity.
Posted on Reply
#20
RandallFlagg
Dr. Dro: I've spent my day fixing up a mid-2010 Mac mini that I got recently. It only really needed a new SSD and plugging the thermal probe cables that the previous owner made a mess of (which I've done to the best of my ability, I don't think it's 100%).

If anything I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle, consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
Agree, though I'd say Skylake or later is needed these days for a generally smooth internet experience. A lot of that has to do with encryption, which every website uses, as well as things like codec support mentioned earlier (if using the iGPU).

I think the average person, even an average gamer, can probably do 7-10 years on a PC lifecycle now if they buy high end. Even gamers, if they are using a typical 3060 or RX 6600, likely wouldn't see huge differences with an upgrade.
bug: You're not wrong. My last upgrades were 2500k->6600k and 6600k->12600k. Each time it felt like the biggest upgrade was actually the updated connectivity.
Yup, USB 3.x / Thunderbolt and PCIe 4 are the main reasons to upgrade right now, IMO. We've been in a pattern of diminishing returns for a long, long time. AI might change that and give a new reason to upgrade; otherwise it's all very incremental.
Posted on Reply
#21
R0H1T
Dr. Dro: I was referring to Ada, but GTX 970's issue wasn't its cache, it was the internal partitioning of the processor and how it was unable to address the full memory in a single segment due to the crossbar configuration.
No, the issue was the cache being partially disabled; Nvidia also got sued & lost because they lied about the cache size ~ the 980 & 970 both had 4 GB of VRAM, but the 970 could use the full bandwidth of only 3.5 GB because of the partially disabled cache.
Posted on Reply
#22
theouto
Dr. Dro: I've spent my day fixing up a mid-2010 Mac mini that I got recently. It only really needed a new SSD and plugging the thermal probe cables that the previous owner made a mess of (which I've done to the best of my ability, I don't think it's 100%).

If anything I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle, consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
Yeah, my mum is using a laptop with a sixth-gen i7 (don't ask which one, because I don't remember), and really the only reason she'd change it is that the battery is fucking up; the compute itself is fine. She doesn't need more than that, and barely anyone does. The only reason we would need more is that either the OS or the apps we use just randomly got harder to run for no reason whatsoever. Which is funny to think about, considering that CPUs with "AI" are being sold to us, and that applications are starting to be built around those capabilities, even if not needed.
Posted on Reply
#23
lemonadesoda
Ok, so what was the wrong value, and what is the correct value, or is this supposed to be a game of hide and seek!?
Posted on Reply
#24
Macro Device
lemonadesoda: Ok, so what was the wrong value, and what is the correct value, or is this supposed to be a game of hide and seek!?
36 MB was wrong.
48 MB is correct.
Posted on Reply