Saturday, January 13th 2024
NVIDIA Corrects L2 Cache Spec for GeForce RTX 4070 SUPER
NVIDIA has revised its specification sheet for the upcoming GeForce RTX 4070 SUPER GPU after a small mistake made its way into the company's review guide and marketing material. Team Green staff were likely in a rush to get everything ready for the RTX 40xx SUPER range's official unveiling at CES 2024, so a typo here and there is not unexpected. The RTX 4070 SUPER's AD104 GPU configuration was advertised as offering a 20% core count upgrade over the vanilla RTX 4070 (non-SUPER), but detail-sensitive sleuths were puzzled by the SUPER's listed L2 cache of 36 MB. Various 2023 leaks had pointed to 48 MB as the correct value, representing a 33% jump over the standard 4070's L2 pool. We must note that TPU's GPU database has had the correct spec entry since day one.
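For reference, the corrected figure lines up with the uplift math. The quick sketch below reproduces the two percentages using the commonly reported shader counts (5888 and 7168), which are assumptions on our part rather than figures quoted in NVIDIA's materials.

```python
# Rough sanity check of the uplift figures: the shader counts below are the
# commonly reported values for the two cards, not numbers from the article.
vanilla_cores, super_cores = 5888, 7168
vanilla_l2_mb, corrected_l2_mb = 36, 48

core_uplift = (super_cores / vanilla_cores - 1) * 100
l2_uplift = (corrected_l2_mb / vanilla_l2_mb - 1) * 100

print(f"Core count uplift: {core_uplift:.1f}%")  # ~21.7%, marketed as ~20%
print(f"L2 cache uplift:   {l2_uplift:.1f}%")    # ~33.3%, matching the 48 MB figure
```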
Sources:
NVIDIA News, Wccftech, VideoCardz, Tom's Hardware
24 Comments on NVIDIA Corrects L2 Cache Spec for GeForce RTX 4070 SUPER
If anything, I came to the realization that... perhaps around 10 years ago, home computing reached a bit of a "plateau" of sorts. There's nothing in our "daily lives" that average computers won't handle; consequently, they start to get very boring very fast, especially when prices are too high. The aforementioned Mac is a perfectly capable home entertainment system, if all you do is watch movies, casually browse the internet and do office work.
However, I've usually found pre-Skylake Intels to be rather dragged down by their (lack of) iGPU performance these days. Some of them even struggle to play back 1080p.
Now, on an actual desktop you could just jam in a GT 1030 or something, but on a Mac mini that's unfortunately not possible.
Don't know about modern codecs, though.
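One quick way to answer the codec question is sketched below: it asks ffmpeg which hardware-decode backends it can see and then times a pure software decode of a sample clip (the file name is only a placeholder). If the decode finishes faster than the clip's running time, the CPU alone keeps up with that codec at that resolution.

```python
import subprocess
import time

# List the hardware acceleration backends this ffmpeg build can use
# (e.g. videotoolbox on a Mac, vaapi/qsv on an Intel Linux box).
hwaccels = subprocess.run(
    ["ffmpeg", "-hide_banner", "-hwaccels"],
    capture_output=True, text=True, check=True,
).stdout
print(hwaccels)

# Time a software-only decode of a test clip to the null muxer.
clip = "sample_1080p_hevc.mp4"  # hypothetical test file
start = time.perf_counter()
subprocess.run(
    ["ffmpeg", "-hide_banner", "-loglevel", "error",
     "-i", clip, "-f", "null", "-"],
    check=True,
)
print(f"Decoded {clip} in {time.perf_counter() - start:.1f} s")
```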
That gen has AVX2, and I haven't found anything that won't run on it.
If you go back 10 years before that, you're still in the single-core era with the FX-55...
Productivity workloads are also heavily dependent on CPU performance.
Wallet is still yours though.
These aren't really 4K cards though.
www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation/2
The GTX 970 is a 4 GB GPU, and all advertised cache and specifications are present and enabled. Inefficient, but it is one. AMD also lost the class-action lawsuit regarding the FX processor's status as an 8-core processor, which it indisputably is - sometimes it's much cheaper to settle than litigate. ;)
Cache size is super important on many architectures, and not just for GPUs. Core 2's entire segmentation, for example, was strictly on its L2 cache size, and two identically clocked CPUs with the same core count would exhibit major performance differences. Take the E8400 and the E7600: both are 3 GHz CPUs, the difference being that the E8400 has 6 MB of L2 and a 333 MHz FSB while the E7600 has 3 MB and a 266 MHz FSB (with a higher multiplier to match), and the E8400 will walk all over the E7600. The Pentiums of the time further reduced the cache to 2 MB, and the Celerons to just 1 MB. On AMD's side, the Athlon II was just a Phenom II with the L3 entirely disabled. Cache has always been a significant step up in performance, but one that is costly in die area, heavily affects thermals and is particularly sensitive to fabrication imperfections, so it has always been a very expensive addition to any processor design.
Just saw this, and I'd have to say this vintage Mac mini doesn't exactly qualify for that (the idea behind it was to get something that could run Snow Leopard), but I expect people to keep their M1 Minis, for example, for a very long time. There's just little point in an upgrade if these things can play 4K video, browse the internet, and even run light games nowadays.
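To see why those cache pools matter so much, here's a minimal working-set sweep: it streams over progressively larger buffers and reports the sustained read rate, which typically steps down each time the buffer outgrows a cache level. The specific sizes and the use of NumPy are arbitrary choices for illustration, not tied to any particular CPU above.

```python
import time
import numpy as np

# Sweep working-set sizes from well inside a typical L2 slice to well past
# any on-die cache; the sustained read rate usually drops at each boundary.
for size_mb in (1, 4, 16, 64, 256):
    n = size_mb * 1024 * 1024 // 8           # number of float64 elements
    data = np.random.rand(n)
    best = float("inf")
    for _ in range(5):                       # repeat passes, keep the fastest
        start = time.perf_counter()
        data.sum()                           # streaming read over the buffer
        best = min(best, time.perf_counter() - start)
    gbps = size_mb / 1024 / best
    print(f"{size_mb:4d} MB working set: {gbps:6.1f} GB/s")
```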
Seems the only reason we need ever-faster machines is to keep up with gaming demand, unless you're doing actual work with your PC.
I think the average person, even an average gamer, can probably do a 7-10 year PC lifecycle now if they buy high end. Even gamers, if they are using a typical 3060 or RX 6600, likely wouldn't see huge differences with an upgrade. Yup, USB 3.x / Thunderbolt and PCIe 4 are the main reasons to upgrade rn IMO. We've been in a pattern of diminishing returns for a long, long time. AI might change that and give a new reason to upgrade; otherwise it's all very incremental.
48 MB is correct.