Friday, December 9th 2022
NVIDIA GeForce RTX 4070 and RTX 4070 Ti Detailed Specs Sheet Leaks
It turns out that NVIDIA has not one, but two new GeForce RTX 40-series "Ada" SKUs on the anvil this January. One of these is the RTX 4070 Ti, which we know to be a rebranding of the RTX 4080 12 GB following the backlash that forced NVIDIA to "unlaunch" it. The other, as it turns out, is the RTX 4070, with an interesting set of specifications. Based on the same 4 nm AD104 silicon as the RTX 4070 Ti, the new RTX 4070 is significantly cut down. NVIDIA enabled 46 out of the 60 streaming multiprocessors (SM) physically present on the silicon, which yields 5,888 CUDA cores, the same count as the previous-gen RTX 3070, compared to the 7,680 that the maxed-out RTX 4070 Ti enjoys.
The GeForce RTX 4070, besides 5,888 CUDA cores, gets 46 RT cores, 184 Tensor cores, 184 TMUs, and a reduced ROP count of 64, compared to 80 of the RTX 4070 Ti. The memory configuration remains the same as the RTX 4070 Ti, with 12 GB of 21 Gbps GDDR6X memory across the chip's 192-bit memory interface, working out to 504 GB/s of memory bandwidth. An interesting aspect of this SKU is its board power, rated at 250 W, compared to the 285 W of the RTX 4070 Ti, and the 220 W of its 8 nm predecessor, the RTX 3070.
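The leaked figures are easy to sanity-check from the SM count and memory configuration alone. A minimal back-of-the-envelope sketch in C, assuming Ada's 128 FP32 CUDA cores per SM and the usual bus-width-times-data-rate formula for bandwidth:

#include <stdio.h>

int main(void)
{
    /* RTX 4070 per the leak: 46 of 60 SMs enabled on AD104. */
    int sms = 46;
    int cuda_per_sm = 128;                 /* Ada: 128 FP32 lanes per SM (assumed) */
    printf("CUDA cores: %d\n", sms * cuda_per_sm);           /* 5888 */

    /* Bandwidth = bus width in bytes x per-pin data rate. */
    double bus_bits = 192.0;
    double data_rate_gbps = 21.0;          /* 21 Gbps GDDR6X */
    printf("Bandwidth: %.0f GB/s\n", bus_bits / 8.0 * data_rate_gbps);  /* 504 */
    return 0;
}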
Source:
harukaze5719 (Twitter)
93 Comments on NVIDIA GeForce RTX 4070 and RTX 4070 Ti Detailed Specs Sheet Leaks
The same people will say that it is AMD's fault that Nvidia is pricing its top products very high while butchering the lower products in performance, but not in pricing. Prices are moving up across the whole product stack, to avoid offering high performance/$ options to those consumers who dare to pay less than $1,500.
Nvidia, what?! Your initial '4080' is equal in bandwidth to a full blown 4070 that should have been an x60?!
That's an official pass on Ada for me
Also, RTX 4070 should have had 9728 cuda cores and 16GB GDDR6X in the first place! RTX 4080 should have had 12288 cuda cores and 20GB GDDR6X!
Memory bandwidth these days is misleadingly measured by just multiplying the GDDR bus width by its speed. Other things play a big role, like L0/L1/L2/L3 cache sizes. Take RDNA2/3 for example: their effective memory bandwidth is way above 1 TB/s, up to a few TB/s (RDNA3).
You can't really calculate it from the bus alone.
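To illustrate what "effective" bandwidth means here, a toy model in C: a large last-level cache serves some fraction of requests at on-die bandwidth, and the rest fall through to GDDR. The hit rate and cache bandwidth below are illustrative assumptions, not measured RDNA2/3 figures.

#include <stdio.h>

int main(void)
{
    double dram_bw  = 504.0;    /* GB/s, e.g. 192-bit @ 21 Gbps         */
    double cache_bw = 2000.0;   /* GB/s, assumed on-die cache bandwidth */

    for (double hit = 0.0; hit <= 0.81; hit += 0.2) {
        double effective = hit * cache_bw + (1.0 - hit) * dram_bw;
        printf("hit rate %3.0f%% -> ~%4.0f GB/s effective\n",
               hit * 100.0, effective);
    }
    return 0;
}

Vendor "effective bandwidth" claims are typically derived from this kind of hit-rate-weighted average at a given resolution, which is why they can't be read off the bus width.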
Yes, for the most part the 70 tier had almost the same config as the 80 of the previous gen:
2070 vs 1080: 2,304 vs 2,560 CUDA cores
1070 vs 980: 1,920 vs 2,048 CUDA cores
The 3070 carries exactly the same SM count as the 2080, with double FP32 issue per SM for 100% more nominal compute.
But go figure: since the 40 series has roughly 50% higher clock speed, they decided to keep the same count, and the 4080 is only a little better.
I expected the 4070 closer to 8,704 CUDA cores, but fair enough, we have to be more realistic here and leave the delusions aside.
- Biggest selling factor for new GPUs is having to use fake scaling and fake frames technology to make unoptimized games playable
- Charging abysmal amounts for said fake rendering technology just to make these unoptimized games playable.
Man, we are definitely the biggest cucks in the world. AMD is no better either.
I guess future budget gaming rigs will be a generation or two behind in GPUs, since the current gen won't offer anything of real value. Unless you are going to sell your wife and firstborn to pay for said hardware and fake frame-generating tech.
Let's just hope game developers can remove their heads from their own asses and optimize their games. But who am I kidding?
Once again, bad balance in the Nvidia stack, showing us these aren't built to be optimal products but are planned to go obsolete in a few years. Cut down for the market, not for the product. We have seen this before in the upper mid-range. Staying far away.
Every x70 card since Maxwell beat the previous gen's x80:
970 was ~25% faster than the GTX 780
1070 ~40% faster than the 980
2070 ~15% faster than the 1080
3070 ~25% faster than the 2080
and now this 4070 is ~10% slower than the 3080. A glorified 4050 Ti.
3060 Ti 4,864 vs 4070 5,888: +21% CUDA cores, -20% ROPs
3070 Ti 6,144 vs 4070 Ti 7,680: +25% CUDA cores, -17% ROPs
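For anyone checking the math, those deltas fall out of the core and ROP counts directly. A quick sketch in C, with the Ampere ROP figures (80 and 96) taken as assumptions from public spec listings:

#include <stdio.h>

/* Percentage change going from 'from' to 'to'. */
static double pct(double from, double to)
{
    return (to - from) / from * 100.0;
}

int main(void)
{
    printf("3060 Ti -> 4070   : %+.0f%% CUDA, %+.0f%% ROPs\n",
           pct(4864, 5888), pct(80, 64));
    printf("3070 Ti -> 4070 Ti: %+.0f%% CUDA, %+.0f%% ROPs\n",
           pct(6144, 7680), pct(96, 80));
    return 0;
}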
For texture filtering I just assume people run 16x AF, as it is so cheap. I remember it being a performance concern almost 20 years ago, but I haven't seen it cost much since. Running without AF would usually be a blurry mess, unless the far textures are very high resolution, in which case you'll get a headache-inducing flickering nightmare instead. AF isn't perfect though; you can get a very visible "banding" effect where it's either flickering or blurry. Games have the ability to control this, but success will vary depending on the configuration.
Are there particular games which are known to do this badly? It's called bilinear and trilinear, which refers to how many axes it interpolates with. Read it like bi-linear and tri-linear, and it makes sense. ;)
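Since forcing AF is essentially a one-liner at the API level, here is a minimal sketch in C of clamping a bound texture to 16x anisotropy through OpenGL's GL_EXT_texture_filter_anisotropic extension (core in GL 4.6); it assumes a current GL context and a texture already bound to GL_TEXTURE_2D.

#include <GL/gl.h>

/* Constants from GL_EXT_texture_filter_anisotropic, defined here in
 * case the system headers don't provide them. */
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
#endif
#ifndef GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
#endif

/* Request up to 16x AF (or the driver maximum, whichever is lower)
 * for the texture currently bound to GL_TEXTURE_2D. */
static void enable_anisotropic_filtering(void)
{
    GLfloat max_aniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
    if (max_aniso > 16.0f)
        max_aniso = 16.0f;
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
}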
I wouldn't be too worried about ROP count. Throughput is what matters, and I haven't seen Nvidia bottleneck their cards with raster throughput yet.
The fact is, whatever company was producing a shit line-up got flak for it at the time. There are indeed strong followings on both brands. But the overwhelming, vast majority simply considers the options every gen that gets released: compares the offers, talks about the changes from one gen to the next, compares perf/$ and feature set, is or is not affected by past experiences, and then makes a choice.
I've been on the Nvidia boat for a long time, for the simple fact they had better products. Since Turing, that 'norm' has been fading away, slowly but surely - perf/$ is taking a nosedive since RTX, TDPs go up, performance is achieved by stacking many proprietary technologies on top of games, and some shit just won't even run at all now without extensive Nvidia TLC. That's pretty much where I conclude this company doesn't offer me what I want anymore. As much as I didn't want to get tied into Nvidia's profit scheme bullshit with Gsync by buying a monitor with it, I have the exact same sentiment wrt all the proprietary must haves they push today to feed the insatiable RT monster for a few realtime rays. Matter of fact, f*ck that whole strategy entirely. Their GPUs get worse every gen and I'm not supporting that nonsense.
Meanwhile, AMD's overall quality on GPUs has massively improved since RDNA1, RDNA2 was near perfect that way and baked on a much better node, and RDNA3 seems to continue that trend. At the same time, the products seem to be priced a little less into insanity, the feature set is more than sufficient, and they're not 3-slot behemoths that require a spiderweb of adapters. More importantly though, it is AMD that truly pushes the gaming market forward, for the simple fact that they own the consoles and Nvidia does not. The most important gamer market share is effectively with AMD. Nvidia has taken over PC share for the larger part (last I saw was 80+%), and yet they still can't define it properly. RT support still isn't commonplace and most implementations are lackluster at best; console ports won't have it and they just got refreshed; more games are still released without it than with it; and it's just a small set of effects every time. When we get fully path-traced games, the performance takes an immense nosedive (see Portal) and requires yet more proprietary nonsense. We're still solidly in early-adopter land - who cares if AMD loses 10% FPS with RT on compared to Nvidia? I really don't.
Far more interesting for the long-term advancement of graphics cards is a technology like MCM/chiplets. That's where the real movement in the market will come from. And it's also an approach that enables enough horsepower to run RT; right now the chiplets are identical, but they probably don't have to remain so. There's a lot of new fruit on this tree; there is none left on the CUDA tree in a monolithic floor plan. It's like Intel's Core - way past EOL, but still pushed forward.
If you don't like that honesty, that is entirely your problem, but I would take a long look in the mirror for the real fix.
The underlying emotion there is that you feel the need for peer pressure to support your idea that Nvidia is still king of the hill, but that principle seems to be under pressure. It feels uncomfortable to you; you prefer ignorance-is-bliss, or not hearing an opinion that doesn't align with yours. But how is that relevant to anyone else?
I really miss the pre-GCN days of AMD/ATI, when they offered great value in the upper mid-range to lower high-end segments, with great availability and often priced below MSRP.
But I think we're pretty low on that fanbase on TPU.
People voice an opinion, we have yet to see how it materializes in sales; the fact is though, Ada is on shelves, not sold out. And AMD is going to be pretty competitive with the 4080 that's not being bought.
I'd be pleased to switch to AMD if they weren't always behind, panting and trying to catch up.
About corporations having to do what they have to do... yeah. Okay. So we as customers have to be complacent and beg for the next iteration of ass rape? I'll pass, thanks. Markets function because customers vote with wallets and convert their sentiment into action (or inaction).
As for being 'behind', I agree, AMD was always playing catch-up. But they're not today - it's a mistake to think so. They're technologically leaps and bounds ahead of their two largest competitors, having built experience in the future of chip technology and scaling options. Whatever happened to Nvidia's MCM whitepaper? And Intel's stacking technologies? They're still pushing monolithic behemoths. And even if they do shrink, they still need to expand TDP to meet their perf targets - that's not really progress in my book, that's just pushing more volts through ever bigger chips, and it's reaching the end of the line.
As for RT development: sure, the innovation is neat. At the same time it's a tool to create demand and pull people into 'adoption'. If the GPUs had remained priced sanely (or even just aligned to inflation, fine!), I would be all-in on supporting it with my wallet too.
But that's not what happened. This is what happened: you're paying through the nose even for something as shitty as a 3050; that isn't inflation. This is what happens when a market/niche is cornered by a single company: practices that don't benefit us in the slightest.
Progress in perf/$ is deeply negative from Ampere to Ada, and Ampere is still sold at a premium on top of it.
If you want to see progress in that direction... you do you. I don't.
Also, there is another long-term consideration. If you value gaming, it would help you and us if it weren't getting priced out of comfortable reach for the small wallets. When a 3050 starts at 300, that's quickly moving into territory where gaming is for the haves, and the rest are have-nots. What do you think comes next? More RT content? You can safely forget about it. What you'll get is predatory practices aimed at those last idiots still doing PC gaming on their >1k GPUs, and far fewer games that push the boundary (there is no market left for them). State-of-the-art games cost money, so they need big market adoption. The games that still get released are going to be console ports (driven by AMD) or lackluster altogether.
Nvidia is actively damaging everything we value with its current strategy. But wee innovation! They have a few more frames in RT!
And I'm not an idealist like the aforementioned "AMD cultists" are. I'm not going to support the small guy just to give the finger to the big guy. I might consider AMD if they get their shit together and pull off some miracle innovation, or if Nvidia seriously f*cks up.
Regarding your perf/$ chart: are you sure 1440p and the RTX 3050 are a fair comparison? Who in their right mind would buy a 3050 for 1440p?
And they also occupy almost the same place in the line-up: that of the half-sized die, as opposed to the 384-bit ~600 mm² flagship.
192-bit, 276 mm², 12,000 million transistors (GA106)
192-bit, 295 mm², 35,800 million transistors (AD104), triple the transistor count.
Ah, forgot the 3050 is 128-bit, but such a card based on AD104 may exist, who knows.
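Just to put those two die configurations side by side, a quick sketch in C using the figures quoted above (die sizes rounded as in the post):

#include <stdio.h>

int main(void)
{
    /* 192-bit midrange dies of the two generations, as quoted above. */
    double old_mm2 = 276.0, old_mtr = 12000.0;   /* millions of transistors */
    double new_mm2 = 295.0, new_mtr = 35800.0;

    printf("Transistor count ratio: %.2fx\n", new_mtr / old_mtr);
    printf("Density: %.0f -> %.0f MTr/mm2 (%.2fx)\n",
           old_mtr / old_mm2, new_mtr / new_mm2,
           (new_mtr / new_mm2) / (old_mtr / old_mm2));
    return 0;
}

That works out to roughly 3x the transistors and about 2.8x the density in a similar 192-bit footprint.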