Thursday, September 14th 2023
NVIDIA GeForce RTX 4070 Could See Price Cuts to $549
NVIDIA's GeForce RTX 4070 12 GB graphics card finds itself embattled against the recently launched AMD Radeon RX 7800 XT, and board partners in NVIDIA's ecosystem plan to do something about it, reports Moore's Law is Dead. A GIGABYTE custom-design RTX 4070 Gaming OC graphics card has been listed on the web at $549, deviating from the $599 MSRP for the SKU, which hints at what new pricing for the RTX 4070 could generally look like. At $549, the RTX 4070 would still sell at a $50 premium over the RX 7800 XT, probably banking on better energy efficiency and features such as DLSS 3. NVIDIA partners could take turns pricing their baseline custom-design RTX 4070 products below MSRP on popular online retail platforms; we don't predict an official price cut that applies across all brands and forces them all to lower their prices to $549. We could also see NVIDIA partners review pricing for the RTX 4060 Ti, which faces stiff competition from the RX 7700 XT.
Source: Moore's Law is Dead (YouTube)
130 Comments on NVIDIA GeForce RTX 4070 Could See Price Cuts to $549
But a feature has scarcely ever been worth the effort, because it'll be superseded well before you feel you've got your worth out of the GPU you bought it for.
And I have, many times.
DLSS 3 Frame Generation might be great, but I bought into DLSS 1, got DLSS 2 for free, and then got no 3.
So limited use in the 600+ games I own!
But it doesn't suit most games I play. Or, more succinctly, I choose other ways to gain frames if I need to, such is my revulsion for what a 1080° spin (DiRT Rally 2.0, among others) does with it on. And if you're not spinning out sometimes, are you sure you're trying hard enough?
I play online FPS too, in groups, so stable high FPS and high accuracy outweigh all other needs, and there FSR or DLSS isn't good enough; none of them are. And before anyone says it: Reflex plus native is again faster than Reflex plus DLSS.
It's all personal taste really, so I'm not arguing that my way is the way; I'm again saying: you do you.
You don't tell your customer "dream on", because there are other suppliers and you will lose sales :D
People are not happy at all, if you ask me.
videocardz.com/newz/first-nvidia-geforce-rtx-4070-cards-drop-to-549-8-below-msrp
Better results generally come from more sophisticated formulas. That means increased computational requirements.
Person A: "Hey, there's a new shading model called Gouraud. It looks better than flat shading."
Person B: "Why do I need that? I'm happy with flat shading."
Years later.
Person A: "Hey, there's an even better shading model called Phong. It's more realistic than Gouraud."
Person B: "Nah, I upgraded to Gouraud a couple of years ago. I'm fine with that."
A few more years pass.
Person A: "A mathematician by the name of Jim Blinn has altered Phong shading."
Person B: "I'll check it out. How do you spell his name?"
Person A: "B-L-I-N-N"
DLSS, upscaling, frame generation, ray-trace reconstruction, all part of the evolution of computer graphics.
There's a reason why we don't see flat shading in computer games anymore.
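Since we're on the subject of "more sophisticated formulas": here's a rough Python sketch of my own (not from any engine; the normal, light, view, and shininess values are made up purely for illustration) of the specular term before and after Blinn's alteration.

```python
# Rough illustration of Phong vs. Blinn-Phong specular terms.
# All scene values below are arbitrary assumptions for the example.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(light, normal):
    # R = 2(N.L)N - L, with L pointing toward the light
    d = dot(normal, light)
    return tuple(2.0 * d * n - l for n, l in zip(normal, light))

N = normalize((0.0, 1.0, 0.0))   # surface normal
L = normalize((0.3, 0.8, 0.5))   # direction to the light
V = normalize((0.0, 0.5, 1.0))   # direction to the viewer
shininess = 32

# Classic Phong: specular highlight from the reflection vector R
R = reflect(L, N)
phong_spec = max(dot(R, V), 0.0) ** shininess

# Blinn's alteration: use the half-vector H between L and V instead
H = normalize(tuple(l + v for l, v in zip(L, V)))
blinn_spec = max(dot(N, H), 0.0) ** shininess

print(f"Phong specular:       {phong_spec:.4f}")
print(f"Blinn-Phong specular: {blinn_spec:.4f}")
```

Blinn's half-vector sidesteps computing a full reflection vector per shading point, one small example of how the math keeps getting reworked as hardware and expectations change.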
Yes, there might not be a use case for you today for DLSS 3 Frame Generation or DLSS 3.5 Ray Reconstruction. But someday there probably will be, in a use case (e.g., a game title) that you care about. The problem is you just don't know when that will happen.
DLSS 1.0 was not embraced at launch. Now all three GPU manufacturers (Nvidia, AMD, Intel) provide upscaling as a tool for developers to tap into, often with great effect. Many now consider DLSS 2.0 to have results superior to conventional TAA.
For sure the technology is improving, often in software. And it's not just about who has better/more transistors. A lot of these implementations are heavily influenced by the quality of the developer tools used to harness this technology.
The reason we have GPUs in the first place is that CPU cores aren't efficient at handling the mathematical calculations for 3D graphics. Many tasks and functions formerly handled by the CPU are now handled by specialized ASICs.
Want to see your CPU in action doing 3D graphics calculations? Just run Cinebench. Note the speed that the images are generated. Imagine playing a game with that rendering speed.
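For a rough feel of the arithmetic involved, here's a small pure-Python sketch (entirely my own, with an arbitrary frame size, light direction, and a fake per-pixel normal) that does just one Lambert dot product per pixel of a single 1080p frame on the CPU:

```python
# Crude illustration: one diffuse (Lambert) shading evaluation per pixel of a
# single 1920x1080 frame, done sequentially on the CPU in pure Python.
# Frame size, light direction, and normals are arbitrary example values.
import math
import time

WIDTH, HEIGHT = 1920, 1080
light = (0.3, 0.8, 0.5)
inv = 1.0 / math.sqrt(sum(c * c for c in light))
lx, ly, lz = (c * inv for c in light)

start = time.perf_counter()
total = 0.0
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Fake a per-pixel normal so the math can't be hoisted out of the loop
        nx, ny, nz = x / WIDTH, y / HEIGHT, 1.0
        inv_n = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
        # Lambert term: max(N . L, 0)
        total += max((nx * lx + ny * ly + nz * lz) * inv_n, 0.0)
elapsed = time.perf_counter() - start

print(f"Average brightness: {total / (WIDTH * HEIGHT):.3f}")
print(f"One frame of trivial shading took {elapsed:.2f} s on the CPU; "
      f"a game needs 60+ such frames per second, with far heavier shading.")
```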
If your phone tried to decode video just using CPU cores, the battery life would be minutes, not hours.
When you watch YouTube on your fancy computer, it is using a hardware video decoder on your graphics card, not the fancy Ryzen 7800X3D CPU, and at a fraction of the power (and thus heat). If you forced software decoding on the CPU, you'd see a big power spike and complain about the CPU fan being too loud.
At some point, someone came up with algorithms for ray tracing. Originally this was done in software on CPUs. Took forever, not useful for real-time graphics. So it was reserved for still images or a short film (if you had the budget and time) like early Pixar shorts.
At some point, someone said, "hey, let's build a circuit that will handle these calculations more efficiently." Today, we have smartphone SoCs with ray-tracing cores.
Someday in the not too distant future, we'll have some other form of differentiated silicon. MPEG-2 encoders used to be custom ASICs; today's encoders handle a wide variety of encoding schemes, the latest being AV1. Someday there will be something else that succeeds AV1 as the next generation. Performance will suck on today's encoding architecture and will be better with silicon specially modified to speed things up.
A graphics card purchase is a singular event in time, but a use case may pop up next month that wasn't on the radar last month. We saw this with the crypto mining craze. We also found out what happens when crypto mining policies change, leaving a bunch of cards utterly useless.
I remember buying a Sapphire Pulse Radeon RX 580 new for $180 (down from the original launch MSRP of $230). Six months later, during the height of the mining craze, that card was going for 3x what I paid for it.
Before OpenGL was a standard, it was proprietary IrisGL. It's not like OpenGL was instantly welcomed and adopted by everyone the moment the OpenGL ARB pressed the "publish" button. OpenGL wasn't always "just there" and it's not going to last forever either. Years ago Apple deprecated OpenGL (which was invented by SGI, a defunct 3D graphics company whose heyday was in the Nineties).
My computer (Mac mini M2 Pro) doesn't support OpenGL. My Mac mini 2018 (Intel CPU) might if I installed an old version of the operating system.
And DirectX is really a Microsoft standard that has been forced on the world due to their near-monopolistic practices and stranglehold on the PC industry (you remember that antitrust investigation 20 years ago?).
A few years ago Vulkan wasn't on anyone's radar. Today it's important. Someday it'll fall to the side, overtaken by more modern graphics technology that has developed to address the changing needs of the industry and its users.
There are basic concepts that span multiple architectures, but even within the products of a single company there isn't full compliance. As an AMD guy you should know that DirectX 12 isn't fully and evenly implemented on every single GPU, even within one generation.
And designing any computer architecture is a combination of hardware and software. The people who understand the hardware best will have the best software. So Nvidia has hardware engineers who work with software engineers, the latter writing drivers, APIs, etc. Apple has done this to great effect.
Remember that just because the industry picks a standard doesn't mean that it will be embraced by all forever and ever. How many DVI and VGA connectors does your RX 7800 XT have? Does it have a VirtualLink port (looks like USB-C)?
I'm not an AMD guy. Only my main gaming rig is fully AMD at the moment, but I've got two HTPCs that are both Intel+Nvidia, and I've got lots of various parts lying around from every manufacturer. I generally prefer AMD's open source approach towards new technologies, but that doesn't mean I restrict my choices to only one brand (although prices seem to be doing that for me anyway).
I hope that makes sense. :)
I'm not saying the 7800 XT isn't a great card, because it is. I'm saying that it depends on what the user needs and wants out of their gaming experience.
Wait 2 years and buy it used.
Those are whiners doing what they do best. The rest of us live in the real world.
Not always. Frequently, methods to do the same work in better, more efficient ways are developed.
GPUs (graphics processing units) are optimized for parallel processing, which allows them to perform many calculations at once. This is in contrast to CPUs (central processing units), which are typically optimized for sequential processing. Because of this, GPUs are able to perform many more floating point operations per second (FLOPS) than CPUs. Additionally, GPUs have specialized hardware, such as multiple cores and larger caches, that are optimized for the types of calculations that are commonly used in graphics processing, such as matrix operations. This further increases their ability to perform FLOPS. GPU computing is faster than CPU computing because GPUs have thousands of processing cores, while CPUs have comparatively fewer cores.
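To put a number on that, here's a small Python sketch of my own (it assumes NumPy is installed, and the matrix size is an arbitrary choice) that runs the same matrix multiplication once as a sequential triple loop and once vectorized; a GPU pushes the same parallel idea much further with thousands of cores:

```python
# Rough illustration of sequential vs. parallel/vectorized math.
# Matrix size is an arbitrary example; the numbers will vary by machine.
import time
import numpy as np

N = 128  # kept small so the naive version finishes in a few seconds
a = np.random.rand(N, N)
b = np.random.rand(N, N)

# Naive triple loop: one scalar multiply-add at a time, purely sequential
start = time.perf_counter()
c = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        s = 0.0
        for k in range(N):
            s += a[i, k] * b[k, j]
        c[i, j] = s
naive_s = time.perf_counter() - start

# Vectorized multiply: the same 2*N^3 floating point operations, done in bulk
start = time.perf_counter()
c_fast = a @ b
fast_s = time.perf_counter() - start

assert np.allclose(c, c_fast)  # same result, vastly different speed
flops = 2 * N ** 3
print(f"Naive loop: {flops / naive_s / 1e9:.4f} GFLOPS")
print(f"Vectorized: {flops / fast_s / 1e9:.4f} GFLOPS")
```

On a typical desktop CPU the vectorized call comes out several orders of magnitude faster, and dedicated GPU silicon (thousands of cores plus matrix units) widens that gap again; that is the whole premise behind moving graphics math off the CPU.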
Actually, it is the opposite. The "whiners" do live in the real world, while those who support unreal pricing have lost touch with the current world situation.
It's called stagflation, the worst kind, if you still remember what that is.
There is one solution: vote with your wallet and don't buy (like the "whiners" actually do). That is what's behind the all-time low in graphics card shipments, and if this trend goes on, Nvidia will be forced to exit the graphics card market.