Thursday, November 17th 2022
NVIDIA Plans GeForce RTX 4060 Launch for Summer 2023, Performance Rivaling RTX 3070
NVIDIA is reportedly planning to ramp its GeForce "Ada" generation into the high-volume performance segment by Summer 2023 with the introduction of the GeForce RTX 4060. The card is expected to launch around June 2023 and will be based on the 4 nm "AD106" silicon, the fourth chip based on the "Ada Lovelace" graphics architecture. Wolstame, a reliable source for NVIDIA leaks who works as Lenovo's Legion gaming desktop product manager, predicts that RTX 4060 performance could end up matching that of the current RTX 3070 at a lower price-point.
This should make it a reasonably fast graphics card for 1440p AAA gaming at high-ultra settings, with ray tracing thrown in. What's interesting is that NVIDIA is expected to extend the DLSS 3 frame-generation feature even to this segment of graphics cards, which means a near-100% frame-rate uplift can be had. Other predictions include a board power in the range of 150-180 W and a 10% generational price increase, which would give the RTX 4060 a launch price similar to that of the RTX 3060 Ti (USD $399).
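To put the frame-generation claim in rough numbers, here is a back-of-the-envelope sketch (not from the source): it assumes DLSS 3 inserts roughly one generated frame per rendered frame, minus a small hypothetical overhead, and the base frame rate is an arbitrary example.

# Back-of-the-envelope sketch of the "near-100% frame rate uplift" claim.
# Assumption (not from the source): frame generation inserts roughly one
# AI-generated frame per rendered frame, minus a small overhead.

def displayed_fps(rendered_fps: float, overhead: float = 0.10) -> float:
    """Estimate displayed FPS with frame generation enabled."""
    return rendered_fps * 2 * (1 - overhead)

if __name__ == "__main__":
    base = 60.0  # hypothetical rendered frame rate at 1440p, high-ultra + RT
    print(f"{base:.0f} FPS rendered -> ~{displayed_fps(base):.0f} FPS displayed")
    # ~108 FPS displayed, i.e. close to a 2x (near-100%) uplift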
Sources:
harukaze5719 (Twitter), VideoCardz
165 Comments on NVIDIA Plans GeForce RTX 4060 Launch for Summer 2023, Performance Rivaling RTX 3070
Prices in Germany in euro:
Radeon RX 6800 - 499.00
Radeon RX 6800 XT - 648.00
Radeon RX 6900 XT - 699.00
Radeon RX 6950 XT - 880.00
In this group the RX 6800 is the sweet deal if you don't care about the relatively painful performance drop.
AMD sells Navi 21 from 499 to 880 euro depending on the binning. So, there is plenty of room for price reductions on 6800 XT / 6900 XT and 6950 XT.
Like I said - I'm not an NVIDIA stakeholder or Jensen's dad, I want a competitively priced, yet high-quality product. NVIDIA cannot deliver that with this generation. They may be high-quality, but the price is an absurdity and the black-box ecosystem makes it even worse. I'll pass, and this is from an RTX 3090 owner.
I owned a Radeon VII back in the day. Lovely card, but Vega 20 was never a gaming GPU. It's little wonder that specific processor became the bedrock for the CDNA architecture. It also did not cost thousands of dollars, which goes a long way toward excusing it.
But as I've been saying, the solution is more competition, real competition. AMD needs to ship enough volumes to make a dent in Nvidia's sales across the markets, then the prices will drop rapidly. This is just nonsense.
Nvidia isn't limiting volume. And this is where you venture into nonsense territory.
The chip segmentation is pretty arbitrary and varies from generation to generation, so the term "midrange chip" makes no sense. Even the naming of the chips just refers to the order they've been designed, it bears no meaning whether it will be a mid-range product or not. So if AD103 is performing like a high-end chip, then it's a high-end chip.
Keep in mind that back in the Kepler era, GK104 was used for the GTX 680 (because GK100 was faulty). In the Maxwell generation, GM204 was used for the GTX 980, the original top model of the lineup. The same goes for Pascal: the GTX 1080 (GP104) was the top model for almost a year until the GTX 1080 Ti arrived.
On the last bit, nonsense? I wasn't the one who announced a card and then unlaunched it; that was NVIDIA. No matter how you slice it, even in those older generations (the GTX 980 wasn't the top model; there were two GPUs above it, the 980 Ti and the Titan X), the xx104-class cards have always been the middle-of-the-pack ones. Even with Kepler, the GTX 600 series was quickly complemented by the 700 series, which introduced the GK110 that was sizably faster than the GK104, much like the GTX 500 series (with the 580) launched only 8 months after the GTX 480 and fixed the 400 series' terrible thermals. It was wise of NVIDIA at the time not to repeat the GF100.
Anyway, the ill-fated 4080 12 GB (full AD104) was no different; relative to the full AD102 it has around 40% of the shader count, and NVIDIA quickly realized that it wasn't going to stick. If they had gone through with it, the 4080 12 GB would have been laughed out of every publication, which would only have hyped up the 7900 XT instead. The 103 segment is new, and in Ampere it was only used in the RTX 3060 Ti in a very cut-down configuration, or in the mobile RTX 3080 Ti. Similarly to the AD103 compared to the AD102, the GA103 was a smaller and less powerful processor than the GA102. You could call it high-end, but it was never intended to be in the leading pack either. It'd work... if AMD hadn't crashed the party with the 7900 XT in that same segment, which gives you a Navi 31 chip with an MCD and some shaders disabled, and which should perform more than competitively with the RTX 4080 16 GB.
We know there is a huge performance difference, but they can argue that the RTX 3060 is much cheaper because, you know...
Look, there is an 80-20 market-share split against AMD.
AMD has always been the cheaper, value option and despite this, it is not enough to improve the market situation of the company.
So, AMD needs to step up with something completely different, plus the discounts, of course.
From the 6800 (even the 6800 XT at $520) and below, AMD is way better value than the NVIDIA counterparts, but hey, NVIDIA mindshare doesn't understand how an RX 6600 is light-years ahead of an RTX 3060 in value.
It's like a voodoo black magic or something...
NVIDIA's marketing machine is extraordinarily effective. When someone mentions ray tracing, what comes to mind? RTX. When someone brings up upscaling solutions, what comes to mind? DLSS. By successfully capturing the public's attention, they have built a perceived trust - and are now capitalizing on the brand name.
Being the technology leader in HPC/visualization and AI has its perks. AMD lags behind here. These are not marketing tricks; AMD's future depends on catching up here. These technologies increase in importance from generation to generation. For the mid-tier gamer, this may not matter; if you only want rasterization, AMD is the better solution. If you want more than just playing games with the card, or like RT effects/eye candy, nVidia has a clear advantage.
If the 4080 16 GB came in at $700, as some here are demanding, AMD could pack up. If the 4080 16 GB's rasterization is about the same as the $1,000 7900 XTX, then $1,100 for the 4080 would be OK compared to AMD, given the better RT and Tensor cores. We will see.
Not to mention I'm not too sure about that either. After the stores sold the initial batch of 4090s that got to my country, I haven't seen any restocks occur yet...
And no, NVIDIA just seized the moment. They didn't invent raytraced graphics. They just were the first to market with a product ready for it. AMD, Intel, and NV worked with Microsoft to design the specification. Also... the 4080 16GB isn't enough to make AMD pack up even if they didn't have something better than the 6900 XT on the way. It's just not that good a product.
You failed to explain your statement. I said: at $700 for the 4080, the 7900 would be DOA at $900-1,000 if they have the same rasterization power. Do you have any argument? Wrong. GPUs without hardware-supported RT can only be found in the entry-level class. The higher the price, the more important RT performance and broad software support become. AMD will lose the enthusiast customer base if they don't catch up on RT, like they lost the pro segment. It won't be long before the first games developed on new-generation game engines hit the market. Then it gets serious.
RT is still too far away from being a reality; you can't run it without very deep and aggressive upscaling (DLSS and similar)...
I am also an enthusiast, and I don't care about ray-tracing - traditional lighting is good for me, the games have much more serious problems than only the lighting.
This is why I can't wait to order a brand new AMD Radeon RX 7900 XT 20 GB.
:D:p
Ehh, nah, kinda. It's a vague half-truth. Saying it like that makes it sound like nVidia just straight up "invented" RT for gaming/real-time, which isn't really the truth.
Do you think MS just scraped together DXR in the month between the 2080 release and the DXR public release? Or that the PS5 and Xbox just added (weak) RT capability in the two years in between (or the 6000-series RDNA2; inb4 "it is weak": yes, that's not the point)?
Yes, nVidia was first to market with an RT-"capable" product and did some extremely good marketing work by using the RTX branding for their initial RT push. (lmao, I quoted this in between it being edited, so it's a bit mangled)
Too early to say if it'll get the "PhysX" treatment, but I doubt it. Given that RT isn't a vendor-specific product but is exposed through the APIs (DXR, Vulkan RT), it'll probably spread and be usable in the future (a matter for discussion, of course). As it looks now, it'll probably end up either as the sole, full-on light source, or used in a hybrid way like now but more heavily (or both).
Also, I don't think it is going anywhere soon. DirectX Raytracing is not a closed NVIDIA-only thing, and since Ampere it's supported from the bottom up through every tier - and it works.
Where is the evidence of Nvidia holding back the lower SKUs? (beyond nonsense from some YouTube channels)
It's normal that lower chips follow in sequence. I thought people would remember this by now. Renaming a product due to market backlash? How is this relevant to your claims? The GTX 980 was the top model for about half a year, and it remained in the high-end segment until it was succeeded by Pascal. The mid-range cards of the 600 series were using both GK106 and GK104 chips.
The 600-series was "short lived" compared to the current release tempo. Back then Nvidia used to release a full generation and a refreshed generation (with new silicon) every ~1.25-1.5 years or so.
Geforce GTX 480 was delayed due to at least three extra steppings.
And back in the 400-series they used a GF100 chip in the GTX 465, which scaled terribly.
You should spend some time looking through the List of Nvidia GPUs. The naming is arbitrary; in one generation a 06 chip is the lowest, in others the 08 chip is. What they do is design the biggest chip in the family first, then "cut down" the design into as many chips as they want to, and name them accordingly: 0, 2, 3, 4, 6, 7, 8. Sometimes they even make it more complicated by adding 110, 114, etc., which seem to be minor revisions of 100 and 104, respectively.
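For illustration only, here is how that convention maps onto the current "Ada" family as known or rumored at the time of this article; this is a hypothetical sketch, not an official NVIDIA mapping, and the AD104/AD106/AD107 positioning reflects the unlaunched RTX 4080 12 GB and the rumors discussed above.

# Illustrative sketch of the die-naming convention described above.
# Not an official mapping: product assignments are partly rumor as of late 2022.
ada_dies = {
    "AD102": "flagship (RTX 4090)",
    "AD103": "high-end (RTX 4080 16 GB)",
    "AD104": "upper mid-range (the unlaunched RTX 4080 12 GB)",
    "AD106": "mid-range (rumored RTX 4060, per this article)",
    "AD107": "entry-level / mobile (rumored)",
}

# A lower numeric suffix generally means a bigger die within one generation.
for die, tier in sorted(ada_dies.items(), key=lambda kv: int(kv[0][2:])):
    print(f"{die}: {tier}")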
So listen and learn, or keep digging… This might be your impression, but it doesn't match the reality. Back in the ATI days, they used to offer higher value in the upper mid-range to lower high-end segments, but since then they have been all over the place.
The Fury cards didn't start things off well: low availability and a high price. They were followed by the RX 480/580, which were very hard to come by at a good price, compared to the competing GTX 1060, which sold in massive numbers and remained widely available, even below MSRP at times. The RX Vega series was even worse; most have now forgotten that the $400/$500 price tag initially came with a game bundle, and it took months before the cards were somewhat available close to that price. Over the past 5+ years, AMD's supplies have been too low. Quite often the cheaper models people want are out of stock, while Nvidia's counterparts usually are in stock. This is why I said AMD needs to have plenty of supply to gain market share.
We need to stop painting Nvidia/AMD/(Intel) as villains or heroes. They are not our friends, they are companies who want to make money, and given the chance, they will all overcharge for their products. RTX is their term for the overarching GPU architecture:
I doubt it will go away until their next major thing.
The GTX 465 was die-harvested to move inventory. It's not that it used GF100 because it was designed around it; it was just a way to shift bad bins of higher-end cards. No wonder it sucked. The GF100 at best felt like a prototype of the GF110, and I should know, 'cause I had 3 480s in SLI and then 2 580s back in the day.
The 11x-class chips haven't appeared since the GK110, which already goes back nearly a decade at this point. They are intra-generational refreshes, same as the -20x chips such as GK208 and GM204/GM200. I don't know why you brought up the correlation between the HBM cards (low-yield, expensive tech) and their midrange successors; both the GTX 1060 and Polaris sold tens of millions of units and are still among the most widely used GPUs of all time. The RX 480's very low $199 launch price may have been a little hard to find at the beginning, but for a couple of years after, once they had lost their shine and before the crypto boom, you could easily get them for next to nothing.
GA106 was not the smallest Ampere chip, for example. The GA107 was also used in some SKUs and in the mobile space, and there is also a GA107S intended for the embedded market. It's not really a hard rule, but the tiers are clearly delineated.
I... don't see how any of this was productive?