Sunday, May 14th 2023
NVIDIA GeForce RTX 4060 Ti to Feature a PCI-Express 4.0 x8 Bus Interface
NVIDIA has traditionally refrained from lowering the PCIe lane counts on its mid-range GPUs, doing so only with its most entry-level SKUs. This is about to change with the GeForce RTX 40-series: a VideoCardz report says that the upcoming GeForce RTX 4060 Ti, based on the AD106 silicon, comes with a PCI-Express 4.0 x8 host interface.
While this is still plenty of interface bandwidth for a GPU of this market segment, comparable to that of PCI-Express 3.0 x16, using the RTX 4060 Ti on older platforms, such as 10th Gen Intel Core "Comet Lake," or even much newer processors such as the AMD Ryzen 7 5700G "Cezanne," would run the GPU at PCI-Express 3.0 x8, as the GPU physically lacks the remaining eight lanes. The lower PCIe lane count should simplify board design for AIC partners, as it reduces the PCB traces and SMDs associated with each individual PCIe lane. Much like DRAM chip traces, PCIe traces are meticulously designed in EDA software (and later validated) to be of equal length across all lanes, for signal integrity.
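As a rough sanity check on those bandwidth figures, here is a minimal sketch (Python, ours rather than the report's) of the usual per-direction PCIe bandwidth math:

```python
# Back-of-the-envelope, per-direction PCIe bandwidth.
# Gen 3 signals at 8 GT/s per lane, gen 4 at 16 GT/s; both use 128b/130b encoding.

GT_PER_LANE = {3: 8.0, 4: 16.0}  # giga-transfers per second, per lane
ENCODING = 128 / 130             # 128b/130b line-code efficiency

def pcie_gbps(gen: int, lanes: int) -> float:
    """Approximate usable one-way bandwidth in GB/s."""
    return GT_PER_LANE[gen] * lanes * ENCODING / 8  # 8 bits per byte

print(f"gen4 x8:  {pcie_gbps(4, 8):.1f} GB/s")   # ~15.8 - the RTX 4060 Ti's native link
print(f"gen3 x16: {pcie_gbps(3, 16):.1f} GB/s")  # ~15.8 - a full gen 3 x16 slot
print(f"gen3 x8:  {pcie_gbps(3, 8):.1f} GB/s")   # ~7.9 - the card on a gen 3 platform
```

In other words, the card's native gen 4 x8 link matches a gen 3 x16 slot, but drops to half that on a gen 3 platform.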
Source:
VideoCardz
58 Comments on NVIDIA GeForce RTX 4060 Ti to Feature a PCI-Express 4.0 x8 Bus Interface
It will use the first 8 lanes when installed in a full-length (primary) PCIe x16 slot.
If that slot shares lanes with a second slot, the remaining eight lanes get routed to the second slot, so the bifurcation works out to 8+8 or 8+4+4, as you'd expect.
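To illustrate that lane-sharing behavior, here's a small hypothetical sketch (Python; the mode labels are typical BIOS options, not specific to any board) of how an x8 card trains in each bifurcation mode:

```python
# Illustrative only: the link width an x8 GPU negotiates in common x16-slot
# bifurcation modes (mode labels are typical BIOS options, not board-specific).
BIFURCATION_MODES = {
    "x16":      [16],       # one device gets all 16 lanes
    "x8/x8":    [8, 8],     # lanes shared with a second slot
    "x8/x4/x4": [8, 4, 4],  # e.g. one x8 slot plus two x4 devices
}

def negotiated_width(card_lanes: int, slot_lanes: int) -> int:
    # The link trains to the narrower of what the card and the slot offer.
    return min(card_lanes, slot_lanes)

for mode, slots in BIFURCATION_MODES.items():
    width = negotiated_width(8, slots[0])  # the x8 card sits in the first slot
    print(f"{mode:9} -> card trains at x{width}")
```

The card comes out at x8 in every mode: since it only has eight lanes to begin with, splitting the slot to x8/x8 costs it nothing.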
www.forbes.com/profile/jensen-huang-1/?sh=3594c8ed3a6c
It's only a benefit to the consumer if those reduced costs are also passed on - and that's where I suspect most people will be angry, because the 4060 Ti is cut down to reduce cost in so many ways - VRAM, TDP, VRAM bus width, PCIe lane count - and yet the rumoured pricing is still sky high.
Maybe Nvidia are pulling their sly old tactic of leaking a high price all over the media and then "surprising" people at the last minute with a lower MSRP at launch. Honestly, even if the 8 GB card launches at an unexpectedly low $399 it's still not a great deal, and at $449 it's DOA for anyone with a clue about the way the game industry is dropping support for older consoles and moving VRAM requirements higher. It happens with every generation of consoles, but this is the first time the PC industry has been caught with its pants down - mainstream VRAM size has barely budged at all in 7-8 years.
I bet the saving is about $5-10 on a product that will probably cost at least $400.
Wouldn't surprise me if agreements have been made with board vendors to accelerate obsolescence of Gen 3.
If you're on a classic Ryzen 5 3600 or i5-10400F then a used 3060Ti or RX 6700 10GB is going to be a better match for your system. Not only will you save a lot of cash, you'll also get all 16 gen3 lanes.
- GeForce GTX 750: ~16 GB/s (gen3 x16)
- GeForce GTX 950: ~16 GB/s (gen3 x16)
- GeForce GTX 1050: ~16 GB/s (gen3 x16)
- GeForce GTX 1650: ~16 GB/s (gen3 x16)
- GeForce RTX 3050: ~16 GB/s (gen4 x8)
- GeForce RTX 4060 Ti: ~16 GB/s (gen4 x8)
So not only are they giving us the same bus bandwidth as a low-end 2012 card, they've also moved that low-end bus bandwidth up two SKUs, from the xx50 series to the xx60 Ti series.
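For what it's worth, a quick back-of-the-envelope check of that list (a sketch in Python; the card/interface pairings are taken as given above):

```python
# Per-direction bus bandwidth for each card in the list above.
cards = [
    ("GTX 750",     3, 16),
    ("GTX 950",     3, 16),
    ("GTX 1050",    3, 16),
    ("GTX 1650",    3, 16),
    ("RTX 3050",    4,  8),
    ("RTX 4060 Ti", 4,  8),
]
for name, gen, lanes in cards:
    gtps = 8.0 if gen == 3 else 16.0      # GT/s per lane for gen 3 / gen 4
    gbs = gtps * lanes * (128 / 130) / 8  # 128b/130b encoding, 8 bits per byte
    print(f"{name:12} gen{gen} x{lanes:<2} = {gbs:.1f} GB/s")  # all ~15.8 GB/s
```

Every entry lands at roughly 15.8 GB/s each way, which is the whole point: the interface bandwidth hasn't moved in a decade, only the tier it's attached to has.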
"NVIDIA GeForce RTX 4060 Ti to be limited to a PCI-Express 4.0 x8 Bus Interface"
That seems more accurate and less like marketing BS.
Guys, get with the program... What on earth did you think Jensen was cooking in that oven of his? He's left one too many clues for us to fail at this game.
the 4090 is the 4080
the 4080 is the 4070
the 4070 is the 4060
the 4060 is the 4050
The mathematical approach to unravelling the brainteaser: take the MSRP of the former and divide it in half to correctly identify the latter's price.
4080 - $800
4070 - $600
4060 - $300
4050 - $150/$200
So not bad, the 4050 is an 8 GB VRAM card skating on a 4.0 x8 bus interface. What more could you ask for?
I thought cracking the code would get me a free GPU or something... he just threw a fist in the air and smiled.
So why are you paying double, you might ask? Cheeky! You'll have to wait for part #2 of the great wheresmycar deciphering escapades.
Is it easier for a multi-billion-dollar company to add 1% to its manufacturing cost, or for a consumer to add 50% to their budget? Maybe that makes you think about it more rationally (not to mention that a $100 board is typically junk these days).
The reality instead is that either the GPU gets downgraded to pay for the Gen 4 board, or the consumer loses performance on their purchase, or they forgo the purchase altogether. I actually think your scenario would be the least likely outcome.
You're suggesting that people with an older board and CPU should overspend on a GPU that's too fast for their old platform?
In an ideal world, yes - this would have 16 lanes, but it doesn't, so you have to analyze based on what we are getting, not on what you'd like to be getting.
If someone is tight on cash, overpaying for a GPU isn't the right answer. IMO they should buy a cheaper GPU that better matches their existing CPU performance and save the money.
If someone is okay with buying a $450 GPU, I don't personally feel that they're tight on cash, because a $450 GPU is definitely entering "luxury purchase" territory, and people who are barely scraping by don't typically make short-lived luxury purchases.