
ASUS has a GeForce RTX 4060 Ti Card with an M.2 SSD Slot

Updated the article to note that it was actually Asus' China GM that was in the video and added another picture of the card.
 
The only thing motherboards need to do is have their BIOS turn the feature on and configure it properly. So, as I said, it's all software.

The CFG configuration pins have to be routed from the CPU if I recall correctly, and those are RSV/optional... you know how things roll with those.
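If this design behaves like a plain bifurcated riser (my assumption, not something ASUS has spelled out), the only OS-visible effect of that BIOS switch is whether the drive on the card enumerates at all. A minimal sketch to check that, assuming Linux with sysfs mounted:

from pathlib import Path

# PCI class 0x010802 = mass storage (01) / NVM (08) / NVMe programming interface (02)
NVME_CLASS = "0x010802"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if (dev / "class").read_text().strip() == NVME_CLASS:
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}: NVMe controller {vendor}:{device}")

If the drive sitting in the card's M.2 slot never shows up in a listing like this, the slot's upper lanes were never split off for it.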
 
Since the card is PCI-e x8, why not use the unused lanes for something else? Neat! :)
Because now if you swap this card out in 2 years, you're left with something lacking a slot on your mobo.

I'm not seeing it. Storage does last longer than a midrange 8 GB piece of crap. Just buy storage to suit your board, and vice versa; it's not rocket science.

this is the same company that
Has done all sorts of misguided shit and still pushes features on boards that nobody wants. I count this among them, to be fair. It's new. It's pointless.
 

I will say it still could be a way for ITX builds to get an extra M.2. Since the card is only x8, and very few people actually use bifurcation on ITX, it could be something there, even if I doubt the card would last that long.
 
Yeah, ITX... maybe they should have just given it a 1~1.5-slot treatment then, too... but all x60 (Ti) cards are massive.
 
If I ran any NVMe, it would be on a daughterboard with a heatsink and fan like a GPU, if not water cooled; otherwise I will stick to SATA...

The vast majority of NVMe SSDs up until PCIe 5.0 didn't generate enough heat to warrant this. And the new cutting-edge, fastest SSDs that produce tons of heat don't bring you anything in terms of real-world speedups, except in benchmarks. And the next-gen PCIe 5.0 drives are already being hyped as more energy efficient and much cooler. So it's just a couple of drives right now that are excessively hot.
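Easy enough to sanity-check on your own drives instead of arguing about it. A rough sketch, assuming Linux with the kernel's NVMe hwmon support (temp1_input is the composite sensor, in millidegrees):

from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    if (hwmon / "name").read_text().strip() != "nvme":
        continue  # skip non-NVMe sensors (CPU, motherboard, GPU, ...)
    temp_file = hwmon / "temp1_input"  # composite temperature, millidegrees Celsius
    if temp_file.exists():
        print(f"{hwmon.name}: {int(temp_file.read_text()) / 1000:.1f} °C")

Run it under your usual workload; if the number stays in the 40-60 °C range, a heatsink is a nice-to-have at best.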
 
Lol. I've read through 3 pages of debates about bifurcation, but it seems like most people forgot about the most important thing: the target audience. Who is this thing for? Why?
The concept is so stupid, I can't even think of an edge-case situation where something like this would make sense.
If someone has to rely on a GPU in order to have an extra M.2 slot, there's something seriously wrong with that someone's planning.
Heck, there are ITX boards with 2x M.2 slots, there are PCIe adapter cards for all kinds of situations, and there are cheap ATX motherboards which can give you not only two slots but also a full x8 PCIe slot for an NVMe RAID card, etc.
If someone argues accessibility: even in a worst-case scenario where your M.2 slot is blocked by the GPU, it takes less effort to take out the GPU and unscrew a drive than to take out the GPU and partially disassemble it to get at the drive (unless someone is stupid enough to do it while the GPU is mounted and potentially bend/damage the PCIe slot).
Active cooling off the GPU heatsink also has questionable benefits, especially in real-world usage (where realistically you can run any NVMe drive without a heatsink, or at most cooled by the stock candybar foil and cat farts).

P.S. Though, with the CN market I've already given up on finding logic in stuff. Sometimes they make weird things that are absolutely genius, and sometimes it's stupid to the point of not even being funny.
I guess that's just the way it is.
 
I see two kinds of logic:
1. The GPU heatsink might provide adequate cooling to a PCIe 5.0 SSD without the need for excessive M.2 cooling, like mini blower fans or being part of a water loop. The only problem with this logic is that the card is PCIe 4.0 (see the quick bandwidth numbers after this list).
2. A proof of concept that may not even enter mass production in the near future, or at all.
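For point 1, the back-of-the-envelope numbers, assuming the M.2 slot gets four of the slot's Gen4 lanes (theoretical link rates with 128b/130b encoding, not measured figures):

def x4_bandwidth_gb_s(gt_per_s: float) -> float:
    # 4 lanes, 128b/130b encoding, 8 bits per byte -> one-direction GB/s
    return gt_per_s * 4 * (128 / 130) / 8

print(f"PCIe 4.0 x4: ~{x4_bandwidth_gb_s(16):.1f} GB/s")  # ~7.9 GB/s cap on this card
print(f"PCIe 5.0 x4: ~{x4_bandwidth_gb_s(32):.1f} GB/s")  # ~15.8 GB/s a Gen5 drive could use

So a Gen5 drive behind this slot would be capped at roughly half its sequential ceiling, which is the mismatch point 1 is getting at.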

Out of Asus's weird ideas, I'm more interested to see the motherboard-integrated PCI-e power connector + slot-powered GPU combo.
 
I'm not aware of any optional pins in a PCI Express slot: https://en.wikipedia.org/wiki/PCI_Express#Pinout
Though I won't claim to be an expert.

There are actually multiple methods of configuring it, depending on the PCIe root complex location, i.e. CPU or bridge. They range from strapping resistors to I/O mapping via various methods. Those methods imply hardware requirements for the motherboard to support it. Let's not forget that many boards switch between slots automatically for 8 + 8 mode and do not allow another, foreign config; they already sport I/O remapping at the HW/UEFI level.
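Whichever method a given board uses, the end result is visible from the OS as separate links trained behind the same physical slot. A sketch of how to eyeball that under Linux/sysfs (the class-code filtering and the x8 + x4 expectation are my assumptions for this particular card):

from pathlib import Path

def read(p: Path) -> str:
    return p.read_text().strip() if p.exists() else "n/a"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = read(dev / "class")
    if not (cls.startswith("0x03") or cls == "0x010802"):
        continue  # keep only display controllers (0x03xxxx) and NVMe controllers
    upstream = dev.resolve().parent.name  # the bridge/root port this endpoint trained against
    print(f"{dev.name} (class {cls}) behind {upstream}: "
          f"x{read(dev / 'current_link_width')} @ {read(dev / 'current_link_speed')}")

With the feature working you'd expect the GPU at x8 and the card's SSD at x4, each behind its own port; with it off, the SSD line simply isn't there.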
 
Then buy a higher-capacity SSD.

Density doesn't always make sense price-wise. Same reason why I own my Sabrent: I got it second-hand for a good price, and that's the only reason I have it.
 
Edge case: an SFF user needs a 3rd SSD :p
"Have a pie and eat it too". You know SATA still exists, right?
Density doesn't always make sense price-wise. Same reason why I own my Sabrent: I got it second-hand for a good price, and that's the only reason I have it.
I'm pretty sure having a third NVMe slot on your board was a big part of your purchase decision, so not "the only reason".
 
Density doesn't always make sense price-wise. Same reason why I own my Sabrent: I got it second-hand for a good price, and that's the only reason I have it.
If you're buying ITX and complaining about a lack of NVMe slots, you don't care about price.
 
"Have a pie and eat it too". You know SATA still exists, right?

I'm pretty sure having a third NVMe slot on your board was a big part of your purchase decision, so not "the only reason".

Not really, I was planning to go B650 actually, but then Komplett in Norway had a sale on the Asus Prime X670-P WiFi, so I took that board instead of a higher-priced B650.
 
I've been wanting AMD to take their Radeon SSG concept and expand it into their upper-range GPUs. Pair it with AMD's SAM (Smart Access Memory) and DirectStorage, and even a small SSD could offer extra performance. Doubly so if it's something like an Intel 3D XPoint NVMe.
 