
MSI GeForce RTX 5070 Ti Ventus 3X OC

IMO, I'd much rather go for the MSI Gaming Trio OC or Suprim X version, especially the Gaming Trio OC White, because it has better durability, cooling, and aesthetics than the Ventus 3X OC.
 
GPU prices went up because of the shortages during Covid and the mining boom. For this gen, Nvidia deliberately launched with very little stock to hold prices up.

It's unlikely to be sustainable longer term; once stock is plentiful, prices could well collapse, but we'll see.

I actually predict we won't even see "plentiful stock" this generation, or the next ones, if demand for server AI components remains this high.

Nvidia can play the De Beers of the gaming GPU world: they create the hype, they develop the desirable product, but then they create the scarcity that drives prices up.

I think they wouldn't even bother making gaming GPUs this generation - every single one made is a potential $70,000 AI accelerator less.

But I think they know this hype won't last forever; even without an AI bubble collapse, there will be more competition in this lucrative business. So they remain in the gaming ring, and if AI bursts or demand specifically for Nvidia accelerators lessens, they can suddenly offer an amazing price/performance increase, just as they did with the RTX 30x0 series compared to the previous generation - that $699 RTX 3080 was a direct result of low revenue after the crypto crash in early 2020 and the abysmal sales of RTX 20x0 cards. It was only after they had finalized everything that the new crypto-mining surge shafted us gamers.

So for the foreseeable future I don't expect any good value for gamers; we aren't needed. Cards will remain scarce, AIBs will bitch and moan, and several will go out of business, just like EVGA...
 
Thanks for the review. These cards also seem to OC well.
I noticed the remark about memory overclocking and the 375 MHz cap. Is this also the case for the 5080 and 5090?
 
I'm confused. VRAM consumption was never part of my reviews.

Did you switch from allocation to usage in game reviews? I know it was something like that. Because *both* are important. You can't just go by usage, because there are too many variables from swapping (or not).

If you go by usage only, you need to use games that will also overflow the buffer. Do you understand why this is important?

_______________________________
RE: margins.

A $15k 4 nm wafer. RAM costs roughly its speed in dollars (about $28 for a 2 GB module at 28 Gbps; 3 GB modules probably about 25% more, if the ratio is similar to GDDR6)... Hmm. (At least) 100 good dies per wafer... so 150 + (8)*28 = yep, $750!

Oh wait. HALF THAT. YES. HALF THAT. Oh wait. That's 5080. LESS THAN HALF THAT bc salvage. OH WAIT. A 24GB 5080 would only have 200% margin, but they really needed that extra 50%.

They're hurtin'.

OH WAIT. That's us. nVIDIA making a GD killing.
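For anyone who wants the napkin math spelled out, here's the same back-of-the-envelope estimate as a quick Python sketch. All of the figures (wafer price, yield, per-module GDDR7 cost) are the rough guesses from the post above, not confirmed numbers:

```python
# Back-of-the-envelope BOM estimate using the rough figures quoted above (all assumptions).
wafer_cost = 15_000     # USD per 4 nm wafer (estimate)
good_dies = 100         # conservative "at least 100 good dies per wafer" assumption
die_cost = wafer_cost / good_dies           # ~$150 per GPU die

modules = 8             # 8 x 2 GB GDDR7 = 16 GB
module_cost = 28        # USD per 2 GB module (estimate)
memory_cost = modules * module_cost         # ~$224

bom = die_cost + memory_cost                # ~$374 for silicon + memory alone
msrp = 750
print(f"Rough silicon + memory cost: ${bom:.0f} vs ${msrp} MSRP (~{msrp / bom:.1f}x)")
```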
 
Lots of games use more than 16 GB, especially at 4K. Some at 1440p.

By use, do you mean allocate or actually need?

I don't think I've ever seen anyone showcasing VRAM-related stuttering on a 16 GB card, and definitely not at settings offering playable framerates. You're not going to be enabling path tracing in native 4K on a 5080, even if it had 24 GB.
I've seen a few examples of games spilling over 12 GB (maxed out Hogwarts Legacy would be one), but also not at settings offering playable performance.
The only inexcusable amount is 8 GB. A 5060 with this amount of memory should not even be allowed to launch. They should wait for 3 GB modules and make it 12 GB.

Yes, we should be getting more VRAM at these prices, but to say that we actually need more than 16 GB is not accurate. Even when the PS6 comes out, it'll take a few years before games start fully utilizing more memory. All the PS5 cross-gen games had no problems running on 8 GB.
 
By use, do you mean allocate or actually need?

I don't think I've ever seen anyone showcasing VRAM-related stuttering on a 16 GB card, and definitely not at settings offering playable framerates. You're not going to be enabling path tracing in native 4K on a 5080, even if it had 24 GB.
I've seen a few examples of games spilling over 12 GB (maxed out Hogwarts Legacy would be one), but also not at settings offering playable performance.
The only inexcusable amount is 8 GB. A 5060 with this amount of memory should not even be allowed to launch. They should wait for 3 GB modules and make it 12 GB.

Yes, we should be getting more VRAM at these prices, but to say that we actually need more than 16 GB is not accurate. Even when the PS6 comes out, it'll take a few years before games start fully utilizing more memory. All the PS5 cross-gen games had no problems running on 8 GB.

I mean making it so the game doesn't 'stutter struggle' and/or have problems with swapping (which can sometimes show up as graphical glitches/dithering), which people sometimes blame on the games.

If you think about it for a second, you'll get it. You've likely seen it, especially in Hogwarts. Ofc there are ways to keep these things from happening (one or the other), but there's almost always a limitation.

You are absolutely not correct that 12GB is a good amount for current feature sets. Even W1zzard changed his monologue in the current conclusion to reflect this. Pretty sure it used to be 8, then 12, now 16GB.

Ofc, all of that depends on the actual resolutions and quantity of the textures, not to mention other features that use VRAM (like FG/RT/etc.).

W1zzard, again, PURPOSELY DOES NOT SHOW THIS BECAUSE HE THINKS IT MEANS A GAME IS BADLY OPTIMIZED. In reality, his chosen settings are using low-rez textures and/or swapping more often, if the game even can.

Go look it up, pretty sure HUB showed this a long while back? Again, some people just be like 'why texture funny-lookin'?'. BC RAM, that's why.

I'll ask you this: When next-gen GPUs arrive, what configs do you think they will be? I would assume 16GB (low-end, 128-bit), 18GB (192-bit), and 24GB+ (256-bit+). That is 16GB minimum. Do you disagree?
Do you expect those 128-bit cards to be running maxed-out 4K RT? Or rather baseline 1080p? Maybe that, maybe less than that upscaled? 1440p with quality upscaling is 960p. Again, watch for this on the 5070.
Mins. Not averages, W1zzard.
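The capacities in that list fall straight out of bus width and module density (one GDDR module per 32-bit channel, two in a clamshell layout). A quick illustrative sketch, with the caveat that which configs actually ship is pure speculation:

```python
# VRAM capacity from bus width and GDDR module density.
# One module per 32-bit channel; clamshell boards mount two per channel.
def vram_gb(bus_width_bits: int, module_gb: int, clamshell: bool = False) -> int:
    modules = (bus_width_bits // 32) * (2 if clamshell else 1)
    return modules * module_gb

print(vram_gb(128, 2))                  # 8 GB  - today's 128-bit cards
print(vram_gb(128, 3))                  # 12 GB - 128-bit with 3 GB modules
print(vram_gb(128, 2, clamshell=True))  # 16 GB - 128-bit needs clamshell (or 4 GB modules)
print(vram_gb(192, 3))                  # 18 GB
print(vram_gb(256, 3))                  # 24 GB
```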

Now think of a PS6. How much RAM do you think it will have? How much devoted to graphics? Do you think it will run at 1440p all the time?
Keep dreaming... considering this card can't even keep 1440p RT mins... It'll probably be 1080p upscaled... and use at least 16GB for the GPU. Again, you can see this in stuff like Spider-Man. 1080p? Yes. 1440p? No.

Current consoles run games like this, like Star Wars Outlaws (which are likely set up for decent settings/resolution on next-gen) at 720p. That's seven two zero p's. 8GB. Seven twenty pee.
One thousand two hundred and eighty by seven hundred and twenty pixels.

Again, this will make more and more sense as more and more engines are adapted to take advantage of next-gen consoles (like Snowdrop already is, apparently).
Those times are fast approaching, as already shown by RT options on PC that aren't in the current-gen console versions.
 
I noticed the remark about memory overclocking and the 375 MHz cap. Is this also the case for the 5080 and 5090?
Yes

Did you switch from allocation to usage in game reviews? I know it was something like that. Because *both* are important. You can't just go by usage, because there are too many variables from swapping (or not).
In game reviews? I've been testing VRAM since forever, but only allocation. This is a graphics card review, not a game review though.

There is no way to track usage. Each frame is different: each one touches different resources, many come from the various caches in the GPU, and this changes from frame to frame, even when standing still. GPU vendors have software that can capture the state of the GPU and everything related, and they can replay the command buffers and analyze a single frame. This is how they design their next-gen GPUs. Nothing like that is accessible outside of these companies.
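That's also why the numbers everyone captures at home are allocation counters, nothing more. A minimal sketch (assuming an NVIDIA GPU and the pynvml package, purely to illustrate the distinction) of what such a query returns:

```python
# Query per-device VRAM allocation via NVML (what monitoring tools report).
# This is memory *reserved* on the device, not what a given frame actually touches.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)       # total / used / free, in bytes
print(f"Allocated: {mem.used / 2**30:.2f} GiB of {mem.total / 2**30:.2f} GiB")
pynvml.nvmlShutdown()
```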
 
I mean making it so the game doesn't 'stutter struggle' and/or have problems with swapping (which can sometimes show up as graphical glitches/dithering), which people sometimes blame on the games.
Your whole take on this is quite literally retarded. Instead of blaming crap developers for making s**tty PC ports of console games, you blame W1zz for blaming those developers. I cannot stress enough how mind-bogglingly braindead your view is; I've come across some dumb takes in my 2 decades online, but this one is up there with the stupidest.

It is not NVIDIA's fault that developers make s**tty ports, nor is NVIDIA required - and nor should they be - to build their hardware to cater for s**tty ports.

Stop posting dumb crap and go sit in the dunce corner and for once THINK about what you're posting, BEFORE you post it.
 
In reality, it is using low-rez textures and/or swapping more often, if it even can.

Go look it up, pretty sure HUB showed this a long while back? Again, some people just be like 'why texture funny-lookin'?'. BC RAM, that's why.
That may have been just one game, and it could be that the textures aren't preloaded just yet. And the whole picture is washed out, as if the viewing distance is low.
Most of the time I can't even hit 6-8 GB of usage while getting 100 FPS.
 
People are putting way too much faith in AMD. We're talking about AMD, who love shooting themselves in the foot every time. So I expect AMD to screw up the pricing and then quickly drop prices again after 1-2 months, just to piss off the early buyers and sour the relationship with the people who were holding out and hoping for the best.
 
Yes


In game reviews? I've been testing VRAM since forever, but only allocation. This is a graphics card review, not a game review though.

There is no way to track usage. Each frame is different: each one touches different resources, many come from the various caches in the GPU, and this changes from frame to frame, even when standing still. GPU vendors have software that can capture the state of the GPU and everything related, and they can replay the command buffers and analyze a single frame. This is how they design their next-gen GPUs. Nothing like that is accessible outside of these companies.
I know that, which is why trusting usage is sketchy. Isn't that literally what PresentMon/RTSS attempt to do, though: report usage/allocation?
Your whole take on this is quite literally retarded. Instead of blaming crap developers for making s**tty PC ports of console games, you blame W1zz for blaming those developers. I cannot stress enough how mind-bogglingly braindead your view is; I've come across some dumb takes in my 2 decades online, but this one is up there with the stupidest.

It is not NVIDIA's fault that developers make s**tty ports, nor is NVIDIA required - and nor should they be - to build their hardware to cater for s**tty ports.

Stop posting dumb crap and go sit in the dunce corner and for once THINK about what you're posting, BEFORE you post it.

I find this quite humorous. A port using next-gen features the current consoles can't support is a crappy port? Or using higher-rez textures and keeping them loaded? Which? I'm lost.
It's not the game. It's the cards. You're blaming the wrong thing.

I'm sorry you literally can't see the limitations imposed on each card. They're very apparent if you're looking.
Again, I challenge W1zzard to post 1440p RT mins for games that average ~60 fps at 1440p RT. And 4K quality-upscale mins.
What is less than 1440p? 1080p. That's correct. Or 960p if you use quality upscaling to 1440p, which is less than 1080p. Do you need to run those at 1440p/60 fps? You do not. Are they selling it this way? Yes.
Is W1zzard capitulating to this? You be the judge.
Will AMD be able to run 1080p? Probably yes. This is why their card makes just as much sense.
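For anyone who doubts the 960p figure: quality-mode upscaling renders at roughly two thirds of the output resolution per axis, so the internal resolutions work out like this (a quick sketch; the scale factors match the commonly published DLSS/FSR presets):

```python
# Internal render resolution for common upscaler presets (per-axis scale factors).
PRESETS = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def render_res(width: int, height: int, preset: str) -> tuple[int, int]:
    scale = PRESETS[preset]
    return round(width * scale), round(height * scale)

print(render_res(2560, 1440, "Quality"))      # (1707, 960)  -> "1440p quality" is 960p internally
print(render_res(3840, 2160, "Quality"))      # (2560, 1440) -> "4K quality" is 1440p internally
print(render_res(3840, 2160, "Performance"))  # (1920, 1080) -> "4K performance" is 1080p internally
```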

Is this the devs' fault, or nVIDIA keeping you on a string to upgrade when, again, the thresholds are very apparent? They literally inch everything along. If you can't see this, you're blind.

You can see where 45TF and 16GB both become requirements (the 4070 Ti/5070 are both below that on purpose). This is already apparent and will soon become more so.
You can see where 60TF and >16GB both become requirements across multiple scenarios. Are basically all their cards positioned just over/under this? Yeeeeppppp. Can you OVERCLOCK a 5080 above 60TF? Yes. Is it RAM-limited? Yes.
Is the 9070 XT literally hopping over the 45TF/12GB limitation (the one AMD themselves put on the 7800 XT at 45TF, even with 16GB)? YEEEEEP. Will this make a lot of settings playable that are beyond 12GB cards but the same as GB203? PROBABLY.
Is it still below 60TF raster? Yep. Is the 7900 XT limited to ~60TF? YEEPPP. Is that where you need more than 16GB? YEEEP. Again, it's very purposeful product segmentation... nvidia's is just more gross.

I don't like the idea of AMD selling 1080p->4K upscaling as 4K either. That's my point. But literally nothing can do 1440p RT 60 fps mins outside of the incredibly expensive 90-class cards. This will eventually change.
But when it does, do you not expect the bottom to rise? Would it not make sense if >60TF/>16GB then becomes what you want for even 1080p RT upscaling with FG etc.? I think this makes tremendous sense.
This is why their 'middle-ground' card may not hold up for the settings you want there, either. This is where (for right now) you would want a higher-clocked 24GB 5080, which doesn't exist RN.
But it surely will, because they will create that gap, and then you will see it more clearly. Before long you will need more raster (like a 4090+) for 1440p. It's pretty clear how things will probably evolve if you look at it.
And nVIDIA will be there to sell you every damn card they can below that (and those) threshold(s), until they have to do it.
Ask yourself why an 8-cluster (~12288 SP) 24GB nvidia card does not exist when it would clear many hurdles AD103/GB203 do not. 9728, 10240, 10752: all 16GB, all with different limitations.
AMD too, apparently... but at least they're trying to make a well-balanced card for what's affordably possible this generation. They are more a victim of circumstance than anything else. For nvidia, it's clearly planned.
 
No, it's 750 dollars; the remaining amount is AI-generated.


"DLSS Multi Price Generation generates up to three additional card values per traditionally calculated price, working in unison with the complete suite of scalping technologies to multiply price by up to 8X over traditional brute-force MSRP. This massive price improvement on GeForce RTX 50 series graphics cards unlocks stunning multiplies of original value for $1000 - $5000 Generally Unavailable Gaming Experience!"
 
Ask yourself why an 8-cluster (~12288 SP) 24GB nvidia card does not exist when it would clear many hurdles AD103/GB203 do not. 9728, 10240, 10752: all 16GB, all with different limitations.
AMD too, apparently... but at least they're trying to make a well-balanced card for what's affordably possible this generation. They are more a victim of circumstance than anything else. For nvidia, it's clearly planned.

N3P brings +60% logic density, so we're looking at 7680->12288C with 18GB for the 6070/Ti, and a 6080 with 16384C and 24GB. Rubin is releasing earlier, like Pascal, so there's no need for a Super on N4. What for?
 
"MSRP Promised"

I take it they will never increase the price and will keep stocking them going forward? They won't "magically" disappear from the product stack, right? :)
 
N3P brings +60% logic density, so we're looking at 7680->12288C with 18GB for the 6070/Ti, and a 6080 with 16384C and 24GB. Rubin is releasing earlier, like Pascal, so there's no need for a Super on N4. What for?
60%? My understanding is ~33% (according to listed metrics).

I still think nvidia will do 12288/9216/6144. This is because their cache is set up for each of those (256/192/128-bit) and for 3780/36000: 3780 MHz being the clock speed of Apple's first 3 nm chip, and 36 Gbps because they could use Micron.
Yes, Apple used N3B and nVIDIA will probably use N3E, but nvidia also likes to keep their clock speeds conservative for power/size and/or to have the ability to sell that clock improvement across more products over time.
Also, again, at ~3700 MHz you could argue 24GB of RAM could become a limitation in some respects, but we use more and more compute for things like upscaling, so it'll probably be fine most of the time.
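To put rough numbers on those memory guesses (illustrative only; the 36 Gbps speed and the bus widths are this post's assumptions, not announced specs):

```python
# Peak memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps), in GB/s.
def bandwidth_gb_s(bus_width_bits: int, gbps: float) -> float:
    return bus_width_bits / 8 * gbps

for bus in (256, 192, 128):
    print(f"{bus}-bit @ 36 Gbps: {bandwidth_gb_s(bus, 36):.0f} GB/s")
# 256-bit -> 1152 GB/s, 192-bit -> 864 GB/s, 128-bit -> 576 GB/s
```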

Also, again, 12288 is 8 clusters... where the 4080 and 4080S were kinda like 6.5ish (6*1536 plus 512 or 1024) and the 5080 is 7 (7*1536). Why, when the 4090/5090 are 16? I dunnnnnnoooo. Totally not because people would keep that.
Had AMD made a 7900 XTX with good RT and/or good RT at high clocks, it might've happened on 4/5 nm. Instead we have to settle for just N48, but it still makes just as much sense as most of nVIDIA's stack rn imho.
And much more sense than the 4070 Ti/5070.

You ask 'what for'? And I would say 'because they didn't do it in the first place' and 'so they could sell it again and not suck'.
Like I said, there are a ton of instances where you can see a 5080 would benefit from 18-20GB of RAM. >60TF/>16GB is a key metric for a lot of things, and that card would not suck long-term (theoretically).
It also won't be as high-end long-term as people might think, but it should age gracefully. Current performance scaling at high clocks is bad because of the 16GB. This is why the stock clock is low (2640 MHz when it's capable of at least ~3165 MHz).
It'll probably be replaced by 192-bit/18GB cards next gen that will, for all intents and purposes, perform the same, but it's a stop-gap. All of these things are stop-gaps, imo. nVIDIA's just dragging them out. Because $.
 
The headline has been changed and you have been exposed as an extreme Nvidia brand loyalist. Good job!
lol, I'm just messing with you people. 7 more minutes until stores open.

edit:

3:04pm - [screenshot: Newegg GeForce RTX 5070 Ti listings]

3:19pm - [screenshot: Newegg GeForce RTX 5070 Ti listings]

3:35pm - [screenshot: Newegg GeForce RTX 5070 Ti listings]

3:50pm
no more cards in stock
 

Is it me or are people more disingenuous nowadays when discussing review results? Linking to a page that supports their argument, when the literal next or previous page disagrees with it. I usually read comments in a popcorn capacity but that's just becoming more depressing. Or I'm getting old, maybe both.

According to this review:
- It's equivalent to a 7900 XTX in raster (a bit better at 1080p, a bit worse at 1440p, similar at 4K for some reason). 15-28% faster than the 4070 Ti.
- 33-42% faster than the 7900 XTX in ray tracing. 7-15% faster than the 4070 Ti (meh, guess RT really does rely on fixed-function hardware).
- MSRP is irrelevant. The graph people link to in order to say "look how good/bad the price/performance is" literally has multiple bars for different prices. The cognitive dissonance is real.

My thoughts:
- GPU prices have beaten me into submission over the years. Went from £200-300 in the early 2000s to £380 for a GTX 1070 (2016), £500 for my current 6800 XT (2023), and I think I'm willing to go to £750 next time...
- I don't care about RT unless a game's conventional lighting is actually trash*, in which case I'll turn on the minimum required settings, performance willing. Performance willing being key: RT is useless outside of the top SKUs (in my games). The 5070 Ti just ekes out 62 fps, so RT might be slightly growing on me. Still a 50% performance hit though, on what is supposed to be the RT industry leader's latest card.
- Don't care for upscaling or framegen. When it first came out I thought it might be good enough (for me) around DLSS5/FSR5. Looks like we're on track for that. Anything other than MSAA is already "fake", after all. So not too tribal. Ram and latency improvements seen, expect more next gen.
- Not sure where I stand on VRAM. More is of course better but that's assuming equal speed and price. For gaming 8GB still seems fine for some performance compromise, 12GB has none outside edge cases, 16GB still isn't enough for those edge cases. Not sure what to think, I only occasionally use GPU for other things. Religiously kept up with Wiz's VRAM usage tests since I got the 6800XT (to validate my purchase decision xD), not really feeling a push for more. Also, DirectStorage exists and SSDs keep getting faster.
- 7900 XTs (13% slower) are in stock for £650 and XTXs for £850. RTX 40xx cards don't exist, so they're irrelevant to my decision-making process. Not listed in the UK yet, but at £750 it would net me 56% more performance for 50% more money. Versus the 153%/32% going from the 1070 to the 6800 XT (see the quick sketch after this list)...
- More appealing than an MSRP 5080 (+12% fps, +33% $) but still not that appealing.
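Those upgrade percentages are just performance and price ratios; a quick sketch using the rough numbers above (prices in GBP, relative performance as stated; all figures are the estimates from this post):

```python
# Upgrade value: extra performance vs extra cost, both as percentage uplifts.
def uplift_pct(old: float, new: float) -> float:
    return (new / old - 1) * 100

# 1070 (£380) -> 6800 XT (£500): roughly +153% performance for +32% price.
print(f"{uplift_pct(1.00, 2.53):.0f}% perf for {uplift_pct(380, 500):.0f}% price")
# 6800 XT (£500) -> 5070 Ti (£750): roughly +56% performance for +50% price.
print(f"{uplift_pct(1.00, 1.56):.0f}% perf for {uplift_pct(500, 750):.0f}% price")
```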

I'm not willing to upgrade for less than a 50% performance uplift; this barely hits that, and the 9070 XT is expected to land between the 7900 XT and XTX, so it won't hit that. I think this gen is a bust for me. Bought a Quest 3 last year; it's "only" 2064 x 2208 per eye, but VR pixels are warped, so you want to render at higher resolutions to essentially AA the edges. Upscaling artifacts are "slightly" more noticeable when next to your eyeball, so if I already don't like it on a flatscreen... Also, AV1 would be nice. 2 years for next-gen GPUs, this'll be fun.

I don't think the future is much better. Nvidia stopped "old" gen production sooner than ever before (?), so prices won't go down over time. AMD won't do that for RDNA3 this gen, but most likely will for RDNA4 once UDNA rolls around. Currently their AI and gaming archs are separate, so they have to produce both. A big reason for unifying is to cut costs; cutting RDNA4 to streamline their production chain makes sense.

I don't think Nvidia cut the 40xx early just to make the 50xx look like better value, although that is part of the reason. Going forward I expect the decision to cut or not to be based on how many people jump on the latest AI cards' arch and how many prefer to save money with older ones. No point making low-profit gaming GPUs for an arch that no one's buying AI GPUs for. Expect the same decision-making process from AMD come UDNA.

* Specifically reflections in CP2077, which reflect your nephew's Minecraft world with RT off (got to save money somewhere, ig). Not too much of a hit on the 6800 XT for just reflections (last I played, around the 2.0 launch).
 
Oh look at that, all of the ones for $749 are gone on Newegg... lol, but if you really want one you can get one off of eBay in a few minutes...

EDIT: It's early... did they ever show in stock? Because they were always showing out of stock.
 
This card is $909.99 at MC now. Did MSI lie or is MC price gouging?
$829 at Newegg and in stock... still much more than the promised $750.
 
Oh look at that, all of the ones for $749 are gone on Newegg... lol, but if you really want one you can get one off of eBay in a few minutes...

EDIT: It's early... did they ever show in stock? Because they were always showing out of stock.
No, they never went in stock; they stayed out of stock. There never were any cards for $749, and if they did show up, it was for 30 seconds or less.
 