
EVGA GeForce GTX 980 Ti VR Edition Starts Selling

btarunr

Editor & Senior Moderator
The EVGA GeForce GTX 980 Ti VR EDITION has arrived. Accelerated by the groundbreaking NVIDIA Maxwell architecture, the GTX 980 Ti delivers an unbeatable 4K and virtual reality experience. With 2,816 NVIDIA CUDA Cores and 6GB of GDDR5 memory, it has the horsepower to drive whatever comes next. And with the VR EDITION, you get an included 5.25" drive bay with front HDMI 2.0 and USB 3.0 giving you easy access to your VR device's input. The graphics card also has an internal HDMI connector, meaning no external cables will be visible.



New and Key Features:
  • Built for VR: Included 5.25" Drive Bay with Front HDMI 2.0 and USB 3.0.
  • Internal HDMI: Connects to a 5.25" Drive Bay, no visible external wires!
  • ACX 2.0+ Cooling: EVGA's latest cooling solution offers improved cooling, reduced fan noise and better overclocking.
  • 6GB of GDDR5 Memory: Gives you access to higher texture qualities in games, improved 4K gaming performance and optimized for the next generation of gaming.

For more information, visit this page.

 
Did they add ultra-wide low-latency VRAM to make it better at VR? Oh nope, guess not :rolleyes:
 
I want this for a tiny little LAN box where you can plug everything but the power cable in at the front. Makes things easier.
 
Did they add ultra-wide low-latency VRAM to make it better at VR? Oh nope, guess not :rolleyes:

You have a real stick up your ass when it comes to NVIDIA products, huh? Some sort of conspiracy theory, like they're out to get you?
 
That front output is actually a good idea. I like it.
 
You have a real stick up your ass when it comes to NVIDIA products, huh? Some sort of conspiracy theory, like they're out to get you?
idk? are they the dumb shits that fuck up games all the time?
 
idk? are they the dumb shits that fuck up games all the time?

No. That would be lazy developers. GameWorks is what you're getting at, I assume, but it's not entirely useless.
However, on topic, this VR edition is a bit of a con: a normal card with a drive bay for cables. Hmm....
 
Wouldn't a cheap $10 10 ft HDMI cable be a better solution than the breakout box?
 
Wouldn't a cheap $10 10 ft HDMI cable be a better solution than the breakout box?

A better solution? No. A far more economical one, though, which is why this is probably useless. I'm sure they charge a premium for it.
 
Did they add ultra-wide low-latency VRAM to make it better at VR? Oh nope, guess not :rolleyes:

Still 4+2 GB too, which really helps developers...
 
4K only if you set all details to low....
 
I think it's actually a kinda clever add-on. I mean, it's good for those wanting to do VR to have the input at the front of the case like that. While there are alternatives to this, as long as the premium isn't crazy I see no problem with it!
 
Still 4+2 GB too, which really helps developers...

I'm sorry, what? Are you suggesting the 980 Ti is like the 970 in the split-RAM department? If so, you couldn't be further from the truth.
 
I'm sorry, what? Are you suggesting the 980 Ti is like the 970 in the split-RAM department? If so, you couldn't be further from the truth.

He's just a troll. He doesn't know.

If it weren't for the clever split RAM on the 970, that card would have had only 2 GB. At least, according to an interview with an NVIDIA representative from when the entire Internet broke over the whole thing. And then the guy who made the benchmark people were using to test it said it was not a valid way to verify the issue, and asked people to stop.
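For anyone who missed the controversy being referenced: the GTX 970's 4 GB is segmented into a 3.5 GB partition on a 224-bit path and a 0.5 GB partition on a 32-bit path, per NVIDIA's public clarification at the time. The peak-bandwidth arithmetic is simple; a rough sketch in Python (the helper function is ours, just for illustration):

```python
# GTX 970 segmented memory: peak bandwidth per partition,
# using the published 7 Gbps effective GDDR5 data rate.

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate, over 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8.0

GDDR5_RATE = 7.0  # Gbit/s per pin, effective

fast_partition = peak_bandwidth_gbs(224, GDDR5_RATE)  # 3.5 GB segment
slow_partition = peak_bandwidth_gbs(32, GDDR5_RATE)   # 0.5 GB segment
full_bus       = peak_bandwidth_gbs(256, GDDR5_RATE)  # advertised 256-bit figure

print(fast_partition, slow_partition, full_bus)  # 196.0 28.0 224.0
```

The slow segment's 28 GB/s is a seventh of the fast segment's 196 GB/s, which is why allocations spilling past 3.5 GB hurt so much.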
 
He's just a troll. He doesn't know.

If it weren't for the clever split RAM on the 970, that card would have had only 2 GB. At least, according to an interview with an NVIDIA representative from when the entire Internet broke over the whole thing. And then the guy who made the benchmark people were using to test it said it was not a valid way to verify the issue, and asked people to stop.

Yes, according to the people that want to save some skin on their ass, lol.

Maxwell cores are the best of the 28 nm process, but everything else attached to that core... yup. Hitting high frequency with fewer cores makes things more efficient overall, but it's already starved by VRAM bandwidth and relying on compression to stay relevant.

They talk up some garbage that now, with Maxwell, they can cut down the arch like this... oops, fail there, not following the leader. The 970 has such high processing ability per core that it would be silly to see one with 2 GB or even 3 GB, especially with lower bandwidth and a slimmer bus width, if you go by the AMD standard for how a GPU should be balanced.

If AMD made the 970... 4 GB
If AMD made the 980... 6 GB
If AMD made the 980 Ti... 8 GB
If AMD made the Titan... 12 GB

But they don't want a messy situation with high-end GPUs, especially in fields where they are among the world leaders. They tell no lie that 4 GB of HBM is just as good as 8 GB of GDDR5, but if they can't get NV to figure it out, then what the hell should they do? Truth is, it's just a load of shit, and NV knows what they are doing, and they know most people have no damn idea... even at the patent office.
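On the "starved by VRAM bandwidth" point: the raw peak numbers from both vendors' published specs, side by side. Maxwell's delta color compression raises *effective* throughput above the raw figure, which is the compression point being argued. A rough sketch:

```python
# Raw peak memory bandwidth: GB/s = bus width (bits) x data rate (Gbit/s per pin) / 8.
cards = {
    "GTX 980 Ti": (384, 7.0),   # GDDR5 at 7 Gbps effective
    "Fury X":     (4096, 1.0),  # HBM1 at 1 Gbps effective
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits * gbps / 8:.0f} GB/s")
# GTX 980 Ti: 336 GB/s
# Fury X: 512 GB/s
```

The narrow-but-fast GDDR5 bus and the wide-but-slow HBM stack land within about 1.5x of each other in raw terms; compression closes much of the remaining gap in practice.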
 
Yes, according to the people that want to save some skin on their ass, lol.

Maxwell cores are the best of the 28 nm process, but everything else attached to that core... yup. Hitting high frequency with fewer cores makes things more efficient overall, but it's already starved by VRAM bandwidth and relying on compression to stay relevant.

They talk up some garbage that now, with Maxwell, they can cut down the arch like this... oops, fail there, not following the leader. The 970 has such high processing ability per core that it would be silly to see one with 2 GB or even 3 GB, especially with lower bandwidth and a slimmer bus width, if you go by the AMD standard for how a GPU should be balanced.

If AMD made the 970... 4 GB
If AMD made the 980... 6 GB
If AMD made the 980 Ti... 8 GB
If AMD made the Titan... 12 GB

But they don't want a messy situation with high-end GPUs, especially in fields where they are among the world leaders. They tell no lie that 4 GB of HBM is just as good as 8 GB of GDDR5, but if they can't get NV to figure it out, then what the hell should they do? Truth is, it's just a load of shit, and NV knows what they are doing, and they know most people have no damn idea... even at the patent office.

Fury X = 8.9 billion transistors
980 Ti = 8 billion transistors

Fury X = 4096 shaders
980 Ti = 2816 shaders/CUDA cores

Fury X texture fillrate = 268.8 GT/s
980 Ti texture fillrate = 176 GT/s

Fury X ROPs = 64
980 Ti ROPs = 96

Fury X pixel fillrate = 67.2 GP/s
980 Ti pixel fillrate = 96 GP/s

Fury X memory bus = 4096-bit
980 Ti memory bus = 384-bit
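The fillrate figures in this list fall straight out of unit counts times clock. A quick check of the numbers above, assuming the reference clocks (1050 MHz Fury X, 1000 MHz 980 Ti base; the helper name is ours):

```python
# Texture fillrate = TMUs x clock; pixel fillrate = ROPs x clock.
def fillrate_gps(units: int, clock_mhz: float) -> float:
    """Giga-operations per second for `units` fixed-function units at `clock_mhz`."""
    return units * clock_mhz / 1000.0

# Fury X: 256 TMUs, 64 ROPs at 1050 MHz
assert abs(fillrate_gps(256, 1050) - 268.8) < 1e-9  # GT/s texture
assert abs(fillrate_gps(64, 1050) - 67.2) < 1e-9    # GP/s pixel

# 980 Ti: 176 TMUs, 96 ROPs at 1000 MHz (reference base clock)
assert fillrate_gps(176, 1000) == 176.0             # GT/s texture
assert fillrate_gps(96, 1000) == 96.0               # GP/s pixel
```

Note the split: Fiji wins on texture throughput, GM200 on pixel throughput, which matches the ROP counts above.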

HBM doesn't make AMD awesome; it helps gloss over the Fiji arch's weaknesses. When it comes to business, Nvidia knew what they were doing perfectly well. If you compare the hardware numbers, Fury X should wipe the floor with Maxwell, but it doesn't. However they did it, AMD misfired. And as great as the HBM implementation is, it was an early adoption with little traction. Arguments suggest they had to go HBM, otherwise the power draw would have been silly.

Don't get me wrong, I think Fury X is a great card, but it should be better.
 
nvidia for VR ... :D
 
980 Ti = 2816 shaders/CUDA cores

Not so much to argue, but didn't Nvidia revise what they call/count as a CUDA core, and that's why that number doesn't jibe? I thought they went from a smaller number of higher-clocked (hot-clock) units on Fermi to a larger number of lower-clocked units on Kepler. So it's hard to call one shader equal to another.

It's hard to say, because there's more to it than the "bits and pieces"; there's their sum together, and it's more about how well they interact. On that, we can say Fiji doesn't appear to make the most of what's placed on it vs. the GM200, if only because there are GTX 980 Tis that in more cases provide a percentage increase in FPS... which is one metric. The only thing you can say is Nvidia has a 601 mm² part while AMD's nearly competitive part is 596 mm², not much else.
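Following on from the die-size point: combined with the transistor counts quoted earlier in the thread (8 billion for GM200, 8.9 billion for Fiji), the densities work out nearly identical on the same 28 nm node. A rough sketch:

```python
# Transistor density in millions of transistors per square millimetre.
def density_mtr_per_mm2(transistors: float, die_mm2: float) -> float:
    return transistors / die_mm2 / 1e6

gm200 = density_mtr_per_mm2(8.0e9, 601)  # ~13.3 MTr/mm^2
fiji  = density_mtr_per_mm2(8.9e9, 596)  # ~14.9 MTr/mm^2
print(round(gm200, 1), round(fiji, 1))   # 13.3 14.9
```

Fiji packs about 12% more transistors into a marginally smaller die, so on paper the silicon budgets really are a wash; the performance gap comes from how each architecture spends it.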
 
I'm impressed (although I guess I shouldn't be surprised) by how Nvidia completely stole AMD's Fury X thunder. They sliced their GM200 as required, dictated the price, and sat back and watched the fireworks (and tears).

Fact is, Nvidia didn't need a fully fledged GM200 to compete with Fiji.
 
Not so much to argue, but didn't Nvidia revise what they call/count as a CUDA core, and that's why that number doesn't jibe? I thought they went from a smaller number of higher-clocked (hot-clock) units on Fermi to a larger number of lower-clocked units on Kepler. So it's hard to call one shader equal to another.

It's hard to say, because there's more to it than the "bits and pieces"; there's their sum together, and it's more about how well they interact. On that, we can say Fiji doesn't appear to make the most of what's placed on it vs. the GM200, if only because there are GTX 980 Tis that in more cases provide a percentage increase in FPS... which is one metric. The only thing you can say is Nvidia has a 601 mm² part while AMD's nearly competitive part is 596 mm², not much else.


Well, in terms of architecture and core design, you simply cannot compare Nvidia and AMD directly.
 
I'm impressed (although I guess I shouldn't be surprised) by how Nvidia completely stole AMD's Fury X thunder. They sliced their GM200 as required, dictated the price, and sat back and watched the fireworks (and tears).

Fact is, Nvidia didn't need a fully fledged GM200 to compete with Fiji.
Lies... The Titan doesn't really sell unless people plan to put a water block on it and OC it, which has a lot to do with how much heat it already produces and needing the overclock to even be worth the money. Pretty much all efficiency is out the window at that point, but at least you can overclock it once it's under water... The Fury X has a custom water cooler and overclocks for shit.
 
Well, in terms of architecture and core design, you simply cannot compare Nvidia and AMD directly.
There is certainly a level of comparison to be had.
That would probably make a good thread, no? Pull up some white papers on the compute units for both camps?
 