Thursday, November 6th 2014

MSI Announces Radeon R9 290X Gaming 8GB Graphics Card

MSI is proud to announce the availability of the new R9 290X GAMING 8G graphics card. Packed with 8 GB of GDDR5 memory operating at 5500 MHz, plus a host of extra features, the new R9 290X GAMING 8G is built to drive UltraHD gaming resolutions without any problem. The MSI Twin Frozr IV Advanced cooler keeps the card cool so you can enjoy maximum performance, while AMD's PowerTune technology enables the R9 290X GAMING 8G to run at its highest clock speeds.

The card supports the latest industry standards and exciting new technology such as Mantle in Battlefield 4. Thanks to the bundled MSI GAMING App, gamers can quickly switch between three presets, including a Silent mode optimized for power efficiency and an OC Mode that gets the most out of the graphics card without requiring any overclocking knowledge. The R9 290X GAMING 8G has been designed to deliver a fluid and silent gaming experience with true next-gen performance at 4K UHD resolutions and beyond, without sacrificing thermals or acoustics.
Stay cool in combat
The MSI Twin Frozr IV Advanced has been completely customized for the R9 290X GAMING 8G to deliver the best thermal performance. The design uses a larger copper base for heat absorption, and the heat pipes contact a larger portion of the heat sink and are exposed to more airflow, keeping temperatures optimal so the GPU can sustain its highest performance. Combined with the dual form-in-one heat sinks that add both cooling capacity and structural reinforcement, the R9 290X GAMING 8G stays cool, quiet and safe.

Optimized for Gamers
The new R9 290X GAMING 8G is packed with features that benefit every gamer. AMD TrueAudio technology allows far more realistic positional audio, with the added benefit of surround sound over connected displays. Mantle lets game developers talk directly to the GPU to optimize performance. All of this can be displayed in glorious UltraHD / 4K resolution and beyond, as the new R9 290X GAMING 8G offers unmatched performance at 4K. Displays connect through the DVI, HDMI and DisplayPort connectors, and up to six dedicated displays can be driven by the R9 290X GAMING 8G for an amazing Eyefinity experience.

R9 290X GAMING 8G Technical specifications
  • GPU: Hawaii XT
  • R9 290X GAMING 8G Clock speeds: 1040 MHz (OC Mode)
  • Memory size / Speed: 8192 MB / 5500 MHz (see the bandwidth sketch below)
  • Connectivity: DisplayPort / HDMI / 2x DL-DVI-D
  • Card Dimensions: 276x127x39 mm
  • Afterburner OC support: GPU / VDDCI Overvoltage
  • Power connectors: 1x 8-pin, 1x 6-pin PCI Express
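
As a quick sanity check on the numbers above, the peak memory bandwidth can be estimated from the listed memory speed. A minimal sketch in Python; note that the 512-bit bus width is not in the specification list and is assumed here from the reference Hawaii XT design:

```python
# Rough GDDR5 bandwidth estimate for the R9 290X GAMING 8G.
# Assumption: 512-bit memory bus (reference Hawaii XT design);
# 5500 MHz is the effective data rate taken from the spec list above.

effective_data_rate_mhz = 5500          # MT/s per pin
bus_width_bits = 512                    # assumed, not listed above

bandwidth_gbs = effective_data_rate_mhz * 1e6 * (bus_width_bits / 8) / 1e9
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # ~352 GB/s
```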

49 Comments on MSI Announces Radeon R9 290X Gaming 8GB Graphics Card

#26
EarthDog
I use almost 3 GB in BF4 at 2560x1440 (default Ultra). Amazed you aren't seeing more than 2 GB in other titles.

As far as single-card performance at 4K goes, it depends on the title and settings, but judging by benchmarks most aren't what one would consider 'playable' (30 fps+) with reasonable texture settings. I'd rather rock 1440 and Ultra than 4K and Medium... but that is just me.

Not what I would consider playable here either. But it isn't an FPS, so more forgiveness is built in, not to mention it's largely a CPU-bound game: www.pugetsystems.com/labs/articles/Star-Citizen-Benchmark-Arena-Commander-v0-8-571/page3
#27
rooivalk
D007: I keep seeing people saying 4K is too much for one card.
I have to say that in a lot of cases you are wrong.
I was playing in 4K on one GTX 680 for some time.
One GTX 980 runs Star Citizen at 4K perfectly smoothly.
Tested it myself on Ultra.

I have never seen anywhere near 8 GB of memory usage though.
Most I have seen is like 2.

You're right, but the GTX 980 is only borderline (~30-40 fps) 4K-capable with recent games. It's not even capable of outputting 30 fps in Crysis 3.
I'm not so sure about next year's games, or with AA, texture mods, and ENB, even though it's currently the fastest and most expensive single card (not counting the old and odd Titans).

Comfortably running 4K with a single card is still a year or two away.
#28
D007
rooivalk: You're right, but the GTX 980 is only borderline (~30-40 fps) 4K-capable with recent games. It's not even capable of outputting 30 fps in Crysis 3.
I'm not so sure about next year's games, or with AA, texture mods, and ENB, even though it's currently the fastest and most expensive single card (not counting the old and odd Titans).

Comfortably running 4K with a single card is still a year or two away.

Crysis is a terrible benchmark, IMHO.
Crysis is the perfect example of a badly optimized game.
I really don't think about "oh, can I play Crysis in 4K".
Who does?

I can comfortably play Star Citizen at 4K on one card.

I understand what you mean, and to an extent I agree, but most games you will be able to play in 4K on one GTX 980.
#29
Scrizz
EarthDog: I use almost 3 GB in BF4 at 2560x1440 (default Ultra). Amazed you aren't seeing more than 2 GB in other titles.

As far as single-card performance at 4K goes, it depends on the title and settings, but judging by benchmarks most aren't what one would consider 'playable' (30 fps+) with reasonable texture settings. I'd rather rock 1440 and Ultra than 4K and Medium... but that is just me.

Not what I would consider playable here either. But it isn't an FPS, so more forgiveness is built in, not to mention it's largely a CPU-bound game: www.pugetsystems.com/labs/articles/Star-Citizen-Benchmark-Arena-Commander-v0-8-571/page3

That's an old version of SC.
#31
R00kie
Well, tbh they could've added that new TFV cooler on top. I mean, why wouldn't you?
#32
D007
btarunr: You won't need AA with 4K, unless you're using 65-inch or 80-inch 4K TVs instead of 28-inch or 32-inch monitors. At monitor sizes, the jagged lines simply aren't discernible.

You are pretty correct there.
I find that on my 50" 4K TV I use 2x AA though, never more.
Totally unnecessary to use more.
#33
Prima.Vera
D007: Crysis is a terrible benchmark, IMHO.
Crysis is the perfect example of a badly optimized game.
I really don't think about "oh, can I play Crysis in 4K".
Who does?

I can comfortably play Star Citizen at 4K on one card.

I understand what you mean, and to an extent I agree, but most games you will be able to play in 4K on one GTX 980.

Star Citizen is not even released. What are you talking about?!
#34
D007
Prima.Vera: Star Citizen is not even released. What are you talking about?!

You do know there is a multiplayer mode, right?
It runs perfectly in it.
#35
RealNeil
Fluffmeister: the amazing, all-conquering GTX 970.
LOL!
#36
EarthDog
Well, it seems like this card will never make it to the States and will only be available in Asia and Europe...
#37
RealNeil
EarthDog: Well, it seems like this card will never make it to the States and will only be available in Asia and Europe...

There are too many people in the States with more money than sense for it not to get here. :peace:
Seeing as games will not use 8 GB of RAM at this point, I would be leery of buying into 8 GB of RAM on a GPU.

Once this massive-RAM idea is actually utilized and more mainstream, these 8 GB cards may not work properly the way the cards released at that time will.
#38
Aquinus
Resident Wat-man
RealNeil: Once this massive-RAM idea is actually utilized and more mainstream, these 8 GB cards may not work properly the way the cards released at that time will.

This card will become useful as soon as we start using more than 4 GB of video RAM; we don't need to be using 8 GB for it to be useful. 8 GB just seems like a big number, but it's no bigger a jump than 1 GB to 2 GB was back when most intense games didn't use more than 800 MB of VRAM. The simple fact is that not using all of it isn't an issue. If any more than 4 GB gets used, it's worth it, because the moment you start running off system memory because you've run out of dedicated memory, your performance starts dropping pretty fast.

The real question is: when will 4 GB not be enough, and will it be soon? I think someone earlier posted that at 4K they are using as much as 3.5 GB of VRAM. That's really close to eating up 4 GB already. I think that's worth mentioning.
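
To put that 3.5 GB figure in perspective, here is a rough sketch (in Python) of what just the render targets cost at 3840x2160. The buffer list and formats are illustrative assumptions, not measurements from any particular game:

```python
# Illustrative VRAM footprint of a few common render targets at 3840x2160.
# The set of buffers and their formats are made-up examples, not data
# from a specific engine or title.

width, height = 3840, 2160
buffers_bytes_per_pixel = {
    "back buffer (RGBA8)": 4,
    "depth/stencil (D24S8)": 4,
    "HDR colour (RGBA16F)": 8,
    "G-buffer (4x RGBA8)": 16,
}

total = 0
for name, bpp in buffers_bytes_per_pixel.items():
    size_mib = width * height * bpp / 2**20
    total += size_mib
    print(f"{name:>22}: {size_mib:6.1f} MiB")
print(f"{'render targets total':>22}: {total:6.1f} MiB")
```

Even a generous set of full-resolution buffers only accounts for a few hundred MiB; the rest of the multi-gigabyte figures reported at 4K is mostly texture data, which is exactly what the extra memory on an 8 GB card gives room for.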
#39
EarthDog
I'm glad we have some people wearing their thinking caps here!
#40
RealNeil
The reason I said that today's 8 GB cards may not work properly in the future (when 8 GB is really needed, utilized, and properly implemented) has to do with the flow of data between the cards and their screens. Are the connectors we use today going to be relevant (bandwidth-capable) in a few years, when so much more data will be passing to the monitors?
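
The connector question can be put into rough numbers. A small sketch of the raw pixel-data rate at 4K, ignoring blanking intervals and link-encoding overhead for simplicity; the connector figures in the comments are approximate:

```python
def video_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw pixel data rate in Gbit/s, ignoring blanking and encoding overhead."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"4K @ 30 Hz: {video_gbps(3840, 2160, 30):5.1f} Gbit/s")   # ~6.0
print(f"4K @ 60 Hz: {video_gbps(3840, 2160, 60):5.1f} Gbit/s")   # ~11.9

# Approximate usable data rates of the connectors of this card's generation:
#   HDMI 1.4        ~ 8 Gbit/s  -> 4K only at 30 Hz
#   DisplayPort 1.2 ~17 Gbit/s  -> 4K at 60 Hz
```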
#41
EarthDog
Again, anything over 4 GB will render 6/8 GB useful... we are there at 4K for some titles, with more added weekly.

In a year or two, I would imagine so. By the time the connection method becomes obsolete, the cards should long be obsolete as well. Outside of NVIDIA's compression technology, 256-bit is 256-bit. Unless they completely change the way GPUs do things, which is possible, it will still be viable in, say, two years. And if they do change the way GPUs do things, then all other cards, VRAM amount be damned, will be in the same boat.
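
The "bus width times data rate" point is easy to illustrate. A minimal sketch, where the 256-bit / 7000 MHz figures are assumed to stand in for a current 256-bit card and the compression factor is a made-up knob for lossless color compression (its real gain varies by workload):

```python
def bandwidth_gbs(bus_bits, data_rate_mhz, compression_factor=1.0):
    """Raw memory bandwidth in GB/s; compression_factor is an assumed
    stand-in for lossless framebuffer compression, not a measured value."""
    return bus_bits / 8 * data_rate_mhz * 1e6 / 1e9 * compression_factor

print(f"256-bit @ 7000 MHz effective:          {bandwidth_gbs(256, 7000):.0f} GB/s")
print(f"256-bit @ 7000 MHz, ~1.3x compression: {bandwidth_gbs(256, 7000, 1.3):.0f} GB/s")
print(f"512-bit @ 5500 MHz effective:          {bandwidth_gbs(512, 5500):.0f} GB/s")
```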
#42
RealNeil
Another thing that bothers me: if you have two cards with 4 GB of RAM on them, why can't you have 8 GB to play with in SLI or Crossfire? (Yeah, just getting bitchy now.)
#43
EarthDog
Blame SLI/CFx and how it currently works...
#44
RealNeil
EarthDog: Blame SLI/CFx and how it currently works...

I understand that, but I wish they would figure out how to utilize all of every card's RAM.
#45
Aquinus
Resident Wat-man
RealNeil: Another thing that bothers me: if you have two cards with 4 GB of RAM on them, why can't you have 8 GB to play with in SLI or Crossfire? (Yeah, just getting bitchy now.)

RealNeil: I understand that, but I wish they would figure out how to utilize all of every card's RAM.

They need to duplicate it. Both cards are rendering the exact same scene, which requires textures to be loaded on both graphics cards. PCI-E communication is relatively slow compared to the memory bus on a GPU, so there is no way you could share the data and process it in parallel.

Think of it this way: if you have two GPUs, one renders a frame, then the next renders a frame, then the first renders again, going back and forth. Both GPUs need all the assets to render that scene, therefore memory doesn't scale out. Also, sharing memory would imply a memory controller shared by two GPUs, and that adds latency and reduces bandwidth, which could cripple a GPU.

All in all, things are the way they are because people who are very smart and do this for a living haven't thought up a better way to do it. So before you complain a little too much, I beg you to remember that this isn't easy stuff to design and there are limitations to what hardware as we know it can do.
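
A toy sketch of that alternate-frame-rendering argument; the asset sizes and the two-GPU setup are made up purely for illustration:

```python
# Toy model of alternate-frame rendering (AFR): two GPUs take turns on frames,
# so each must keep the full working set resident. Asset sizes are invented.

assets_mib = {"textures": 2800, "geometry": 300, "render targets": 250}
working_set_mib = sum(assets_mib.values())
per_card_vram_mib = 4096

gpus = ["GPU0", "GPU1"]
for frame in range(4):
    renderer = gpus[frame % 2]      # frames alternate between the two GPUs
    print(f"frame {frame}: rendered by {renderer}, "
          f"needs all {working_set_mib} MiB of assets in its own VRAM")

print(f"Two {per_card_vram_mib} MiB cards in AFR still behave like "
      f"{per_card_vram_mib} MiB of usable VRAM, not {2 * per_card_vram_mib} MiB.")
```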
#46
eidairaman1
The Exiled Airman
GPUs just can't request more work than what a piece of software can push out. That's why I think games should be developed on PC first and then pushed down to consoles, dumbed down.
#47
RealNeil
Aquinus: So before you complain a little too much, I beg you to remember that this isn't easy stuff to design and there are limitations to what hardware as we know it can do.

It's not a complaint as much as a wish for a better way of doing it. I like the GPUs that I have, and the new ones coming out too.
I remember that in the past it was possible to add more memory to some of my video cards. Plug-and-play memory would be cool nowadays.
#48
Aquinus
Resident Wat-man
RealNeil: It's not a complaint as much as a wish for a better way of doing it.

That's like people at work saying "Why can't we just do X, or add feature Y?" More often than not, people underestimate the effort because they don't recognize all the work that's needed to do one thing. Crossfire/SLI is no different. Its requirements are vastly understated, which leads to people like you wondering why it can't be simpler, and the answer is that it can't.

RealNeil: I remember that in the past it was possible to add more memory to some of my video cards. Plug-and-play memory would be cool nowadays.

Yeah, except for the fact that it would be slow, which is why they stopped doing it. Even with DIMMs like DDR3 or DDR4, each DIMM would have a 64-bit memory bus. To get to at least 256-bit you would need 4 DIMMs. Where the heck are those going to go on a GPU? Thanks, but no thanks. How about the 512-bit bus on the 290(X)? That would be 8 DIMMs.
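
The DIMM arithmetic above, spelled out as a quick sketch:

```python
# How many 64-bit DIMM channels it would take just to match a GPU memory bus.
DIMM_BUS_BITS = 64  # one DDR3/DDR4 DIMM provides a 64-bit channel

for gpu_bus_bits in (256, 384, 512):
    dimms_needed = gpu_bus_bits // DIMM_BUS_BITS
    print(f"{gpu_bus_bits}-bit GPU bus -> {dimms_needed} DIMMs to match the width")
# ...and that is before considering that socketed DDR3/DDR4 runs at a far
# lower per-pin data rate than the soldered GDDR5 on this card.
```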

I'm saying all this because CFX/SLI isn't going to change: both cards *NEED* to have everything required to render any given scene. Adding back-and-forth communication between the GPUs to share memory would slow down overall rendering. In fact, duplicating data such as textures is one of the ways they can run Crossfire without having games implement something insane, and without slowing the GPUs down with superfluous communication, because memory requests are blocking and would slow the entire thing down.

I'm responding in depth because people like you say, "Well, wouldn't it be nice if this worked like this?" The problem is that, more often than not, people over-simplify the problem and don't realize that one of the reasons Crossfire/SLI even works at all is because the data exists on both GPUs. So if you want all of your VRAM, don't run Crossfire/SLI; what you're carping about is part of how those technologies work.
#49
GorbazTheDragon
Really, the solution to the CFX/SLI issue is to make fatter cards with more chips on them. With something like a 295X2 or Titan Z, you have the potential to let two GPU chips access the same block of VRAM, or to have a command/instruction chip distribute the load over two powerful slave GPUs that share the VRAM.