Friday, July 14th 2017

Liquid Cooled AMD Radeon Vega Frontier Edition Now on Sale for $1,489.99

The liquid-cooled version of AMD's latest graphics card, meant for the "pioneering crowd" of prosumers, has been made available over at SabrePC. It sports the exact same GPU you'd find on the air-cooled version, featuring the same 4096 Stream Processors and 16 GB of HBM2 memory. The only differences are, you guessed it, the higher cooling capacity afforded by the AIO solution, and the correspondingly increased TDP, up from the 300 W of the air-cooled version to an eyebrow-raising 375 W. That increase in TDP must come partly from the cooling solution itself, but also from a (for now, anecdotal) ability of the card to more easily sustain higher clocks, closer to its AMD-rated 1,630 MHz peak core clock.

You can nab one right now in that rather striking gold-and-blue color scheme and have it shipped to you within 24 hours. Hit the source link for the SabrePC page.
Sources: SabrePC, Computerbase.de

96 Comments on Liquid Cooled AMD Radeon Vega Frontier Edition Now on Sale for $1,489.99

#51
cdawall
where the hell are my stars
RejZoRRead my post about tiling. Without it, you're essentially brute-forcing through things. Well, no wonder it has 400 W consumption at hardly the performance of a GTX 1080. Functional tiling should shave off 100W rather easily and bump the performance significantly. How much, no idea. But if it's already faster than a GTX 1080, you can get the idea. It still has the shader count advantage over the GTX 1080 series, and the clock gap has closed dramatically compared to Pascal from the Fury X's 1050 MHz.
Good to hear you got your electrical engineering degree while I was sleeping. So, a couple of things: these are less efficient per clock, so clock speed comparisons to the Fury are idiotic on a good day, and TBR won't lower power consumption; it will increase frame rate at the same power consumption.
Posted on Reply
#52
RejZoR
Lol, what has electrical engineering to do with a feature that makes graphic card not overdraw same shit over and over again? I guess Maxwell and Pascal are indeed powered by magic pixies then...
Posted on Reply
#53
Th3pwn3r
dwadeThis is like Walmart trying to sell Louis Vuitton products.
How so? Is it dumb, or fake? I'm confused.
Posted on Reply
#54
xkm1948
cdawallGood to hear you got your electrical engineering degree while I was sleeping. So, a couple of things: these are less efficient per clock, so clock speed comparisons to the Fury are idiotic on a good day, and TBR won't lower power consumption; it will increase frame rate at the same power consumption.
Exactly my thought. If TBR is so amazing, why won't it work? Unless the back end of the GPU really doesn't benefit much from having TBR. AFAIK Fiji was limited by both the ROPs and the front-end task manager. So I imagine patching an old design won't do much for overall efficiency improvement.


Dude, chill, anyone can see that RejZoR is the AMD fanatic here. No use discussing this any more. Even if RX Vega sucks bad he will still find excuses to spin it otherwise.
Posted on Reply
#55
RejZoR
Lol, another one accusing me of being an AMD fanatic/fanboy. If I got 10€ for every accusation of being an AMD fanboy, I'd actually be able to buy a Threadripper and an RX Vega FE (the AiO one!) :D
Posted on Reply
#56
cdawall
where the hell are my stars
xkm1948Exactly my thought. If TBR is so amazing, why won't it work? Unless the back end of the GPU really doesn't benefit much from having TBR. AFAIK Fiji was limited by both the ROPs and the front-end task manager. So I imagine patching an old design won't do much for overall efficiency improvement.


Dude, chill, anyone can see that RejZoR is the AMD fanatic here. No use discussing this any more. Even if RX Vega sucks bad he will still find excuses to spin it otherwise.
I'm not calling anyone a fanatic; I am saying he is wrong.
Posted on Reply
#57
0x4452
RejZoRFunctional tiling should shave off 100W rather easily and bump the performance significantly.
:peace:


If only you knew the true numbers... ;)
Posted on Reply
#58
Nokiron
RejZoRLol, what has electrical engineering to do with a feature that makes graphic card not overdraw same shit over and over again? I guess Maxwell and Pascal are indeed powered by magic pixies then...
TBR increases efficiency (performance per watt), it does not lower power consumption. Common misconception.

The only benefit to power consumption you'll get is somewhat less bandwidth shoveling, which might lower total consumption on the VRAM, but not by much.
Posted on Reply
#59
RejZoR
It does if you don't have to push the core so hard to achieve the same performance (and pushing it hard is what really brings massive power draw, as seen between the RX 480 and RX 580). Considering GPUs don't always have to run at full tilt, that voltage regulation is now highly dynamic, and the fact that AMD cards are now framerate-aware (the HiAlgo Boost subset also used for the Chill feature), I can see that as a possibility. We've seen with the Chill feature that power draw can be significantly lower simply through clever framerate and load balancing.
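The framerate-aware throttling described here can be sketched as a toy clock governor in the spirit of Radeon Chill. This is an assumed behaviour for illustration only, not AMD's actual algorithm; the function name, thresholds, and clock limits are all invented:

```python
# Toy clock governor (assumed behaviour, not AMD's actual algorithm):
# if frames arrive comfortably ahead of the target, drop the core clock
# to save power instead of racing ahead; clock back up when falling behind.
def next_clock(current_mhz, frame_ms, target_ms, step=50, lo=300, hi=1630):
    if frame_ms < target_ms * 0.9:   # comfortably ahead of target: downclock
        return max(lo, current_mhz - step)
    if frame_ms > target_ms:         # missing the target: clock back up
        return min(hi, current_mhz + step)
    return current_mhz               # roughly on target: hold the clock

# A scene rendering in 8 ms against a 60 FPS (16.7 ms) target lets the
# governor walk the clock down until frame time creeps back toward target.
```

The point is that with a feedback loop like this, a more efficient rasterizer translates into lower sustained clocks, and therefore lower power, whenever the framerate target is already being met.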
Posted on Reply
#60
Nokiron
RejZoRIt does if you don't have to push the core so hard to achieve the same performance (and pushing it hard is what really brings massive power draw, as seen between the RX 480 and RX 580). Considering GPUs don't always have to run at full tilt, that voltage regulation is now highly dynamic, and the fact that AMD cards are now framerate-aware (the HiAlgo Boost subset also used for the Chill feature), I can see that as a possibility. We've seen with the Chill feature that power draw can be significantly lower simply through clever framerate and load balancing.
The problem with your statement is that TBR is not a permanently active feature. There are plenty of games that see regressions and/or bugs with that rendering method. Which is exactly why Vega's DSBR (just like NVIDIA's TBR) is only active when it needs to be, and why there is a fallback method (passthrough).

You don't see power consumption increase on Maxwell or Pascal in games where they can't or don't use TBR. You just won't get optimal performance.

Again, TBR and DSBR are for alleviating bandwidth issues by decreasing the amount of time the path to VRAM is saturated. You are minimizing data movement. It's not a miracle cure for power consumption that some suggest it is.
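The bandwidth argument can be illustrated with a toy model (illustrative numbers only, nothing like Vega's actual DSBR): immediate-mode rasterization pays VRAM traffic for every shaded pixel of every primitive, while a binned rasterizer keeps each tile resident in on-chip cache and flushes it to VRAM once, so overlapping geometry stops multiplying the traffic:

```python
# Toy model of why tile binning cuts VRAM traffic. "Triangles" are
# axis-aligned rects for simplicity; counts are pixels moved to VRAM.
TILE = 32  # tile edge in pixels

def covered_pixels(tri):
    """Pixels covered by an axis-aligned primitive (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = tri
    return [(x, y) for y in range(y0, y1) for x in range(x0, x1)]

def immediate_mode_traffic(tris):
    # Every shaded pixel of every primitive hits VRAM, so overdraw
    # is paid for once per overlapping primitive.
    return sum(len(covered_pixels(t)) for t in tris)

def tiled_traffic(tris):
    # Bin primitives to tiles, shade each tile in on-chip cache,
    # and flush each touched tile to VRAM exactly once.
    touched = set()
    for t in tris:
        for (x, y) in covered_pixels(t):
            touched.add((x // TILE, y // TILE))
    return len(touched) * TILE * TILE

# Three overlapping 128x128 primitives: the overlapped region costs
# the immediate-mode path three times, the tiled path only once.
tris = [(0, 0, 128, 128), (32, 32, 160, 160), (64, 64, 192, 192)]
print(immediate_mode_traffic(tris))  # 49152 (3 x 128 x 128)
print(tiled_traffic(tris))           # 30720 (30 touched tiles x 1024)
```

Note what the model does and doesn't show: the tiled path moves less data over the VRAM bus, which is an efficiency win, but the shading work itself is unchanged, matching the point above that this is bandwidth relief rather than a power-consumption cure.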
Posted on Reply
#61
RejZoR
Which could explain why AMD picked a different approach than NVIDIA. Maybe they have different plans for it. And clearly, the software (driver) side plays a big role here. And AMD currently really doesn't have much to offer with Vega FE in terms of software features inside the driver. Which is why I'm so interested in RX Vega. They are cooking something; we just don't know for sure what it is.
Posted on Reply
#62
EarthDog
Hopefully it's not typical AMD fare... looks good on their menu, but when the plate arrives...

In this case, we are teased with a 375 W card that can't match a 1080 Ti (rumor)...

...maybe it's priced like a kids' meal though, making it palatable, if those performance results hold true. :)
Posted on Reply
#63
Vayra86
RejZoRWhich could explain why AMD picked a different approach than NVIDIA. Maybe they have different plans for it. And clearly, the software (driver) side plays a big role here. And AMD currently really doesn't have much to offer with Vega FE in terms of software features inside the driver. Which is why I'm so interested in RX Vega. They are cooking something; we just don't know for sure what it is.
Just being extremely late to market doesn't mean they're cooking something. And I strongly doubt something IS cooking atm; they're just preparing the launch. If major changes were still being made, there wouldn't be a release date, and expecting the 'gaming drivers' to make the difference needed to keep Vega RX from being a bit of an embarrassment is not entirely realistic IMO at this point. If that were the case, AMD has some really shitty risk management going on.

Vega RX might 'just' touch stock 1080 Ti performance situationally, in DX12 and well-optimized games, and AMD is going to market that as the 1080 Ti killer, while everyone knows the truth. It's 980 Ti vs Fury X all over again, mark my words.
Posted on Reply
#64
RejZoR
Why else would they go to all this length of making everything new when they could literally take the R9 Fury X, shrink it, overclock it, and call it a day? I mean, they are a massive company, what do you think? Why make all-new silicon, tons of new features, and require a massive driver rewrite for... hardly any boost? I know AMD did some weird shit a few years ago, but seeing how they deal with Ryzen/Threadripper/Epyc and how competent and competitive their products are across the entire range, I think AMD is turning a new page here. It's just that they are catching up after being an underperformer for so long. And you can't do that overnight. Also, don't judge it from Polaris. They had a very clear focus with Polaris; it was never meant to be king of the hill. They made a decision to fill the most profitable segment, and I respect that decision.

However, people treat RX Vega like it's literally just a shrunk R9 Fury X where drivers would be ready from day 1. Just Ctrl+F for "R9 Fury X" and replace it with RX Vega. When you have an all-new core, things don't just appear there. You need to write software for it. And that takes time. More than they initially expected, apparently. Few of us seem to understand that; 99% of people just entirely disregard it. Sometimes I just don't understand people...
Posted on Reply
#65
EarthDog
Because that is what AMD does. They aim for the future, but when the future arrives, it hasn't worked out nearly as well as planned... read: many cores over IPC (consumer, not enterprise). Read: HBM and its bandwidth. Read: Mantle. Vulkan... still waiting for more adoption; it smells like Mantle on that front, but we hope not.
Posted on Reply
#66
BiggieShady
RejZoR... feature that makes graphic card not overdraw same shit over and over again ...
You are describing a full-blown tile-based renderer, like the ones on mobile GPUs from ARM; this is only a tile-based rasterizer, which helps only with more efficient use of the on-die cache (less VRAM bandwidth used per frame).
Posted on Reply
#67
Vayra86
RejZoRWhy else would they go to all this length of making everything new when they could literally take the R9 Fury X, shrink it, overclock it, and call it a day? I mean, they are a massive company, what do you think? Why make all-new silicon, tons of new features, and require a massive driver rewrite for... hardly any boost? I know AMD did some weird shit a few years ago, but seeing how they deal with Ryzen/Threadripper/Epyc and how competent and competitive their products are across the entire range, I think AMD is turning a new page here. It's just that they are catching up after being an underperformer for so long. And you can't do that overnight. Also, don't judge it from Polaris. They had a very clear focus with Polaris; it was never meant to be king of the hill. They made a decision to fill the most profitable segment, and I respect that decision.

However, people treat RX Vega like it's literally just a shrunk R9 Fury X where drivers would be ready from day 1. Just Ctrl+F for "R9 Fury X" and replace it with RX Vega. When you have an all-new core, things don't just appear there. You need to write software for it. And that takes time. More than they initially expected, apparently. Few of us seem to understand that; 99% of people just entirely disregard it. Sometimes I just don't understand people...
Come on now, AMD is not implementing a new architecture; they are updating their existing design just like they did with Tonga. Saying all-new and revolutionary in PowerPoint slides a few dozen times doesn't make it so. Just like NVIDIA built on Kepler, AMD is working from a solid foundation called GCN. That's all.
Posted on Reply
#68
RejZoR
EarthDogBecause that is what AMD does. They aim for the future, but when the future arrives, it hasn't worked out nearly as well as planned... read: many cores over IPC (consumer, not enterprise). Read: HBM and its bandwidth. Read: Mantle. Vulkan... still waiting for more adoption; it smells like Mantle on that front, but we hope not.
People keep saying that. Yet old GCN cards still perform really well under DX12 and Vulkan. HBM, you want to say it didn't deliver? Then how come the Fury X, with its limited VRAM capacity, rivaled cards with way more VRAM with ease? It does work. Mantle/Vulkan? Do you think things like this just happen out of thin air? We all know how widely OpenGL was used. Well, Vulkan is now OpenGL. And it's just one single thing now, no separation of mobile and desktop like with OpenGL. When Android and Linux fully adopt Vulkan, it'll be widely used even on Windows. I mean, just look at DX12. It's essentially based on what AMD created. Adoption is slow, but it's happening, and for the large part it's beneficial. People always seem to praise NVIDIA and toss AMD around, disregarding the fact that AMD was and still is a major innovator. Tessellation? AMD invented that with TruForm. Normal map compression? AMD innovated that with the 3Dc normal map compression algorithm, now widely used by everyone because games with normal maps are now the norm. They used to be rather exotic, with horrible artefacting when using algorithms meant for regular texture maps.

People keep dissing AMD, but AdoredTV quite nicely predicted this with the "AMD's master plan" video. AMD is releasing and rolling out seemingly incoherent innovations that don't contribute much to their current state, but long term, it's a stream pouring on their mill. And their mill only.
AMD APUs in laptops and consoles are now the norm, giving AMD priority with developers since they are literally developing for its hardware; their Mantle essentially replaced the whole of OpenGL, so to speak; DX12 followed Mantle and its fundamental ideas; Ryzen is pushing multicore CPUs on the masses; and Infinity Fabric enables massively parallel systems on the cheap, with it soon finding its way into GPUs. AMD is creating a massive ecosystem that might not benefit them now, or in 3 years, but it'll benefit them greatly long term as they gain more and more control of the industry. They may not have a massive GPU market share on PC from graphics cards. But do you know how many consoles are sold? Both PlayStation and Xbox carry AMD tech. And Nintendo as well, though I'm frankly not following them much, so I'm unsure. Things like this matter too. And they matter a lot.
Posted on Reply
#69
RejZoR
Vayra86Come on now, AMD is not implementing a new architecture; they are updating their existing design just like they did with Tonga. Saying all-new and revolutionary in PowerPoint slides a few dozen times doesn't make it so. Just like NVIDIA built on Kepler, AMD is working from a solid foundation called GCN. That's all.
Because a whole new rasterizer, new vertex unit, and new compute unit somehow don't warrant "massive upgrade"? Then what does? Replacing the GPU with a hamster wheel? O_o A small upgrade is when you fiddle with the tessellator like they did between Hawaii and Tonga. But when you compare the Fury X, or even the newer Polaris, to Vega, it's like day and night.
Posted on Reply
#70
Supercrit
hapkimanTypically - A manufacturer tries to cut corners and save money wherever they can for a base model or reference card
On the contrary, AMD cards have all had some pretty beefy power delivery on reference models. The blower fan and heatsink combo is there for aesthetics; it looks reserved and professional.
Posted on Reply
#71
the54thvoid
Intoxicated Moderator
RejZoRBoth, PlayStation and Xbox carry AMD tech. And Nintendo as well though I'm frankly not following them much so I'm unsure. Things like this matter too. And they matter a lot.
The new Switch uses an old NVIDIA Maxwell Tegra chip. AMD lost that one.
Posted on Reply
#72
EarthDog
RejZoRPeople keep saying that. Yet old GCN cards still perform really well under DX12 and Vulkan. HBM, you want to say it didn't deliver? Then how come the Fury X, with its limited VRAM capacity, rivaled cards with way more VRAM with ease? It does work. Mantle/Vulkan? Do you think things like this just happen out of thin air? We all know how widely OpenGL was used. Well, Vulkan is now OpenGL. And it's just one single thing now, no separation of mobile and desktop like with OpenGL. When Android and Linux fully adopt Vulkan, it'll be widely used even on Windows. I mean, just look at DX12. It's essentially based on what AMD created. Adoption is slow, but it's happening, and for the large part it's beneficial. People always seem to praise NVIDIA and toss AMD around, disregarding the fact that AMD was and still is a major innovator. Tessellation? AMD invented that with TruForm. Normal map compression? AMD innovated that with the 3Dc normal map compression algorithm, now widely used by everyone because games with normal maps are now the norm. They used to be rather exotic, with horrible artefacting when using algorithms meant for regular texture maps.

People keep dissing AMD, but AdoredTV quite nicely predicted this with the "AMD's master plan" video. AMD is releasing and rolling out seemingly incoherent innovations that don't contribute much to their current state, but long term, it's a stream pouring on their mill. And their mill only.
AMD APUs in laptops and consoles are now the norm, giving AMD priority with developers since they are literally developing for its hardware; their Mantle essentially replaced the whole of OpenGL, so to speak; DX12 followed Mantle and its fundamental ideas; Ryzen is pushing multicore CPUs on the masses; and Infinity Fabric enables massively parallel systems on the cheap, with it soon finding its way into GPUs. AMD is creating a massive ecosystem that might not benefit them now, or in 3 years, but it'll benefit them greatly long term as they gain more and more control of the industry. They may not have a massive GPU market share on PC from graphics cards. But do you know how many consoles are sold? Both PlayStation and Xbox carry AMD tech. And Nintendo as well, though I'm frankly not following them much, so I'm unsure. Things like this matter too. And they matter a lot.
Those other things you mentioned... cool beans. But that isn't where performance and price are being affected, which, let's be real here, is the milkshake that brings all the boys to the yard.

So far, their mill is spinning with them pouring their own water on it, but many of the catches on the mill were leaking like a sieve. AMD isn't a company that can afford to keep looking too far ahead into the future, IMO. They need to deliver better results now, like with Ryzen... finally in the ballpark on IPC... but again, the core thing: people swore up and down that quad core was where it's at around the Q6600 days... and here we are, only now just BARELY recommending 4c/8t CPUs for a rig lasting 3+ years. Seems like the same with Vega... 375 W and behind a 1080 Ti performance-wise (likely), and way behind in power use... wasn't HBM supposed to lower power use (it did... but... yeah...)? Those are the big things people hold on to. Again, the things you describe are neat, but in the scope of this thread, people seem to care more about the items I mentioned.

Many eggs are in few baskets, and their gambles don't seem to have paid off yet... Ryzen is great because it's cheaper, not because it has more cores or is faster. Vulkan is great in the few titles that use it, with not a lot of adoption happening. HBM isn't needed until 4K, which 0.86% of people use (according to Steam)... and even there, it isn't needed; GDDR5X is doing a more than serviceable job. So again, ahead of its time, and I don't see many payoffs.
Posted on Reply
#73
RejZoR
No, they are exactly in the position to look long term. If they'll forever look at everything short term, they'll remain this peasant nobody. If they look long term and care about short term, they might get somewhere.
Posted on Reply
#74
RejZoR
the54thvoidThe new Switch uses an old NVIDIA Maxwell Tegra chip. AMD lost that one.
They lost the smallest console player. Besides, Nintendo was never the "POWAH" player, so it's even less relevant for them, even though every extra bit of market share counts.
Posted on Reply
#75
EarthDog
RejZoRIf they'll forever look at everything short term, they'll remain this peasant nobody. If they look long term and care about short term, they might get somewhere.
This part I agree with... and in essence, it is what I am saying. The first part I deleted. No, I don't agree they are in that position yet. So far, where it counts, they have looked too far ahead and things haven't panned out.
Posted on Reply