Sunday, March 28th 2021

ASUS Launches Single-Fan RTX 3060 12GB Phoenix Graphics Card

ASUS has recently launched its first Ampere-series Phoenix card, the GeForce RTX 3060 Phoenix (PH-RTX3060-12G). The Phoenix features a 2.5-slot design with a single fan and measures just 17.7 x 12.8 x 5.1 cm, which makes it the shortest Ampere GPU from ASUS. The card runs at NVIDIA's standard 1777 MHz boost clock but can be configured for 1807 MHz with the bundled ASUS software. The Phoenix includes three DisplayPort 1.4a connectors and one HDMI 2.1 port, along with a single 8-pin power connector. The card is now available to purchase from select retailers, but official pricing and availability have not been announced.
Source: ASUS

38 Comments on ASUS Launches Single-Fan RTX 3060 12GB Phoenix Graphics Card

#26
TumbleGeorge
Valantar: And, again, if ARM and Qualcomm could scale their GPUs up to much larger sizes without sacrificing efficiency, why haven't they done so?
They don't scale it up because they make these GPUs for phones and tablets, which have a much smaller power budget than the GPUs in PC graphics cards.
Posted on Reply
#27
Valantar
TumbleGeorge: They don't scale it up because they make these GPUs for phones and tablets, which have a much smaller power budget than the GPUs in PC graphics cards.
... That isn't a logical statement. "They make it for A" in no way precludes them from also making it for B. It's not like they have exclusivity agreements in place with... uh, the entire mobile industry. You're arguing that they have much better GPU tech and much more efficient architectures. If that were true, they could scale these up and make massive amounts of money in new markets with relatively small investments - the architectures exist already, after all. The issue with your argument - an understandable one, as it's by no means self-explanatory - is that the real reason they don't scale up their designs is that doing so would be expensive, difficult, and maybe not even possible: what is an efficient design at very small sizes might not be efficient at all at bigger sizes. You're the one making a new claim here - that mobile GPUs could scale up to beat desktop/server GPUs - so the burden of proof is on you. And sadly you won't be able to prove that, as it isn't as simple as you're making it out to be.
Posted on Reply
#28
TumbleGeorge
Valantar: If that was true, they could then scale these up and make massive amounts of money from new market
Maybe they have a gentleman's agreement from their school/student years to divide the market? Desktop, laptop, and workstation for the owners of Intel, AMD, and Nvidia; other consumer devices for the ARM companies?
Posted on Reply
#29
dyonoctis
Valantar: ... That isn't a logical statement. "They make it for A" in no way precludes them from also making it for B. It's not like they have exclusivity agreements in place with... uh, the entire mobile industry. You're arguing that they have much better GPU tech and much more efficient architectures. If that were true, they could scale these up and make massive amounts of money in new markets with relatively small investments - the architectures exist already, after all. The issue with your argument - an understandable one, as it's by no means self-explanatory - is that the real reason they don't scale up their designs is that doing so would be expensive, difficult, and maybe not even possible: what is an efficient design at very small sizes might not be efficient at all at bigger sizes. You're the one making a new claim here - that mobile GPUs could scale up to beat desktop/server GPUs - so the burden of proof is on you. And sadly you won't be able to prove that, as it isn't as simple as you're making it out to be.
There's a simple rule that I've decided to apply to myself: if I can think of something that people who are actual experts in a domain couldn't think of, then there's probably something blocking it that my lack of knowledge can't grasp.

The only time big companies don't make an evident business move is when the move isn't lucrative enough to bother with (like Windows' glaring UI issues - people are still buying and using it anyway) or when they can't see the true potential of a market. But they will jump on anything lucrative.
TumbleGeorge: Maybe they have a gentleman's agreement from their school/student years to divide the market? Desktop, laptop, and workstation for the owners of Intel, AMD, and Nvidia; other consumer devices for the ARM companies?
A gentleman's agreement? In the tech industry, where everyone is constantly low-kicking while the other isn't looking? :confused: ARM is also going beyond the consumer market; Qualcomm and Huawei are selling AI/general-compute cards for the datacenter.
Posted on Reply
#30
TumbleGeorge
dyonoctis: A gentleman's agreement? In the tech industry, where everyone is constantly low-kicking while the other isn't looking? :confused: ARM is also going beyond the consumer market; Qualcomm and Huawei are selling AI/general-compute cards for the datacenter.
Yes, why not? I didn't write anything about AI - it didn't exist when they were students, so it couldn't be part of the agreed-upon areas.
Posted on Reply
#31
dyonoctis
TumbleGeorge: Yes, why not? I didn't write anything about AI - it didn't exist when they were students, so it couldn't be part of the agreed-upon areas.
mmmh I see. Have a nice day.
Posted on Reply
#32
Valantar
TumbleGeorge: Maybe they have a gentleman's agreement from their school/student years to divide the market? Desktop, laptop, and workstation for the owners of Intel, AMD, and Nvidia; other consumer devices for the ARM companies?
Yeah, no, that's not how the tech industry works. Rather, it's intensely competitive, with the big fish eating the small fish at every chance they get. Just look at the long history of acquisitions and mergers among the companies you mentioned.

Nvidia backed out of smartphones and ARM SoCs because of the intense competition and high price of taking part - developing those SoCs is very expensive, and having their own GPU IP wasn't enough of an advantage to keep them in the game (anticompetitive moves from QC also reportedly played a large part in this, with the technologically superior Tegra 4 barely being adopted at all). ARM, on the other hand, is expanding rapidly into the server/datacenter space; after about five years of trying and failing, they're now truly gaining ground (and are highly competitive in terms of peak performance). Check out AnandTech's recent server reviews for some info there. ARM and QC are also trying to get into the laptop market with WOA. Intel spent billions trying to get into the tablet and smartphone spaces, but ultimately lost out simply because their Atom CPU cores weren't competitive. AMD has an active ARM licence and has previously tried making an ARM server core (discontinued as it wasn't very good and Ryzen turned out to be a great success). And so on, and so on.

There are no non-compete agreements, just the realities of what is feasible and what is lucrative. The server accelerator market is certainly lucrative enough that anyone with a GPU or GPU-like chip IP could make massive amounts of money if they could make their product suitable for that. So if QC, ARM, PowerVR, or anyone else had a GPU design that could unproblematically scale up to the 200-300W range while maintaining the efficiency advantage they have in the 2-5W range, they would. As it stands, the cost of doing so would be massive for them as it would essentially necessitate a ground-up rearchitecting of their GPU architectures, and there's no guarantee whatsoever that they would be able to compete with what Nvidia and AMD are currently selling. So they don't bother. It would be a very, very, very expensive and risky gamble.
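The "efficient at 2-5W doesn't mean efficient at 200-300W" point can be sketched with a toy calculation. Everything below is an illustrative assumption (the `perf` function, the 0.6 exponent, and the 20W "knee" are invented for the sketch, not measured data); it just shows how sub-linear performance scaling erodes perf/W as power rises:

```python
# Toy model: performance scales sub-linearly with power past a "knee",
# so perf/W that looks great at phone-class power collapses at
# accelerator-class power. All numbers are hypothetical.

def perf(power_w: float, base_eff: float = 10.0, knee_w: float = 20.0) -> float:
    """Performance in arbitrary units, with diminishing returns
    (exponent < 1) as power rises relative to the knee."""
    return base_eff * knee_w * (power_w / knee_w) ** 0.6

for watts in (3, 50, 250):
    units = perf(watts)
    print(f"{watts:>4} W: {units:7.1f} units, {units / watts:5.2f} units/W")
```

Under these made-up numbers, efficiency at 3W is several times higher than at 250W, even though absolute performance keeps climbing - which is the shape of the problem a scaled-up mobile GPU would face.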
Posted on Reply
#33
TumbleGeorge
Valantar: the cost of doing so would be massive for them
Mmm, there are many ways the cost could be justified. But my opinion is that they could begin with just one GPU model for a graphics card, with a price tag in the sweet spot for consumers. Maybe somewhere between today's budget and mid-range cards in performance. If they succeed in making a card priced like an RTX 3050 Ti, with power consumption like a GTX 1650, teraflops like an RTX 3070... and a manufacturing cost like a GT 1030 :D
Not possible, or just a naughty example I wrote for comparison?
Posted on Reply
#34
Valantar
TumbleGeorge: Mmm, there are many ways the cost could be justified. But my opinion is that they could begin with just one GPU model for a graphics card, with a price tag in the sweet spot for consumers. Maybe somewhere between today's budget and mid-range cards in performance. If they succeed in making a card priced like an RTX 3050 Ti, with power consumption like a GTX 1650, teraflops like an RTX 3070... and a manufacturing cost like a GT 1030 :D
Not possible, or just a naughty example I wrote for comparison?
Yeah, that's not happening. Not only does none of them have the technology for that, but it would be a poor investment. If anything like this were to happen, they would target the server/datacenter markets, not consumer gaming first. They might expand to gaming after establishing a foothold in server/datacenter, but only if they could stomach the investment needed to ensure driver compatibility with thousands and thousands of games. This would require a massive software development team with highly specialized skills and several years of development at the very least - at which point the hardware would already be obsolete. Hardware+driver mixes are a constantly moving target, and one that's extremely difficult to come close to when starting from little or nothing. Compute is much, much more straightforward, and would as such be the only way to begin. That margins in those markets are much higher obviously also helps - you can easily sell the same silicon for 2-3x the consumer-equivalent price in enterprise server/datacenter markets, after all. That none of these actors have started pitching future server compute accelerators tells us that their GPUs aren't likely to scale up easily - if it were easy, they would be looking to cash in on a booming and enormously lucrative market.
Posted on Reply
#35
Logoffon
Chrispy_: The highest-TDP low-profile cards to date have been 75W, and those were still double-wide.
Palit GTS 450 (106W) and PowerColor HD 5750 (86W) would like to have words with you.


Posted on Reply
#36
qubit
Overclocked quantum bit
@Logoffon that Palit card at the top is just so collectible. :cool:
Posted on Reply
#37
Chrispy_
Logoffon: Palit GTS 450 (106W) and PowerColor HD 5750 (86W) would like to have words with you.
Neat! I wasn't aware anyone had made a half-height card with a 6-pin connector, as the chances of having an ATX PSU with dedicated GPU connectors in a Flex-ATX or custom half-height case were vanishingly small.

Always happy to be proved wrong, but I still don't think they're going to get a 170W cooler into the space constraints of a half-height card. I suspect the increased thermal density of Samsung's 8nm might actually make it harder, so a 106W card on a 40nm process will be easier to cool than a 106W card on Samsung's 8nm, all else being equal.
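The thermal-density argument boils down to simple arithmetic: the same wattage over a smaller die means a higher W/mm². In the sketch below, the ~238 mm² figure for the 40nm GF106 (GTS 450) is a commonly cited die size, while the 120 mm² figure for a hypothetical 8nm die is purely an assumption for illustration:

```python
# Same board power, smaller die -> higher average heat flux (W/mm^2),
# which is harder for a compact half-height cooler to spread and dissipate.

def power_density(watts: float, die_mm2: float) -> float:
    """Average heat flux across the die in W/mm^2."""
    return watts / die_mm2

watts = 106                              # GTS 450 power from the thread
old_node = power_density(watts, 238.0)   # GF106 on 40 nm, ~238 mm^2
new_node = power_density(watts, 120.0)   # hypothetical 8 nm die (assumed)

print(f"40 nm: {old_node:.2f} W/mm^2")   # ~0.45 W/mm^2
print(f" 8 nm: {new_node:.2f} W/mm^2")   # ~0.88 W/mm^2
```

Roughly double the heat flux under these assumed areas, which is why process shrinks can make cooling harder even at constant total power.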
Posted on Reply
#38
Valantar
Chrispy_: Neat! I wasn't aware anyone had made a half-height card with a 6-pin connector, as the chances of having an ATX PSU with dedicated GPU connectors in a Flex-ATX or custom half-height case were vanishingly small.

Always happy to be proved wrong, but I still don't think they're going to get a 170W cooler into the space constraints of a half-height card. I suspect the increased thermal density of Samsung's 8nm might actually make it harder, so a 106W card on a 40nm process will be easier to cool than a 106W card on Samsung's 8nm, all else being equal.
Yep, as I said earlier in the thread, somewhere around 120W is likely to be the feasible maximum for cooling with a balls-to-the-wall HHHL GPU. Using something like an XT60 power connector with an included PCIe adapter would even allow for a noticeable increase in fin area given the chunk taken away by the PCIe power connector :P

But given the prevalence of SFF cases supporting full-size GPUs these days, it's highly unlikely for anyone to make a GPU like this. Too bad, really.
Posted on Reply