
ASUS Launches Single-Fan RTX 3060 12GB Phoenix Graphics Card

Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
AMD and Nvidia have ideas, but ARM/Qualcomm are light years ahead of them.
PS. Scaling GPU CU counts is not as bad as scaling CPU core counts, because GPU operations are simpler and more easily parallelized across many CUs.
The variety of tasks assigned to the CPU makes it difficult to scale and reduces efficiency, because the CPU also handles a lot of single-threaded work.
But my interest in this discussion is in GPUs, not CPUs.
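The scaling contrast being argued here can be sketched with Amdahl's law. A minimal Python illustration; the serial fractions (1% for a GPU-style embarrassingly parallel workload, 25% for a mixed desktop CPU workload) are purely illustrative assumptions, not measured figures:

```python
# Amdahl's law: speedup on n parallel units is capped by the serial fraction.
# GPU-style workloads (tiny serial fraction) keep scaling as CUs are added;
# CPU-style mixed workloads (larger serial fraction) flatten out quickly.

def amdahl_speedup(serial_fraction: float, n_units: int) -> float:
    """Ideal speedup on n_units parallel units for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

for n in (8, 64, 512):
    gpu_like = amdahl_speedup(0.01, n)   # ~1% serial work (assumed, shader-style)
    cpu_like = amdahl_speedup(0.25, n)   # ~25% serial work (assumed, mixed load)
    print(f"{n:4d} units: GPU-like {gpu_like:6.1f}x, CPU-like {cpu_like:5.1f}x")
```

Even at 512 units the "CPU-like" case never reaches 4x, while the "GPU-like" case is still scaling past 80x, which is the intuition behind wide CU counts being easier to exploit than wide core counts.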
I don't think the person you quoted mentioned CPUs whatsoever...

And, again, if ARM and Qualcomm could scale their GPUs up to much larger sizes without sacrificing efficiency, why haven't they done so? That would allow them entry into huge and very lucrative markets like consoles, gaming PCs, etc. Of course none of these come close to the sales volumes of smartphones, but smartphones also have near zero margins.

You're assuming they have some kind of magical technology that simply doesn't exist, as you're not taking into account the inherent efficiency that comes from designing for a small maximum size and overall limited layout. Smaller designs will always be more efficient than larger designs. Period. There's nothing saying that any current mobile GPU maker could match AMD or Nvidia at the 150-250W range, except maybe Apple. But given the drastic differences between mobile GPUs in power delivery, size and thus internal interconnects, VRAM interfaces and bus widths, thread/workload allocation, driver complexity, etc., etc., etc., there's no way of knowing until one of them tries.
 
Joined
Sep 1, 2020
Messages
2,358 (1.52/day)
Location
Bulgaria
And, again, if ARM and Qualcomm could scale their GPUs up to much larger sizes without sacrificing efficiency, why haven't they done so?
They don't scale them up because they make them for phones and tablets, which have a much smaller power budget than GPUs in PC graphics cards.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
They don't scale them up because they make them for phones and tablets, which have a much smaller power budget than GPUs in PC graphics cards.
... That isn't a logical statement. "They make it for A" in no way precludes them from also making it for B. It's not like they have exclusivity agreements in place with... uh, the entire mobile industry. You're arguing that they have much better GPU tech and much more efficient architectures. If that were true, they could scale these up and make massive amounts of money from new markets with relatively small investments; the architectures already exist, after all. The flaw in your argument, which is understandable as it's by no means self-evident, is that they don't scale up their designs because doing so would be expensive, difficult, and maybe not even possible: what is an efficient design at very small sizes might not be efficient at all at bigger sizes. You're the one making a new claim here, that mobile GPUs could scale up to beat desktop/server GPUs, so the burden of proof is on you. And sadly you won't be able to prove it, as this isn't as simple as you're making it out to be.
 
Joined
Sep 1, 2020
Messages
2,358 (1.52/day)
Location
Bulgaria
If that were true, they could scale these up and make massive amounts of money from new markets
Maybe they have a gentleman's agreement from their school/student years to divide the market? Desktop, laptop and workstation for the owners of Intel, AMD and Nvidia; other consumer devices for the ARM companies?
 
Joined
Oct 28, 2012
Messages
1,190 (0.27/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
... That isn't a logical statement. "They make it for A" in no way precludes them from also making it for B. It's not like they have exclusivity agreements in place with... uh, the entire mobile industry. You're arguing that they have much better GPU tech and much more efficient architectures. If that were true, they could scale these up and make massive amounts of money from new markets with relatively small investments; the architectures already exist, after all. The flaw in your argument, which is understandable as it's by no means self-evident, is that they don't scale up their designs because doing so would be expensive, difficult, and maybe not even possible: what is an efficient design at very small sizes might not be efficient at all at bigger sizes. You're the one making a new claim here, that mobile GPUs could scale up to beat desktop/server GPUs, so the burden of proof is on you. And sadly you won't be able to prove it, as this isn't as simple as you're making it out to be.
There's a simple rule I've decided to apply to myself: if I can think of something that people who are actual experts in a domain couldn't think of, it means there's something blocking it that my lack of knowledge can't grasp.

The only time big companies don't make an obvious business move is when the move isn't lucrative enough to bother with (like Windows' glaring UI issues; people still buy and use it anyway), or when they can't see a market's true potential. But they will jump on anything lucrative.
Maybe they have a gentleman's agreement from their school/student years to divide the market? Desktop, laptop and workstation for the owners of Intel, AMD and Nvidia; other consumer devices for the ARM companies?
A gentleman's agreement? In the tech industry, where everyone is constantly low-kicking everyone else when they aren't looking? :confused: ARM is also going beyond the consumer market; Qualcomm and Huawei are selling AI/general-compute cards for the datacenter.
 
Joined
Sep 1, 2020
Messages
2,358 (1.52/day)
Location
Bulgaria
A gentleman's agreement? In the tech industry, where everyone is constantly low-kicking everyone else when they aren't looking? :confused: ARM is also going beyond the consumer market; Qualcomm and Huawei are selling AI/general-compute cards for the datacenter.
Yes, why not? I didn't write anything about AI. It didn't exist when they were students, so it can't be part of the agreed-upon areas.
 
Joined
Oct 28, 2012
Messages
1,190 (0.27/day)
Yes, why not? I didn't write anything about AI. It didn't exist when they were students, so it can't be part of the agreed-upon areas.
mmmh I see. Have a nice day.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
Maybe they have a gentleman's agreement from their school/student years to divide the market? Desktop, laptop and workstation for the owners of Intel, AMD and Nvidia; other consumer devices for the ARM companies?
Yeah, no, that's not how the tech industry works. Rather, it's intensely competitive, with the big fish eating the small fish at every chance they get. Just look at the long history of acquisitions and mergers for the companies you mentioned.

Nvidia backed out of smartphones and ARM SoCs because of the intense competition and high price of taking part - developing those SoCs is very expensive, and having their own GPU IP wasn't enough of an advantage to keep them in the game (anticompetitive moves from QC also reportedly played a large part in this, with the technologically superior Tegra 4 barely being adopted at all). ARM, on the other hand, is expanding rapidly into the server/datacenter space; after about five years of trying and failing, they're now truly gaining ground (and are highly competitive in terms of peak performance). Check out AnandTech's recent server reviews for some info there. ARM and QC are also trying to get into the laptop market with Windows on ARM (WOA). Intel spent billions trying to get into the tablet and smartphone spaces, but ultimately lost out simply because their Atom CPU cores weren't competitive. AMD has an active ARM licence and has previously tried making an ARM server core (discontinued as it wasn't very good and Ryzen turned out to be a great success). And so on, and so on.

There are no non-compete agreements, just the realities of what is feasible and what is lucrative. The server accelerator market is certainly lucrative enough that anyone with a GPU or GPU-like chip IP could make massive amounts of money if they could make their product suitable for that. So if QC, ARM, PowerVR, or anyone else had a GPU design that could unproblematically scale up to the 200-300W range while maintaining the efficiency advantage they have in the 2-5W range, they would. As it stands, the cost of doing so would be massive for them as it would essentially necessitate a ground-up rearchitecting of their GPU architectures, and there's no guarantee whatsoever that they would be able to compete with what Nvidia and AMD are currently selling. So they don't bother. It would be a very, very, very expensive and risky gamble.
 
Joined
Sep 1, 2020
Messages
2,358 (1.52/day)
Location
Bulgaria
the cost of doing so would be massive for them
Mmm, there are many ways to justify the cost. But my opinion is that they could begin with just one GPU model for a graphics card with a price tag in the sweet spot for consumers. Maybe that's somewhere between today's budget and mid-range cards in performance. If they succeed in making a card priced like an RTX 3050 Ti, with power consumption like a GTX 1650, teraflops like an RTX 3070... and a manufacturing cost like a GT 1030 :D
Is that impossible, or just a naughty comparison on my part?
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
Mmm, there are many ways to justify the cost. But my opinion is that they could begin with just one GPU model for a graphics card with a price tag in the sweet spot for consumers. Maybe that's somewhere between today's budget and mid-range cards in performance. If they succeed in making a card priced like an RTX 3050 Ti, with power consumption like a GTX 1650, teraflops like an RTX 3070... and a manufacturing cost like a GT 1030 :D
Is that impossible, or just a naughty comparison on my part?
Yeah, that's not happening. Not only do none of them have the technology for that, but it would be a poor investment. If anything like this were to happen, they would target the server/datacenter markets first, not consumer gaming. They might expand to gaming after establishing a foothold in server/datacenter, but only if they could stomach the investment needed to ensure driver compatibility with thousands and thousands of games. That would require a massive software development team with highly specialized skills and several years of development at the very least; at that point the hardware would already be obsolete. Hardware+driver combinations are a constantly moving target, and one that's extremely difficult to approach if starting from little or nothing. Compute is much, much more straightforward, and would as such be the only way to begin. The fact that margins in those markets are much higher obviously also helps - you can easily sell the same silicon for 2-3x the consumer-equivalent price in enterprise server/datacenter markets, after all. That none of these actors have started pitching future server compute accelerators tells us that their GPUs aren't likely to scale up easily - if it were easy, they would be looking to cash in on a booming and enormously lucrative market.
 
Joined
Nov 25, 2019
Messages
141 (0.08/day)
The highest-TDP low-profile cards to date have been 75W, and those were still double-wide.
Palit GTS 450 (106W) and PowerColor HD 5750 (86W) would like to have words with you.
(attached photos of the Palit GTS 450 and PowerColor HD 5750 cards)
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.88/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
@Logoffon that Palit card at the top is just so collectible. :cool:
 
Joined
Feb 20, 2019
Messages
8,298 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Palit GTS 450 (106W) and PowerColor HD 5750 (86W) would like to have words with you.
Neat! I wasn't aware anyone had made a half-height card with a 6-pin connector, since the chances of having an ATX PSU with dedicated GPU connectors in a Flex-ATX or custom half-height case were vanishingly small.

Always happy to be proved wrong, but I still don't think they're going to get a 170W cooler into the space constraints of a half-height card. I suspect the increased thermal density of Samsung's 8nm node might actually make it harder: a 106W card built on a 40nm process will be easier to cool than a 106W card built on Samsung's 8nm, all else being equal.
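The thermal-density point can be put in rough numbers. A back-of-the-envelope sketch, assuming approximate die areas (~238 mm² for the GTS 450's GF106, ~276 mm² for the RTX 3060's GA106; both figures are assumptions for illustration) and the TDPs mentioned in the thread:

```python
# Rough power-density comparison between an old 40 nm card and a modern
# 8 nm card at similar board-level TDPs. Die areas are approximate and
# assumed for illustration only; hotspot distribution is ignored.

def power_density(tdp_w: float, die_mm2: float) -> float:
    """Average heat flux over the die, in W/mm^2."""
    return tdp_w / die_mm2

gts450 = power_density(106, 238)   # GF106, 40 nm, ~238 mm^2 (assumed)
rtx3060 = power_density(170, 276)  # GA106, Samsung 8 nm, ~276 mm^2 (assumed)

print(f"GTS 450:  {gts450:.2f} W/mm^2")
print(f"RTX 3060: {rtx3060:.2f} W/mm^2")
```

Under these assumed numbers the newer card concentrates roughly a third more heat per square millimetre, before even considering hotspots, which supports the intuition that equal wattage is harder to cool on a denser node.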
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
Neat! I wasn't aware anyone had made a half-height card with a 6-pin connector, since the chances of having an ATX PSU with dedicated GPU connectors in a Flex-ATX or custom half-height case were vanishingly small.

Always happy to be proved wrong, but I still don't think they're going to get a 170W cooler into the space constraints of a half-height card. I suspect the increased thermal density of Samsung's 8nm node might actually make it harder: a 106W card built on a 40nm process will be easier to cool than a 106W card built on Samsung's 8nm, all else being equal.
Yep, as I said earlier in the thread, somewhere around 120W is likely to be the feasible maximum for cooling with a balls-to-the-wall HHHL GPU. Using something like an XT60 power connector with an included PCIe adapter would even allow for a noticeable increase in fin area given the chunk taken away by the PCIe power connector :p

But given the prevalence of SFF cases supporting full-size GPUs these days, it's highly unlikely for anyone to make a GPU like this. Too bad, really.
 