400 watts and only 7K CUDA cores... also, people arguing that the 980 Ti could OC need to realize that 99.99% of people don't OC their GPU.
System Name | Ryzen Reflection |
---|---|
Processor | AMD Ryzen 9 5900x |
Motherboard | Gigabyte X570S Aorus Master |
Cooling | 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi |
Memory | Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v |
Video Card(s) | Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz |
Storage | WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2) |
Display(s) | Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | LG 24" IPS 1440p |
Case | Lian Li PC-011D XL | Custom cables by Cablemodz |
Audio Device(s) | FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic |
Power Supply | Seasonic Prime Ultra Platinum 850 |
Mouse | Razer Viper v2 Pro |
Keyboard | Corsair K65 Plus 75% Wireless - USB Mode |
Software | Windows 11 Pro 64-Bit |
> Don't tell us to shut up about it if you can't handle proper criticism.

It would help if the criticism also came from people who knew a thing or two about what's going on in the semiconductor industry currently. But no, it's coming from people who have zero clue about the limitations currently being hit by chip designers, or about the never-ending demand for higher performance.
> If you are not going to READ and understand context, take your own advice.

Power numbers on TPU are with RT disabled.
Processor | Intel Core i5 4590 |
---|---|
Motherboard | Gigabyte Z97x Gaming 3 |
Cooling | Intel Stock Cooler |
Memory | 8GiB(2x4GiB) DDR3-1600 [800MHz] |
Video Card(s) | XFX RX 560D 4GiB |
Storage | Transcend SSD370S 128GB; Toshiba DT01ACA100 1TB HDD |
Display(s) | Samsung S20D300 20" 768p TN |
Case | Cooler Master MasterBox E501L |
Audio Device(s) | Realtek ALC1150 |
Power Supply | Corsair VS450 |
Mouse | A4Tech N-70FX |
Software | Windows 10 Pro |
Benchmark Scores | BaseMark GPU: 250 points on HD 4600
> Doing Hardware raytracing while running non raytraced games!!!

Not quite. Ampere has a high TDP because it is doing hardware real-time raytracing, which is a VERY complex and compute-heavy type of task. When RTRT is not being performed, Ampere GPUs are good on power. Turing is/was no different. AMD's RTRT functionality is no different: turn on raytracing and power usage takes a big bump.
Card Name/GPU Name | Manufacturing Node | FP32 TFLOPS | Power (TBP)
---|---|---|---
MI50 (Vega20) | TSMC 7nm | 13.3 | 300W
MI100 (Arcturus) | TSMC 7nm | 23.1 | 300W
Tesla V100 (GV100) | TSMC 12nm | 14.13 | 250W (up to 300W version)
A100 (GA100) | TSMC 7nm | 19.5 | 300W (up to 500W version)
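To put the table in perf-per-watt terms, here's a minimal sketch (Python, numbers taken straight from the rows above) that computes FP32 GFLOPS per watt at the base TBPs. It's FP32-only, which is the comparison the post is making:

```python
# FP32 efficiency from the table above: TFLOPS / TBP, printed as GFLOPS/W.
# Uses the base TBP figures, not the "up to" variants.
cards = {
    "MI50 (Vega20)":      (13.3,  300),
    "MI100 (Arcturus)":   (23.1,  300),
    "Tesla V100 (GV100)": (14.13, 250),
    "A100 (GA100)":       (19.5,  300),
}

for name, (tflops, watts) in cards.items():
    print(f"{name}: {tflops / watts * 1000:.1f} GFLOPS/W")
```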
> A highly binned A4000 will be efficient, the same way the Vega Pro 64 is more efficient than the consumer version. The original Vega can run on 150-180W when undervolted. So can we say that Vega was efficient and all Nvidia fans who dump on Vega are dishonest propagandists?

It's not that simple. Just look at the A4000. A big part of efficiency just depends on the performance you target in terms of clock speed and the yields you target. Higher clock speeds mean less efficiency. It's possible bad yields resulted in the 3000 series being relatively inefficient.
Just look at how efficient the A4000 is: less power than a 6600 XT while performance is equal to a 3060 Ti. Just the result of a bigger chip with lower clocks, possibly binned for low voltages.
Nvidia RTX A4000 Review (www.techspot.com)
Today we're taking a look at the Nvidia RTX A4000, but this isn't a member of the rumored next-gen GeForce RTX 4000 series, but rather an Ampere-based...
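The perf-per-watt gap is easy to quantify: if the A4000 really matches a 3060 Ti (as the post and linked review claim) at its 140W board power versus the 3060 Ti's 200W, the efficiency advantage is just the power ratio. A minimal sketch, using the spec-sheet board powers:

```python
# Equal performance at lower board power means perf/W scales as the
# inverse power ratio. Board powers are the official spec-sheet TDPs.
a4000_w = 140       # RTX A4000
rtx3060ti_w = 200   # RTX 3060 Ti

print(f"A4000 perf/W advantage: {rtx3060ti_w / a4000_w:.2f}x")  # ~1.43x
```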
Lexluthermeister is trying to say that with RT enabled on both Ampere and RDNA2 based cards, Nvidia is more efficient because of the much higher performance. The problem is that neither of these architectures' power numbers are measured with RT enabled. He/she is conjecturing without measurements. So in lieu of actual measurements with RT enabled, we have TPU's perf/W numbers. From the latest 6950 XT review:

Doing Hardware raytracing while running non raytraced games!!!
So you are telling me that TPU was able to run the same ray-traced test to measure power on a GTX 1630?? You need to read the TPU review methodology properly. If you're too lazy to read it, it says: "Gaming: Cyberpunk 2077 is running at 2560x1440 with Ultra settings and ray tracing disabled. We ensure the card is heated up properly, which ensures a steady-state result instead of short-term numbers that won't hold up in long-term usage."
NVIDIA GeForce RTX 3090 Ti Founders Edition Review (www.techpowerup.com)
The GeForce RTX 3090 Ti Founders Edition is NVIDIA's mightiest card from the Ampere lineup. We previously looked at various custom designs. Today, we're checking out the Founders Edition to test how well it does in terms of heat and noise, and whether it's an alternative to the even more...
No matter how Nvidia fanboys try to spin it, Ampere is inefficient. Let me give you an example.

Here we can see Vega20 to Arcturus: same node, with a new hardware matrix unit and 1.73x the FP32 performance at the same TBP. Whereas GV100 to GA100 went from TSMC's 12nm to 7nm (a node transition worth roughly 60% lower power), yet delivered only 1.38x the FP32 performance with 50W more power. If Ampere were efficient, the TBP would have been lower or the same. But that did not happen: the TBP rose, which means one thing, Ampere is not efficient.
Card Name/GPU Name | Manufacturing Node | FP32 TFLOPS | Power (TBP)
---|---|---|---
MI50 (Vega20) | TSMC 7nm | 13.3 | 300W
MI100 (Arcturus) | TSMC 7nm | 23.1 | 300W
Tesla V100 (GV100) | TSMC 12nm | 14.13 | 250W (up to 300W version)
A100 (GA100) | TSMC 7nm | 19.5 | 300W (up to 500W version)
A highly binned A4000 will be efficient, the same way the Vega Pro 64 is more efficient than the consumer version. The original Vega can run on 150-180W when undervolted. So can we say that Vega was efficient and all Nvidia fans who dump on Vega are dishonest propagandists?
Edit: Looks like @Beertintedgoggles already pointed out the power consumption testing part.
> There is clearly a barrier neither Nvidia nor AMD can pass through; they cannot increase performance in a meaningful way in the two-year cycle without going crazy on power draw. No point in beating the dead horse.

For the top end, maybe, but there's really no technological limitation stopping nVidia from making low-to-mid-range cards "wider but slower" (as they are already doing with mobile chips). AMD certainly weren't held back in making the RX 6600 an almost-RTX 3060 (180W) with the TDP of a GTX 1660 (120W), then pricing it like a 130W RTX 3050; much of that comes from hitting the sweet spot rather than going beyond it.

All four of the past nVidia GPUs I've owned undervolted to 0.85-0.90V (from 1.05V) while retaining stock frequency. Reduce that a little to 1700-1800MHz and the voltage (and TDP) just falls away. Make the chips a little wider and you've gained back what you lost in frequency, without adding anywhere near as much TDP or voltage back on. They do this all the time with mobile chips.

The "limitation" stopping them from doing more of this on desktop is entirely 'political': they simply don't want to tell overly complacent stockholders that the low-hanging fruit has all been picked, and that they may have to accept 5% lower margins (on 5-10% larger die sizes) to make a 50% better product, in case those stockholders start voting against executive bonuses...
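The arithmetic behind "wider but slower" is first-order dynamic power scaling: P is roughly proportional to C * V^2 * f. A rough sketch under that assumption (the voltages and clocks are the ballpark figures from the post; the +12% width is a made-up illustrative number):

```python
# First-order dynamic power: P ~ width * V^2 * f. Leakage and memory power
# are ignored; this only shows the shape of the tradeoff, not real silicon.
def rel_power(width: float, volts: float, f_ghz: float) -> float:
    return width * volts**2 * f_ghz

def rel_perf(width: float, f_ghz: float) -> float:
    return width * f_ghz  # assume performance ~ units * clock

narrow_fast = (1.00, 1.05, 1.90)  # stock-ish: narrow die, high V/f
wide_slow   = (1.12, 0.85, 1.70)  # +12% units, undervolted, lower clock

perf = rel_perf(wide_slow[0], wide_slow[2]) / rel_perf(narrow_fast[0], narrow_fast[2])
power = rel_power(*wide_slow) / rel_power(*narrow_fast)
print(f"perf:  {perf:.2f}x")   # ~1.00x: same performance...
print(f"power: {power:.2f}x")  # ~0.66x: ...at roughly two-thirds the power
```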
You're trying to dismiss his leaks with those tweets, but that first tweet is actually pretty accurate. The only thing he didn't know was the SM layout and that the CUDA core count per SM had doubled. He predicted the raw specs of Ampere more than a year before launch.

> @AleksandarK
> Rumors before the 30-series launch were underestimating the CUDA count by a large number.
> It's likely they're overestimating performance this time to reach an equilibrium.
> The only impressive leak was the cooler, which wasn't from kopite7kimi anyway.
> Look at his numbers:
> View attachment 256640
> He predicted half the CUDA cores for the 3090: leaked 5248 vs. the real 10496.
> His 4352 for the 3080 (the same core count as the 2080 Ti had) is less than half the actual 8960.
> His leak is exactly half for the 3070 and 3070 Ti.
> And here are his crazy 20GB rumors about the 3080 Ti:
> View attachment 256643
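The consistent factor of two in those numbers isn't a coincidence: Ampere kept roughly the SM counts the leaks described but doubled the FP32 lanes per SM, so the marketed "CUDA core" count doubled overnight. A quick check against the public SM counts (82 SMs on the 3090, 46 on the 3070):

```python
# Turing counted 64 FP32 lanes per SM; Ampere advertises 128, because its
# second datapath can run FP32 instead of INT32, doubling the "core" figure.
def cuda_cores(sms: int, fp32_per_sm: int) -> int:
    return sms * fp32_per_sm

print(cuda_cores(82, 64))   # 5248  - the leaked 3090 figure, Turing-style counting
print(cuda_cores(82, 128))  # 10496 - the actual marketed 3090 count
print(cuda_cores(46, 128))  # 5888  - RTX 3070, exactly double the leaked half
```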
I got a 980 Ti from Asus with 1500 MHz on the core, then I bought a GTX 1080 at 2078 MHz, and there was only a 5% difference, but a huge difference in watts.

The GTX 1080 was 25% faster comparing OC versions, and 20% comparing OC to OC. And it did that with 10% fewer transistors (7200M vs. 8000M), with the die shrinking from 600mm² to 300mm², and an impressive 59% improvement between the FE and the OC-vs-OC comparison.

Now the 4070 Ti has more transistors, more L2 and more ROPs, but is cut to a 192-bit bus with the same-speed G6X memory. No improvement there; bandwidth is cut to less than half.
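For what it's worth, the Pascal jump is easy to sanity-check: taking the post's 25% OC-vs-OC figure together with the public transistor counts (GP104 about 7.2B, GM200 about 8.0B), performance per transistor improved by roughly 40%. A quick sketch:

```python
# Perf per transistor, GTX 1080 (GP104) vs. GTX 980 Ti (GM200), using the
# post's +25% OC-vs-OC performance figure and public transistor counts.
perf_ratio = 1.25
transistors_1080, transistors_980ti = 7.2e9, 8.0e9

gain = perf_ratio / (transistors_1080 / transistors_980ti)
print(f"perf/transistor gain: {gain:.2f}x")  # ~1.39x
```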
Processor | AMD Ryzen 9 5900X ||| Intel Core i7-3930K |
---|---|
Motherboard | ASUS ProArt B550-CREATOR ||| Asus P9X79 WS |
Cooling | Noctua NH-U14S ||| Be Quiet Pure Rock |
Memory | Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz |
Video Card(s) | MSI GTX 1060 3GB ||| MSI GTX 680 4GB |
Storage | Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB |
Display(s) | Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24" |
Case | Fractal Design Define 7 XL x 2 |
Audio Device(s) | Cambridge Audio DacMagic Plus |
Power Supply | Seasonic Focus PX-850 x 2 |
Mouse | Razer Abyssus |
Keyboard | CM Storm QuickFire XT |
Software | Ubuntu |
> Rumors before the 30-series launch were underestimating the CUDA count by a large number. It's likely they're overestimating performance this time to reach an equilibrium.

That's because the vast majority of "leaks" are not leaks at all, just random nobodies on Reddit, Twitter, or YouTube pulling numbers out of thin air.
> That better not be true. 256-bit mem bus or NVidia can suck a d... duck, yes, suck a duck.

It may not be as bad as you think, if the L2 cache increased by as much as the leaks indicate, or more.
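The intuition for why a big L2 can offset a narrower bus, as a toy model: only cache misses have to go out to DRAM, so a higher on-chip hit rate directly shrinks the bandwidth the bus must supply. The hit rates below are made-up illustrative figures, not Ada measurements:

```python
# Toy model: effective DRAM traffic = requested bandwidth * miss rate.
# Hit rates are illustrative placeholders, not measured figures.
def dram_traffic_gbs(requested_gbs: float, l2_hit_rate: float) -> float:
    return requested_gbs * (1.0 - l2_hit_rate)

print(dram_traffic_gbs(600, 0.25))  # small L2: 450 GB/s must come from DRAM
print(dram_traffic_gbs(600, 0.55))  # big L2:   270 GB/s must come from DRAM
```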
> Samsung's 8nm is quite a good node, much better than Nvidia fanboys like to tell everyone. Ampere has a high TDP because Ampere is inefficient.

Nvidia fanboys bashing Samsung's 8nm? I was under the impression that they claim it's the best thing since sliced bread, because how could they say anything otherwise? The fact is that this node was never meant for high-power, complex chips like Ampere. It was meant for low-power smartphone SoCs like the one in my S10e, which is based on this same 8nm process. Samsung did scale it up, but it was never going to beat TSMC's 7nm in terms of efficiency.
> That better not be true. 256-bit mem bus or NVidia can suck a d... duck, yes, suck a duck.

Focusing just on the bus width is pure ignorance.
> Focusing just on the bus width is pure ignorance.

In the '80s and early '90s, bits were all the rage for gaming consoles, meaning CPU register width at the time; 16 bits of power!
System Name | Bragging Rights |
---|---|
Processor | Atom Z3735F 1.33GHz |
Motherboard | It has no markings but it's green |
Cooling | No, it's a 2.2W processor |
Memory | 2GB DDR3L-1333 |
Video Card(s) | Gen7 Intel HD (4EU @ 311MHz) |
Storage | 32GB eMMC and 128GB Sandisk Extreme U3 |
Display(s) | 10" IPS 1280x800 60Hz |
Case | Veddha T2 |
Audio Device(s) | Apparently, yes |
Power Supply | Samsung 18W 5V fast-charger |
Mouse | MX Anywhere 2 |
Keyboard | Logitech MX Keys (not Cherry MX at all) |
VR HMD | Samsung Oddyssey, not that I'd plug it into this though.... |
Software | W10 21H1, barely |
Benchmark Scores | I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000. |
> Everyone always assumes Samsung's process sucked just because Nvidia OCed their parts too high, to dominate said graphs out of the box. Sub-2GHz, the 8nm node is great.

Even sub-2GHz there's a lot of efficiency on the table. This PC in my living room has a 3060 in it.
What I wouldn't give to find one of those A2000s to upgrade my SFF box.
> What I wouldn't give to find one of those A2000s to upgrade my SFF box.

Are you limited to low-profile? If not, just buy a dirt-cheap 3060 and run it at the lowest power limit you can. This Palit goes down to 55%, which ends up drawing a fraction over 90W board power. With the patience to undervolt and tune, you can probably get >1700MHz boost clocks from under 100W.
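If you want to try the power-limit route without any vendor software, the standard nvidia-smi CLI that ships with the driver can do it. A minimal sketch (the 100W value is just an example; the allowed range depends on the card's VBIOS, and it needs admin/root rights):

```python
# Query the VBIOS-allowed power-limit range, then cap board power.
# nvidia-smi ships with the NVIDIA driver; run with admin/root rights.
import subprocess

# Print current/min/max/default power limits.
subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)

# Cap the board to 100 W (must fall inside the reported range).
subprocess.run(["nvidia-smi", "-pl", "100"], check=True)
```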
> Even sub-2GHz there's a lot of efficiency on the table.
> <snip>
> Comparing 60% against stock, it's 87% of the performance for only 62% of the total board power.
> Comparing 60% against an OC, it's 86% of the performance for only 58% of the total board power.

I like the idea; I might try it out when I build an HTPC.

> Undervolting is a little more "scary" though. Is it really worth the risk of crashing during gameplay or movies?

I haven't even bothered undervolting yet. It's inaudible under load set to 75% in Afterburner, and I'm only losing 5% of the stock performance for that privilege.
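Those quoted numbers translate directly into a perf-per-watt gain; a one-line check using the performance and power fractions exactly as given above:

```python
# perf/W gain = performance fraction / power fraction, from the quoted figures.
print(f"{0.87 / 0.62:.2f}x perf/W vs. stock at the 60% limit")  # ~1.40x
print(f"{0.86 / 0.58:.2f}x perf/W vs. the OC")                  # ~1.48x
```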
System Name | M3401 notebook |
---|---|
Processor | 5600H |
Motherboard | NA |
Memory | 16GB |
Video Card(s) | 3050 |
Storage | 500GB SSD |
Display(s) | 14" OLED screen of the laptop |
Software | Windows 10 |
Benchmark Scores | 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
> Here in mainland Europe, electricity expenses went up 200 to 300%.

Where is that "mainland Europe" where the electricity price tripled, please? The average EU price was somewhere in the 25-30 cent area; in which country would you pay 75-90 cents for a kilowatt-hour???
> Rumors before the 30-series launch were underestimating the CUDA count by a large number.

NV simply claimed things had double the number of shaders the cards really had, just because they could do FP+FP.
I like the idea; I might try it out when I build an HTPC.
(Now with supplies getting better, there could be opportunities to get a good card at a discount.)
But I do wonder, though: how does this affect frame rate consistency?
Undervolting is a little more "scary" though. Is it really worth the risk of crashing during gameplay or movies?
I would think that with 25% of the TDP shaved off, the cooler should easily be capable of cooling a card fairly silently, even if it were a higher-TDP card than this.
> Where is that "mainland Europe" where the electricity price tripled, please? The average EU price was somewhere in the 25-30 cent area; in which country would you pay 75-90 cents for a kilowatt-hour???

30 cents. If it used to be 10 cents, then that has indeed tripled. 30 was pretty high before; now it's the norm.
> But I do wonder, though: how does this affect frame rate consistency?

It doesn't, at least not in a negative way.
3070 = 256-bit, 4070 = 192-bit; likely the next-gen 5070 will be 128-bit. Disappointing.
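For scale, peak bandwidth is just bus width times effective memory speed, so a narrower bus can still end up with more raw bandwidth if the memory is faster. A minimal sketch with the public figures (14Gbps GDDR6 on the 3070, 21Gbps GDDR6X on the 4070 Ti):

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * Gbps per pin.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 14))  # RTX 3070:    448 GB/s
print(bandwidth_gbs(192, 21))  # RTX 4070 Ti: 504 GB/s, despite the narrower bus
```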
> 30 cents. If it used to be 10 cents, then that has indeed tripled. 30 was pretty high before; now it's the norm.

I was on a fixed tariff of 16.6p per kWh during lockdown. Currently I'm paying 33p per kWh, and it's about to go up again by 20% in October, so in the space of a couple of years I'll have seen a 2.4x increase.

I can, for example, say that I now pay the same or more during the summer as I used to pay during the winter in the coldest months.
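To put those tariffs in GPU terms, a quick sketch of what a 400W card costs to run (the three hours a day of gaming is an arbitrary example):

```python
# Monthly cost of running a 400 W card, at several per-kWh prices.
def monthly_cost(watts: float, hours_per_day: float, price_per_kwh: float) -> float:
    return watts / 1000 * hours_per_day * 30 * price_per_kwh

for price in (0.10, 0.30, 0.50):
    print(f"{price:.2f}/kWh: {monthly_cost(400, 3, price):.2f}/month")
# 0.10/kWh: 3.60/month; 0.30/kWh: 10.80/month; 0.50/kWh: 18.00/month
```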
> You got called out on an inaccurate statement.

No I didn't. You are not reading the material correctly or understanding the context.
> Reviews here on TPU show the power measurement numbers with ray tracing disabled.

On that one review, for that specific game. For everything else, cards are tested with RTRT on. Why? Because Cyberpunk 2077 is the new Crysis: it will bring any system it runs on to its knees, and with the way W1zzard usually conducts testing, it would bring the frame rates to a crawl, which would interfere with the power usage results. So for that ONE game, RTRT is turned off.
> Doing Hardware raytracing while running non raytraced games!!!
> So you are telling me that TPU was able to run the same ray-traced test to measure power on a GTX 1630??

Wow. Just wow.
> It may not be as bad as you think, if the L2 cache increased by as much as the leaks indicate, or more.

I don't care. 256-bit or they can take a flying leap.
> Focusing just on the bus width is pure ignorance.

Rendering an opinion without historical context and technological understanding is unadulterated ignorance.