How can Vega be bandwidth limited? It has the same memory bandwidth as GTX 1080 Ti and more than RTX 2080. It still lacks tiled rendering, but that shouldn't make this much of a difference.
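For reference, the bandwidth claim checks out; here's a quick back-of-the-envelope sketch (bus widths and data rates below are my assumptions from public spec sheets, not from this thread):

```python
# Theoretical peak bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s).
# Bus widths and data rates are assumed from public spec sheets.

def bandwidth_gbps(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_rate_gtps

cards = {
    "Vega 64 (HBM2)":       (2048, 1.89),   # 945 MHz, double data rate
    "GTX 1080 Ti (GDDR5X)": (352, 11.0),
    "RTX 2080 (GDDR6)":     (256, 14.0),
}

for name, (width, rate) in cards.items():
    print(f"{name}: {bandwidth_gbps(width, rate):.1f} GB/s")
# Vega 64: 483.8 GB/s, GTX 1080 Ti: 484.0 GB/s, RTX 2080: 448.0 GB/s
```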
"It just is"?Nvidia does more color compression, I believe. And in any case, it just is.
Good lawd.... if that is true those gpus are horrifically inefficient. Wow...
So I am not sure how reliable GamersNexus is, but this is what they say:
Speaking with Buildzoid, we know that Vega: Frontier Edition’s 16GB HBM2 pulls 20W max, using a DMM to determine this consumption. This ignores the voltage controller’s 3.3v draw, but we’re still at 20W memory, and no more than an additional 10W for the controller – that’s less than 30W for the entire memory system on Vega: Frontier Edition.
We also know that an RX 480 uses 40-50W for its 8GB, which is already a significant increase in power consumption per-GB over Vega: FE. The RX 480 also has a memory bandwidth of 256GB/s with 8GB GDDR5, versus Vega 64’s 484GB/s. The result is increased bandwidth, the same capacity, and lower power consumption, but at higher cost to build. In order for an RX 480 to hypothetically reach similar bandwidth, power consumption would increase significantly. Buildzoid calculates that a hypothetical 384-bit GDDR5 bus on Polaris architecture would push 60-75W, and an imaginary 512-bit bus would do 80-100W. For this reason alone, HBM2 saves AMD from high power budget that would otherwise be spent solely on memory. This comes down to architectural decisions made years ago by AMD, which are most readily solved for with HBM2, as HBM2 provides greater bandwidth per watt than GDDR5. HBM is effectively a necessity to make Vega at least somewhat power efficient while keeping the higher memory bandwidth. Imagine Vega 56, 64, or FE drawing an additional 70-100W – the world wouldn’t have it, and it’d be among the hottest cards since the GTX 480 or R9 290X.
https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it
Seems that 30 W per 8 GB of RAM could be reasonable; could be more... could be less.
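Taking the quoted figures at face value, the bandwidth-per-watt gap is easy to put in numbers (a rough sketch; the wattages are the article's estimates, not measurements of mine):

```python
# Bandwidth per watt using only the figures quoted above: ~30 W upper bound
# for Vega FE's HBM2 subsystem, 40-50 W (midpoint 45 W) for the RX 480's GDDR5.

configs = {
    "Vega FE, 16 GB HBM2": (484, 30),  # GB/s, W
    "RX 480, 8 GB GDDR5":  (256, 45),
}

for name, (bw_gbps, watts) in configs.items():
    print(f"{name}: {bw_gbps / watts:.1f} GB/s per watt")
# Vega FE: ~16.1 GB/s/W vs RX 480: ~5.7 GB/s/W -- roughly a 3x advantage,
# which is the "greater bandwidth per watt" point the article makes.
```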
Too bad it seems to cost more...? I thought that was something they said was cheaper?
Production is the only problem. It saves space/power, so it's 100% the future unless someone wants to fund another type.
Good lawd.... if that is true those gpus are horrifically inefficient. Wow...
Too bad it seems to cost more...? I thought that was something they said was cheaper?
Either way, I don't see it as the future until others adopt it and its bandwidth becomes worthwhile for the masses.
"It just is"?
People claim GCN is starved of memory bandwidth, computational performance and fillrate (ROPs), but none of these is true. GCN has plenty of memory bandwidth and computational performance compared to its Pascal and Turing counterparts, and its fillrate is sufficient.
We all know what the problem with GCN is: utilization of resources. We have been over this many times before. The efficiency gap between Nvidia and AMD has increased with every generation since the initial GCN, and it is also the cause of AMD's thermal problems, as they have to throw more and more "brute force" resources at it to achieve performance. If Vega had close to the level of resource utilization of Turing, it would have been able to compete, even without 7nm.
AMD is not going to make significant improvements until they have a new architecture. Still, Navi can make smaller improvements, like low-hanging fruit such as tiled rendering. Tiled rendering would help memory bandwidth a little, but much more importantly it eases the GPU's task of analyzing resource dependencies, which is one of the key causes of GCN's problems. So in the end, AMD might get greater benefits from tiled rendering than Nvidia did.
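For anyone unfamiliar with what tiled rendering actually does, here's a toy sketch of the binning step (purely illustrative; no actual driver or GPU works like this Python):

```python
# Toy illustration of the binning step in tiled rendering: triangles are
# assigned to the screen tiles their bounding boxes overlap, so each tile
# can later be rasterized entirely out of a small on-chip buffer.
from collections import defaultdict

TILE = 32  # tile size in pixels; real GPUs size this to fit on-chip memory

def bin_triangles(triangles, width, height):
    """triangles: list of 3-vertex (x, y) tuples. Returns {(tx, ty): [triangle ids]}."""
    bins = defaultdict(list)
    for tri_id, verts in enumerate(triangles):
        xs = [v[0] for v in verts]
        ys = [v[1] for v in verts]
        # Conservative bounding-box test: the triangle goes into every tile
        # its (screen-clamped) bounding box overlaps.
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins[(tx, ty)].append(tri_id)
    return dict(bins)

tris = [[(5, 5), (60, 10), (20, 50)], [(100, 100), (120, 140), (90, 130)]]
print(bin_triangles(tris, 256, 256))
# Triangle 0 lands in tiles (0,0),(1,0),(0,1),(1,1); triangle 1 in (2,3),(3,3),(2,4),(3,4).
# Each tile is then processed out of on-chip memory instead of repeatedly
# touching DRAM, which is where the bandwidth saving comes from.
```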
I didn't know NVIDIA had any cards with HBM.... what are you saying here?
Nvidia has used it in every card that matters since it became available lol
I might agree, but AMD has invested in the professional HPC markets of AI and deep learning, which are huge growth markets. For the past several years AMD has said they're not focused on high-end gaming (slow growth). That being said, with what they actually have to offer right now... they aren't losing out, or at least not badly. They had to juggle resources toward where they had the best return, and that course change and execution allows this product. Given that the "focus" was never to offer such products, I think it keeps them alive, but yes, Navi can't come soon enough.
I honestly wish they would ditch HBM already as it doesn't seem to be helping much in the consumer/gaming realm
If you've got lemons, you make lemonade. There might be some 56CU/3584SP parts they could harvest; wouldn't those be interesting if they turned them into Nanos? 15% above a V64 @ 175 W TDP for $550, we can dream...
I didn't know NVIDIA had any cards with HBM.... what are you saying here?
ahhh... different scope in HPC.
Nvidia uses it for a reason in HPC...
For the simple reason that GDDR memory controllers >384-bit become too complex.
Nvidia uses it for a reason in HPC...
For the simple reason that GDDR memory controllers >384-bit become too complex.
That's fine... keep it in the professional markets where it's more useful. It isn't doing much at all for gaming and consumers.
ahhh... different scope in HPC.
I thought we were talking consumer and gaming... don't move the goalposts!
You just obliterated my last shred of respect for you, well done.
Now, you're just making up excuses lol
For the simple reason that GDDR memory controllers >384-bit become too complex.
This is little more than nit-picking because real-world performance numbers are not public.
Simple: the price of the Radeon VII. If it were $499, I doubt he would pull that stuff. On YouTube you can see more of his negative opinions. Obviously he has the right to his own view.
You just obliterated my last shred of respect for you, well done.
GDDR memory controllers become more challenging with newer GDDR memory standards. As GDDR pushes beyond GDDR6 with higher and higher effective clocks, I assume even 384-bit controllers will become challenging at some point.
Oh, FFS, come on!!!
The efficiency gap between Nvidia and AMD has increased with every generation
They couldn't know what would come out of it before it came out of development.
They wouldn't use it if it weren't better, bc it costs more.
Both Nvidia and AMD have used 512-bit GDDR memory controllers in the past, but with more advanced memory it gets to a point where HBM becomes advantageous versus giant GDDR controllers, which is why Nvidia uses HBM when they have to and GDDR otherwise.
No, no one needs to make them bigger. We all know you just use multiple. Try harder for excuses.
Oh, FFS, come on!!!
580 vs 1060: similar chip size, similar perf, vastly more FLOPS PER DIE AREA on AMD's side.
If AMD were "as efficient", the 580 would be beating the 1080.
AMD simply crams more CUs per die area than Nvidia; both have their reasons.
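Putting rough numbers on that (shader counts, boost clocks, and die sizes are my assumptions from public spec sheets, not from this thread):

```python
# FP32 throughput per die area: FLOPS = shader count * 2 (FMA) * clock.
# Spec-sheet numbers, assumed: RX 580 = 2304 SPs @ ~1340 MHz, 232 mm^2;
# GTX 1060 = 1280 CUDA cores @ ~1708 MHz, 200 mm^2.

cards = {
    "RX 580 (Polaris 20)": (2304, 1.340, 232),
    "GTX 1060 (GP106)":    (1280, 1.708, 200),
}

for name, (shaders, clock_ghz, die_mm2) in cards.items():
    tflops = shaders * 2 * clock_ghz / 1000
    print(f"{name}: {tflops:.2f} TFLOPS, {tflops * 1000 / die_mm2:.1f} GFLOPS/mm^2")
# RX 580:   6.17 TFLOPS, 26.6 GFLOPS/mm^2
# GTX 1060: 4.37 TFLOPS, 21.9 GFLOPS/mm^2
# i.e. ~40% more raw FLOPS for roughly the same gaming performance.
```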
They couldn't know what would come out of it before it came out of development.
And once it does, I doubt it's so easy to jump from mem type to mem type, unless it was planned upfront.
Oh, and given Huang's claim that Volta development ate 4 billion of R&D, that thing never paid off.
I don't know who you are talking to.
Raw FLOPS do not translate into useful performance
There is no way that a GTX 480 would ever perform on the level of a 1080 unless they clocked it up by a factor of 10. Or did you mean the RX 480?
480 would be on 1080's levels
But now, with a 4096-bit wide bus, couldn't that provide more bandwidth even using 8 GB of HBM2? I would think getting another 15-18% over a Vega 64's 483.8 GB/s is not that difficult, given the Vega 7 runs at as much as 1200 MHz (2400 MHz effective), versus only 945 MHz (1890 MHz effective) on the Vega 64. A 1100 MHz clock (2200 MHz effective) would be more than enough for an overall performance uptick of 15%.
So using this bandwidth calculator I get 1126 GB/s, or about 10% off the Vega 7's 1229 GB/s. I think that's enough...
http://gpubandwidthcalculator.totalh.net/?i=1
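The calculator's math is simple enough to reproduce (a sketch assuming 1024 bits per HBM2 stack; the clocks are the ones proposed above):

```python
# HBM2 bandwidth = stacks * 1024 bits per stack / 8 * effective data rate.

def hbm2_bandwidth_gbps(stacks: int, effective_mhz: float) -> float:
    """Theoretical peak bandwidth in GB/s, assuming 1024 bits per HBM2 stack."""
    return stacks * 1024 / 8 * effective_mhz / 1000

print(hbm2_bandwidth_gbps(2, 1890))  # Vega 64 (2048-bit):   483.8 GB/s
print(hbm2_bandwidth_gbps(4, 2400))  # 4096-bit @ 1200 MHz: 1228.8 GB/s
print(hbm2_bandwidth_gbps(4, 2200))  # 4096-bit @ 1100 MHz: 1126.4 GB/s, the "1126 GB/s" above
```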
"It just is"?
People claim GCN is starved of memory bandwidth, computational performance and fillrate (ROPs), but none of these is true. GCN has plenty of memory bandwidth and computational performance compared to its Pascal and Turing counterparts, and its fillrate is sufficient.
We all know what the problem with GCN is: utilization of resources. We have been over this many times before. The efficiency gap between Nvidia and AMD has increased with every generation since the initial GCN, and it is also the cause of AMD's thermal problems, as they have to throw more and more "brute force" resources at it to achieve performance. If Vega had close to the level of resource utilization of Turing, it would have been able to compete, even without 7nm.
AMD is not going to make significant improvements until they have a new architecture. Still, Navi can make smaller improvements, like low-hanging fruit such as tiled rendering. Tiled rendering would help memory bandwidth a little, but much more importantly it eases the GPU's task of analyzing resource dependencies, which is one of the key causes of GCN's problems. So in the end, AMD might get greater benefits from tiled rendering than Nvidia did.
I don't know who you are talking to.
I was addressing the "flops efficiency" myth. These are different architectures; those transistors that Nvidia didn't use to cram in more CUs went somewhere. Had there been no compromise, the 480 would be on the 1080's level, a 1.5-times-bigger chip.
Just because you can find an edge case where increasing one attribute gives a little improvement doesn't mean the product is bottlenecked by it. Some workloads are always going to be slightly more memory intensive, some have more geometry, etc. Memory speed is not the primary thing holding Vega back, and there is no reason why Vega should need over twice the (theoretical) memory bandwidth of RTX 2080 to compete with it.
I agree with most of what you said. But Vega absolutely scales with higher HBM2 speeds. It's a fact. I have personally witnessed it.
They should, because it's low-hanging fruit and will give a decent gain in certain workloads. But keep in mind that Nvidia only enables it when it's advantageous.
I agree that AMD can't get tile-based rendering fast enough. That alone could make their cards instantly 20-30% more power efficient.
Yea, they can't splurge on 16 GB for a card at $550, and I hadn't heard of 2 GB stacks. Wonder if there's a way to take defective 4 GB stacks, disable/fuse off half, and still have a functional 2 GB per stack.
I think you need 4 stacks of HBM2 to get that bandwidth.
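A quick sketch of why four stacks is the natural answer (assuming standard 1024-bit-per-stack HBM2; the pin rates are illustrative):

```python
# Per-stack HBM2 bandwidth = 1024 bits / 8 * data rate per pin, so hitting
# ~1 TB/s at realistic pin speeds takes four stacks.

def stacks_needed(target_gbps: float, pin_rate_gtps: float) -> float:
    per_stack_gbps = 1024 / 8 * pin_rate_gtps  # GB/s per 1024-bit stack
    return target_gbps / per_stack_gbps

print(stacks_needed(1024, 2.0))  # 4.0 stacks at 2.0 Gb/s per pin
print(stacks_needed(1024, 2.4))  # ~3.3 stacks even at 2.4 Gb/s per pin
```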