AMD's Initial Production Run of Radeon VII Just 5,000 Pieces, Company Denies It

How can Vega be bandwidth limited? It has the same memory bandwidth as the GTX 1080 Ti and more than the RTX 2080. It still lacks tiled rendering, but that shouldn't make this much of a difference.

Nvidia does more color compression, I believe. And in any case, it just is.
 
Nvidia does more color compression, I believe. And in any case, it just is.
"It just is"? :rolleyes:
People claim GCN is starved of memory bandwidth, computational performance and fillrate (ROPs), but neither is true. GCN have plenty of memory bandwidth and computational performance compared to their Pascal and Turing counterparts, and fillrate is sufficient.

We all know what the problem with GCN is; utilization of resources. We have been over this many times before. The efficiency gap between Nvidia and AMD have increased with every generation since the initial GCN, and is also the cause of AMD's thermal problems, as they have to throw more and more "brute force" resources at it to achieve performance. If Vega had close to the level of resource utilization of Turing, it would have been able to compete, even without 7nm.

AMD is not going to make significant improvements until they have a new architecture. Still, Navi can do smaller improvements, like low hanging fruit such as tiled rendering. Tiled rendering would help memory bandwidth a little bit, but much more importantly it eases the GPU's task of analyzing resource dependencies, which is one of key causes of GCN's problems. So in the end, AMD might get greater benefits from tiled rendering than Nvidia.
 
So I am not sure how reliable GamersNexus is, but this is what they say:

Speaking with Buildzoid, we know that Vega: Frontier Edition’s 16GB HBM2 pulls 20W max, using a DMM to determine this consumption. This ignores the voltage controller’s 3.3v draw, but we’re still at 20W memory, and no more than an additional 10W for the controller – that’s less than 30W for the entire memory system on Vega: Frontier Edition.
We also know that an RX 480 uses 40-50W for its 8GB, which is already a significant increase in power consumption per-GB over Vega: FE. The RX 480 also has a memory bandwidth of 256GB/s with 8GB GDDR5, versus Vega 64's 484GB/s. The result is increased bandwidth, the same capacity, and lower power consumption, but at higher cost to build. In order for an RX 480 to hypothetically reach similar bandwidth, power consumption would increase significantly. Buildzoid calculates that a hypothetical 384-bit GDDR5 bus on Polaris architecture would push 60-75W, and an imaginary 512-bit bus would do 80-100W. For this reason alone, HBM2 saves AMD from a high power budget that would otherwise be spent solely on memory. This comes down to architectural decisions made years ago by AMD, which are most readily solved for with HBM2, as HBM2 provides greater bandwidth per watt than GDDR5. HBM is effectively a necessity to make Vega at least somewhat power efficient while keeping the higher memory bandwidth. Imagine Vega 56, 64, or FE drawing an additional 70-100W – the world wouldn't have it, and it'd be among the hottest cards since the GTX 480 or R9 290X.
https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it

Seems that 30W per 8GB of RAM could be reasonable; could be more... could be less.
Good lawd.... if that is true, those GPUs are horrifically inefficient. Wow...

Production is the only problem. It saves space/power, so it's 100% the future unless someone wants to fund another type.
Too bad it seems to cost more...? I thought that was something they said would be cheaper?

Either way, I don't see it as the future until others adopt it and its bandwidth becomes worthwhile for the masses.
 
Good lawd.... if that is true, those GPUs are horrifically inefficient. Wow...

Too bad it seems to cost more...? I thought that was something they said would be cheaper?

Either way, I don't see it as the future until others adopt it and its bandwidth becomes worthwhile for the masses.

Nvidia has used it in every card that matters since it became available lol

"It just is"? :rolleyes:
People claim GCN is starved of memory bandwidth, computational performance and fillrate (ROPs), but neither is true. GCN have plenty of memory bandwidth and computational performance compared to their Pascal and Turing counterparts, and fillrate is sufficient.

We all know what the problem with GCN is; utilization of resources. We have been over this many times before. The efficiency gap between Nvidia and AMD have increased with every generation since the initial GCN, and is also the cause of AMD's thermal problems, as they have to throw more and more "brute force" resources at it to achieve performance. If Vega had close to the level of resource utilization of Turing, it would have been able to compete, even without 7nm.

AMD is not going to make significant improvements until they have a new architecture. Still, Navi can do smaller improvements, like low hanging fruit such as tiled rendering. Tiled rendering would help memory bandwidth a little bit, but much more importantly it eases the GPU's task of analyzing resource dependencies, which is one of key causes of GCN's problems. So in the end, AMD might get greater benefits from tiled rendering than Nvidia.

Cry all you want. OCing HBM yields gains. It's not like I've done it myself on multiple cards or anything....
 
I honestly wish they would ditch HBM already as it doesn't seem to be helping much in the consumer/gaming realm
I might agree, but AMD has invested in the professional HPC markets of AI and deep learning, which are huge growth markets. For the past several years AMD has said they're not focused on high-end gaming (slow growth). That being said, with what they actually have to offer right now, they aren't losing out all that badly. They had to juggle resources toward where they had the best return, and that change of course, and the execution, allows this product. Given that the "focus" was never to offer such products, I think it keeps them alive, but yes, Navi can't come soon enough.

My comment comes from this originally:
There might be some 56CU/3584SP parts they could harvest; wouldn't those be interesting if they turned them into Nanos? 15% above a V64 @ 175W TDP for $550, we can dream...
If you've got lemons, you make lemonade.
 
I didn't know NVIDIA had any cards with HBM.... what are you saying here?


Nvidia uses it for a reason in HPC...

Also, if both of you claim you're smarter than Buildzoid.... feel free to make videos about it.
 
That's fine... keep it in the professional markets where it's more useful. It isn't doing much at all for gaming and consumers.

Nvidia uses it for a reason in HPC...
ahhh... different scope in HPC. ;)

I thought we were talking consumer and gaming... don't move the goalposts!
 
For the simple reason that GDDR memory controllers wider than 384-bit become too complex.

Now, you're just making up excuses lol

That's fine... keep it in the professional markets where it's more useful. It isn't doing much at all for gaming and consumers.

ahhh... different scope in HPC. ;)

I thought we were talking consumer and gaming... don't move the goalposts!

I'm not, you're both trying to deny facts. They wouldn't use it if it weren't better, because it costs more. Nvidia is all about lower margins. Saving power on more than 8GB is a necessity (assuming they had the PCB space for 32GB now).

We all know it's only a matter of time before it's on CPUs and used for more than VRAM.
 
Lol, what am I going against BZ about? Nobody mentioned HPC until you did. But it's a different conversation to me, considering HPC parts aren't used by the normal consumers here, which is what the discussion was centering around. ;)

I just think it has no business in the consumer/gaming segment for the reasons listed... what it excels in doesn't mean much at all for gaming.
 
For the simple reason that GDDR memory controllers wider than 384-bit become too complex.
Now, you're just making up excuses lol
You just obliterated my last shred of respect for you, well done.

GDDR memory controllers become more challenging with newer GDDR memory standards. As GDDR pushes beyond GDDR6 with higher and higher effective clocks, I assume even 384-bit controllers will become challenging at some point.
 
Simple: the price of the Radeon VII. If it were $499, I doubt he would pull that stuff. On YouTube you can see more of his negative opinions. Obviously he has the right to his own view.
This is little more than nit-picking because real-world performance numbers are not public.
 
You just obliterated my last shred of respect for you, well done.

GDDR memory controllers become more challenging with newer GDDR memory standards. As GDDR pushes beyond GDDR6 with higher and higher effective clocks, I assume even 384-bit controllers will become challenging at some point.

No, no one needs to make them bigger. We all know you just use multiple. Try harder for excuses.

And since you were too lame to fall for the bait about die size increasing from more mem controllers...well there ya go. Another win for HBM.
 
The efficiency gap between Nvidia and AMD has increased with every generation
Oh, FFS, come on!!!

580 vs 1060: similar chip size, similar perf, vastly more FLOPS PER DIE AREA on AMD's side.
If AMD were "as efficient", the 580 would be beating the 1080.

AMD simply crams more CUs per die area than Nvidia; both have their reasons.

They wouldn't use it if it weren't better, because it costs more.
They couldn't know what would come out of it before it came out of development.
And once it did, I doubt it's so easy to jump from memory type to memory type unless it was planned upfront.

Oh, and given Huang's claim that Volta development ate 4 billion in R&D, that thing never paid off.
 
No, no one needs to make them bigger. We all know you just use multiple. Try harder for excuses.
Both Nvidia and AMD have used 512-bit GDDR memory controllers in the past, but with more advanced memory it gets to a point where HBM becomes advantageous vs. giant GDDR controllers, which is why Nvidia uses HBM when they have to and GDDR otherwise.
 
Here's something: there's been conversation about the cost of HBM2, but what is the percentage difference between it and GDDR6 presently? Some are saying GDDR6 is also running above original estimates compared to top-shelf GDDR5. I'd venture that HBM2 might be about 15% more... as of today.

Also, as AMD sells the complete interposer package, isn't their bottom line showing a "final sale" price for the assembly? Wouldn't selling a complete, tested package carry a better profit margin than selling just a GPU chip to an AIB? AMD removes risk from the AIBs with a deliverable package, and in a way streamlines the AIBs' production process, which has "value" too. AMD might have a financial advantage in getting more cash flow from these high-end markets.
 
Oh, FFS, come on!!!

580 vs 1060: similar chip size, similar perf, vastly more FLOPS PER DIE AREA on AMD's side.
If AMD were "as efficient", the 580 would be beating the 1080.

AMD simply crams more CUs per die area than Nvidia; both have their reasons.


They couldn't know what would come out of it before it came out of development.
And once it did, I doubt it's so easy to jump from memory type to memory type unless it was planned upfront.

Oh, and given Huang's claim that Volta development ate 4 billion in R&D, that thing never paid off.

It is true, though: it took AMD until Polaris to do a minor catch-up, but Pascal destroyed that entirely, and now they're clocking Polaris way out of its comfort zone, and we're looking at a 15-20% TDP gap right now between maxed-out Vegas and 1080 Tis. Meanwhile, the performance gap has also increased. That, I think, is better efficiency across generations; not sure what else to call it. Raw FLOPS do not translate into useful performance; you're saying so yourself with the 580 example.
 
Raw FLOPS do not translate into useful performance
I don't know who you're talking to.

I was addressing the "FLOPS efficiency" myth. These are different architectures; the transistors that Nvidia didn't spend cramming in more CUs went somewhere. Had there been no compromise, the 480 would be on the 1080's level, and that's a 1.5-times-bigger chip.
 

But now, with a 4096-bit wide bus, couldn't that provide more bandwidth even using 8GB of HBM2? I would think getting another 15-18% over a Vega 64's 483.8 GB/s is not that difficult, given that the Radeon VII runs at as much as 1200 MHz (2400 MHz effective), which is more than the Vega 64's 945 MHz (1890 MHz effective). A 1100 MHz clock (2200 MHz effective) would be more than enough to see an overall performance uptick of 15%.

So using this bandwidth calculator I get 1126 GB/s, or about 10% off the Radeon VII's 1229 GB/s. I think that's enough...
http://gpubandwidthcalculator.totalh.net/?i=1

I think you need 4 stacks of HBM2 to get that bandwidth. Like I said, it won't be that cost-effective for them to scale it down, so I don't see the point, especially when Navi is a few months away from launch with GDDR6.

"It just is"? :rolleyes:
People claim GCN is starved of memory bandwidth, computational performance and fillrate (ROPs), but neither is true. GCN have plenty of memory bandwidth and computational performance compared to their Pascal and Turing counterparts, and fillrate is sufficient.

We all know what the problem with GCN is; utilization of resources. We have been over this many times before. The efficiency gap between Nvidia and AMD have increased with every generation since the initial GCN, and is also the cause of AMD's thermal problems, as they have to throw more and more "brute force" resources at it to achieve performance. If Vega had close to the level of resource utilization of Turing, it would have been able to compete, even without 7nm.

AMD is not going to make significant improvements until they have a new architecture. Still, Navi can do smaller improvements, like low hanging fruit such as tiled rendering. Tiled rendering would help memory bandwidth a little bit, but much more importantly it eases the GPU's task of analyzing resource dependencies, which is one of key causes of GCN's problems. So in the end, AMD might get greater benefits from tiled rendering than Nvidia.

I agree with most of what you said. But Vega absolutely scales with higher HBM2 speeds. It's a fact; I have personally witnessed it. It feeds the architecture, and the GCN shaders seem to scale better with higher HBM2 speeds.
I agree that AMD can't get tile-based rendering fast enough. That alone could instantly make their cards 20-30% more power efficient. Nvidia did it first with Maxwell, and that was their secret sauce for big efficiency gains. AMD really needs to get on it lol! Even Intel is going to have it with their discrete GPUs, I think. I do think Navi might be the first one to use a tile-based rasterizer. Well, at least I hope so. Maybe that is one of the reasons it was reported to be better than they had hoped.
 
I don't know who you're talking to.

I was addressing the "FLOPS efficiency" myth. These are different architectures; the transistors that Nvidia didn't spend cramming in more CUs went somewhere. Had there been no compromise, the 480 would be on the 1080's level, and that's a 1.5-times-bigger chip.

So who are you talking to, then? Your response is literally the same as the one you quoted, just worded differently...

Those CUs went nowhere; they are sitting there waiting for new input while guzzling power. That is what's going on. Efficiency is efficiency. You've made that into 'FLOPS' efficiency, but the vast majority is simply talking about perf/watt. It was never a good idea, ever, anywhere, to compare GPUs based on FLOPS. Sure, it's nice marketing, I suppose, for those who are clueless, but it says nothing. It's a bit like describing the mileage of a car by saying 'the tank contains 40 litres' without any additional info. Or comparing GHz across different architectures to measure performance.

You say 'had there been no compromise' as if AMD can simply 'not compromise' and make a faster chip. If that were the case, I think they'd have done so five to eight years ago. They already tried their hand at efficiency / 'less compromise', and what they've got is an archaic boost technology, some weak delta compression, and some slightly handicapped GPGPU. I mean, they don't even bother buying into GDDR5X or -6 as of yet. It's easier to compromise on HBM o_Oo_Oo_O There is so much low-hanging fruit here, it's unreal, and it's nearly rotting away on the tree already.

Your saying 'had there been no compromise' really means 'had AMD used the exact same architecture'... well, no shit, Sherlock!
 
I agree with most of what you said. But Vega absolutely scales with higher HBM2 speeds. It's a fact; I have personally witnessed it.
Just because you can find an edge case where increasing one attribute gives a little improvement doesn't mean the product is bottlenecked by it. Some workloads are always going to be slightly more memory-intensive, some have more geometry, etc. Memory speed is not the primary thing holding Vega back, and there is no reason why Vega should need over twice the (theoretical) memory bandwidth of an RTX 2080 to compete with it.

I agree that AMD can't get tile-based rendering fast enough. That alone could instantly make their cards 20-30% more power efficient.
They should, because it's low-hanging fruit and will give a decent gain in certain workloads. But keep in mind that Nvidia only enables it when it's advantageous.
And your 20-30% efficiency gain might be a little on the optimistic side.
 
I think you need 4 stacks of HBM2 to get that bandwidth.
Yeah, they can't splurge on 16GB for a card at $550, though I hadn't heard of 2GB stacks. I wonder if there's a way to take defective 4GB stacks, disable/fuse off half, and still have a functional 2GB per stack.

As for Navi, IDK... as much as we want it, I think we'll be lucky to see a release sometime in Q3.
 