I wonder if there was a miscommunication at some point, given that 749/899 Bulgarian lev is almost exactly $400/$480 at the ~0.54 BGN-to-USD conversion rate.
I say that because, by all other accounts, that's the rumored (base) pricing. Not to say it's true, but it makes sense.
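(Quick sanity check on that math; the ~0.54 USD-per-lev rate is my own rough assumption of the current exchange rate, ignoring any VAT differences:)

```python
# Rough sanity check of the rumored Bulgarian listings vs. the rumored USD base pricing.
# The ~0.54 USD-per-lev rate is an assumption, not an official figure.
usd_per_bgn = 0.54
for bgn in (749, 899):
    print(f"{bgn} BGN ≈ ${bgn * usd_per_bgn:.0f}")
# 749 BGN ≈ $404, 899 BGN ≈ $485 -- close to the rumored $400/$480.
```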
The only way what's implied works is if nVIDIA had priced the RTX 50 series like the original RTX 40 series (in terms of the 80/70 Ti/70).
That was never going to happen IMHO, given it's the same node and the original 4080-and-below pricing was not well received.
Perhaps AMD thought the stack would be priced based on the value of a 24GB GDDR7 5080, which is possible, but by my math even at $1000 that still likely leaves nVIDIA's typical >70% margin.
With 16GB, nVIDIA is making an absolute killing on the 5080, but it also lets them price the cards more 'competitively' while still making a killing, and not destroy every partner they still have left.
FWIW, nVIDIA's typical margin is >70% while AMD's is >40%. Both CEOs have been commended for this.
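To put rough numbers on that (the $300 cost below is purely a made-up illustration, not a known BOM):

```python
# Gross margin = (selling price - cost) / selling price.
# The $300 all-in cost is a hypothetical placeholder, not a real figure.
def gross_margin(price, cost):
    return (price - cost) / price

# A 16GB 5080 sold at $1000 against a ~$300 hypothetical cost still clears 70%.
print(f"{gross_margin(1000, 300):.0%}")  # -> 70%
```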
IMHO, the market doesn't like AMD having over a 50% margin (60% at most), nor (as they correctly surmised from interviews at CES) does it want a $1000 card from them. ~$800, yes, but only if it's good value.
For nVIDIA the sky is the limit, especially given their willingness to skimp on RAM (relative to compute capability), whereas AMD won't do that, and instead sells their cards based on compute OR current RT, whichever is higher.
It's why I feel AMD should be aiming to beat the cut-down 80-class card every generation (with an OC reaching roughly the stock 80), not just chase value with a weird stack where all we know is that both cards will overall be better than the 70.
I personally think they would have been better off with a higher-clocked 7168sp card at <$550 to beat 5070 in value/OCing and a 24Gbps card priced between 5070/5070Ti, but I'm not their marketing team.
Instead we get a card that probably just replaces the 7800 XT/GRE for cheaper, and a card at or slightly below a 7900 XT for cheaper (less RAM, but that's fine given the performance).
This puts them in the pickle of a 3.2GHz+/24Gbps card being worth relatively less, say $600 instead of $650... but they may still price it toward the moon, and then nobody will buy it over the cheaper cards.
I think if they had released something higher-clocked right away, the market would have loved it, as that's really the max performance applicable to 16GB, and it would have made the 5070 Ti/5080 look ridiculous... the same way the 12GB 5070 will against whatever comes next.
Instead we'll probably end up with cheaper/better-value alternatives to the 5070/5070 Ti, and who knows if they'll ever actually release a 5070 Ti/5080 competitor.
I can't get over how weird it is if the 9070 XT is indeed using 20Gbps RAM, although I'll grant that the >20Gbps out-of-the-box OC model hints they may be using something better, given that above-stock memory OC is almost never allowed.
If it's 20Gbps, it literally makes no sense (versus overclocking a 9070 or 7800 XT). It looks like something created by marketing to compare well against a stock 5070, and against a stock 5070 Ti when overclocked, while the 9070 becomes the better value at roughly 5070 performance when OC'd.
That said, if that's what you're after (more than the 12GB limitation of the 5070 and/or cheaper), or you want to avoid the fluff/anemic RAM of the 5080 (whichever your view), I don't think these cards will be a bad deal.
It's cool that AMD is likely going to market at a very good price, and to some people that may be more important, but personally I would've liked them to put up a fight (even if slightly more expensive).
It appears pretty obvious they need to clear stock of N31... but that's not our problem. I also think the 7800 XT was a very good card that's tough to follow up on 4/5nm, but that's also not our problem.
I don't know why AMD does this (I mean, cheaper is good...but they want margin and people want better-performing cards ootb...and the stock clocks don't mesh with this). There is a better way.
I could imagine an 11520sp/256-bit/24GB card on 3nm with a roughly-$500-or-less BOM (so if the price ever drops to $700 there's still a 40% margin), competing with a full ($1000) 6080 OR beating a cut-down 6080 (~$750-800?).
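For what it's worth, "40% margin" there depends on how you count it; a quick sketch with those same round numbers:

```python
# Two ways to read "40% margin" on a hypothetical $700 card with a ~$500 BOM.
price, bom = 700, 500
markup = (price - bom) / bom   # margin over cost (markup)
gross = (price - bom) / price  # margin on the selling price
print(f"markup: {markup:.0%}, gross margin: {gross:.0%}")
# -> markup: 40%, gross margin: ~29%; a true 40% gross margin at $700
#    would need the BOM closer to $420.
```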
If nVIDIA targets slightly better performance (12288sp+?), AMD would be a better value. If nVIDIA targets similar performance, AMD would be a MUCH better value.
If nVIDIA tries to make the 6080 better than really needed for the tier and 6070ti worse (<11520sp/24GB, as is the case with current Ti models), AMD would be the better alternative.
(I also think this design makes sense for a PS6, but with denser libraries, because PlayStations run around threshold voltage/lowest-yield clock to save power/die space and keep cost down.)
To me, for AMD, this is the way.
The part that makes me laugh is when people realize the PS6 will probably be just over the maximum potential of N48 and perhaps often use more than 16GB of RAM on average (a limitation of the 5080).
The current PS5 Pro uses a ~15000/3000MHz (effective) split of memory bandwidth between GPU and CPU. If the PS6 uses a 27000/5000MHz split (32GB @ 32Gbps), it could have a ~4GHz(+) CPU ('c' cores?) and up to 60TF on the GPU.
That would equate to 11264sp (or 5632sp if you're 'that guy'/Cerny) at 3nm threshold voltage/clocks (2664-2671MHz), ~60TF (for power/yields), while the extra bandwidth to the CPU would act similarly to V-Cache.
27Gbps is faster than any N48 could muster, and 60TF is likely more than the core could handle (that'd be close to ~3700MHz), even with a ton of voltage. But, of course, similar compute/RT to a 5080.
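To show where those numbers come from (every figure here is the speculative one from the lines above, and the lane counting follows the "5632sp if you count like Cerny" aside):

```python
# Speculative PS6 back-of-the-envelope math using the figures above.
# FP32 TFLOPS = shader lanes * 2 (FMA) * clock; 11264 counts dual-issue lanes
# (88 CUs * 128), i.e. 5632 "real" SPs if you count the Cerny way.
def tflops(lanes, clock_mhz):
    return lanes * 2 * clock_mhz / 1e6

print(f"{tflops(11264, 2667):.1f} TF")  # ~60.1 TF at ~2.67GHz threshold clocks
print(f"{tflops(8192, 3700):.1f} TF")   # ~60.6 TF -- what N48's 64 CUs would need

# Bandwidth on a 256-bit bus at 32Gbps, split 27/5 between GPU and CPU:
total = 256 / 8 * 32                    # 1024 GB/s total
print(total * 27 / 32, total * 5 / 32)  # ~864 GB/s GPU share, ~160 GB/s CPU share
```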
I think this is the design they'll go for because, if Cerny is the type of guy I think he is, he'll know 11264 (potentially 88 of 90 CUs) is an efficiency wet dream. I think we both think like that (as absolute nerds).
Also, 32gbps will likely be cheap as I don't think many 3nm cards will use it, while yields/pricing should be favorable given all companies can and will make it (on older process lines).
Hell, you already see 28Gbps OC'd to 34Gbps (and gated). That's likely because next-gen will use 36Gbps and higher on the PC market.
I also think an 18GB/14GB GPU/CPU split makes the most sense as an allocation (given current game usage in those areas), perhaps with an expanded subsystem for the OS (4GB?), but it's always possible less than 32GB could be available to devs.
That's why I think this generation is a cash grab... all around. Hold onto your 4090s, though.
The potential of a high-clocked 192-bit/18GB design will probably mesh well against the PS6 (a low-clocked 256-bit design), whereas a high-clocked 256-bit/24GB design would match a 4090 at 'acceptable' enthusiast pricing and last a long time.
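On the "mesh well" part: assuming the 36Gbps PC-market memory mentioned earlier gets paired with those bus widths (my pairing, nothing announced), the raw bandwidth lines up:

```python
# Hypothetical bandwidth comparison; speeds and bus widths are the speculative
# figures from this post, not announced specs.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gbs(192, 36))  # 864 GB/s  -- high-clocked 192-bit/18GB card
print(bandwidth_gbs(256, 32))  # 1024 GB/s -- PS6-style low-clocked 256-bit pool
print(bandwidth_gbs(256, 36))  # 1152 GB/s -- high-clocked 256-bit/24GB card
# The 192-bit card at 36Gbps matches the ~864 GB/s a PS6 GPU would get after
# the CPU's share is carved out of its shared 1024 GB/s.
```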
It's possible they may avoid these designs on 3nm, taking the over/under so people will keep buying graphics cards in future generations...
but I would hope at least one company shoots for that goal with good efficiency/pricing.
N48 and GB203 (16GB) don't match either of these criteria, which is why I give them a pass.
I do like the potential pricing/value on the low end (vanilla 6070) and maxing out the potential of 16GB on the high end... if the price is good.
I just think the days of 16GB being enough for anything over vanilla 1440p (without RT/FG/upscaling to 4K, etc.) are numbered. Obviously not everyone cares about that, but I think enough people do/should.