I guess their idea of aggressive isn't as aggressive as mine.
No, I agree. Overpriced by a good $50. Will continue to recommend the 7800xt/7900xt/7900xtx (when you can get a fair deal) if these are the prices and the XT has 20gbps RAM and/or no cache improvements.
I understand there is always the possibility of stuff we don't know (like perhaps they beefed up the cache; for instance how Mark Cerny wants the L1 to take care of up-scaling), and that DOES matter.
I think that's likely a UDNA thing, but would love it if we see something like that here, and/or other ways to soothe bandwidth limitations and/or other things that hamper performance in general use-cases.
If it's not purely about clockspeed, that would explain the die size and low-speed RAM. If it is just for clockspeed and not cache, then these products are ridiculous.
I really did think they might try to hit us with an XTX for $600, but I'll take a swing and guess they'll probably try to undercut the 5070ti at $700 (if it exists).
IOW: $500, $600, $700...which is asking a lot for a couple of glorified 7800xt's and a less-than-7900xt without 20GB.
Yeah...I don't like it. As I have said for a bit...whoever is making these decisions truly doesn't get it.
I will once again state that almost nothing makes sense for RT, so perhaps these win by default by being cheaper than the 5070ti at 1440p raster and better than the 5070, which is just a bad product.
Clearly the plan is to get 1080p upscaling good-enough at *some* point by both companies. You can argue nVIDIA is there, but you still need 7 engines and an OC for good RT (4080/5080). 5070/N48 = 4?.
With 6 engines you need ~3260mhz, and that's not accounting for up-scaling. This is why, had AMD made a 7900xtx replacement (with 24gbps) and 'free' FSR4, it actually could have been a good product.
I'll be looking forward to seeing how FSR4 performs (IQ/perf hit) and/or whether they say anything about improving it in the future in either respect...because they gotta do both (versus N3x and potentially even N48).
The 5070 remains a horrible product to spend $550+ on because it's <45TF with 12GB of RAM...and I guess they're also banking on that.
What I think it should be: $400, $500, $600 (if they have an XTX).
What it should have been: $450-500/$550-600 with higher clocks and faster RAM on the upper-end model.
The only problem with my thinking in the first regard is that it would make the 9070 a very, very good deal. But as it sits, the 7800xt already is (and hence these are not). I guess that's why it's being discontinued.
While I understand it could potentially be problematic for margin, I really do think they need to compete with the 5060Ti. People are seriously just "like that", and they need to claw back market/mind-share.
Likewise, I think they can't price above the 5070 no matter what they do (granted the 'real' 5070 price might be ~$600).
They have to compete a tier down imho. I'm not saying this bc I'm a cheap bastard, but bc of the honest market reality.
They can't just do it by splitting the product stack more (which is a really stupid idea if that's what they're trying to do). Will reserve judgement until after performance, but...if I'm right...this is bullshit.
I think an 8192sp 9070xt product with 20gbps is just plain ridiculous, but if they were doing THAT (5060ti pricing for 5070 perf, a cheaper slightly-better 16GB 5070, matching 5070ti for 5070 pricing) then ok.
There is literally no reason for it to exist versus a 7800xt or even a 9070 vanilla unless they limit the 9070's potential and/or the arch has a better cache (but then why are the clocks so low?).
If they artificially segment their stack just to drive higher prices I will be PISSED. Because they will fail where they didn't need to, and the stack will be a disorganized mess.
That stuff makes me very unhappy. That is literally what separates them from nVIDIA (outside the 7800xt).
They tried it on the 7900 GRE and the community called fuckin' bullshit on it (GOOD JOB!)
I will scream bloody murder if they do it again. Because I'm sick of it. Somebody needs to start hacking BIOSes again to remove PL/clock limits to stop this nonsense.
I would be much happier had they split the difference on the 7168/8192sp parts and 20/24gbps; price the 9070 between the 5060ti/5070 and the 9070xt between the 5070/5070ti if they had to do that.
This is like trying to have your cake and eat it too...and while many people are indeed kind of not on top of things, there are enough people who are. They can't lose those people or they are fucked.
Putting FSR4 (and 8-bit ops) aside for a second, here is the simple math (there's a rough code sketch of it below):
(7800xt) 7680sp is limited to ~2900mhz with 20gbps RAM. Most will not clock this high (or much higher) bc of artificial power-limiting. Still good for ~45TF, where 16GB starts to make sense.
(9070) 7168sp is limited to ~3100mhz with 20gbps RAM, and we know these chips will clock this high bc their previous gen did, including the 7600/7700/7900xtx. The rest were PL/clock-limited. The 50-series will be as well.
(9070xt) 8192sp is limited to ~2720mhz if it's actually using all units w/ 20gbps. This, ofc, makes no fucking sense unless shader utilization is lower for some reason or there's a beefed-up cache.
(9070xtx?) 8192sp at 24gbps is limited to 3264mhz. This product actually makes sense, but we don't know if it exists.
(7900xt) 10752sp @ 20gbps is limited to 2591mhz. It will overclock to ~2800/21600, making it 60TF and on the edge of needing >16GB, just like the 7800xt is at the point where 16GB starts to make sense.
IOW, these cards split that difference at least two ways, but potentially more. It's nonsense, especially priced higher. Just buy a 7800xt for cheaper or get better perf from a 7900xt.
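A minimal sketch of that math, if you want to play with it yourself. The assumption (mine, nothing confirmed by AMD) is just that shader count x clock has to scale with raw memory bandwidth, calibrated on the 7800xt figures above (7680sp / ~2900mhz / 20gbps on a 256-bit bus); the helper names are made up for illustration:

```python
# Rough sketch of the "simple math" above. Assumption (mine, not AMD's):
# a card's useful clock tops out where shader count x clock outgrows raw
# memory bandwidth, calibrated on the 7800xt numbers quoted in the post
# (7680sp, ~2900mhz, 20gbps on a 256-bit bus).

def bandwidth_gbs(gbps: float, bus_bits: int) -> float:
    """Raw memory bandwidth in GB/s from per-pin speed and bus width."""
    return gbps * bus_bits / 8

def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: shaders x 2 ops/clock x clock (MHz -> TF)."""
    return shaders * 2 * clock_mhz / 1e6

# Calibration point: 7680sp at ~2900mhz fed by 20gbps / 256-bit.
K = (7680 * 2900) / bandwidth_gbs(20, 256)  # shader-MHz per GB/s

def bandwidth_limited_clock(shaders: int, gbps: float, bus_bits: int = 256) -> float:
    """Clock (MHz) at which this config hits the same bandwidth-to-shader ratio."""
    return K * bandwidth_gbs(gbps, bus_bits) / shaders

configs = [
    ("9070     (7168sp, 20gbps, 256-bit)", 7168, 20, 256),
    ("9070xt   (8192sp, 20gbps, 256-bit)", 8192, 20, 256),
    ("9070xtx? (8192sp, 24gbps, 256-bit)", 8192, 24, 256),
    ("7900xt   (10752sp, 20gbps, 320-bit)", 10752, 20, 320),
]
for name, sp, gbps, bus in configs:
    clk = bandwidth_limited_clock(sp, gbps, bus)
    print(f"{name}: ~{clk:.0f}mhz, ~{fp32_tflops(sp, clk):.1f}TF")
```

That spits out roughly the same ~3100 / ~2720 / ~3260 / ~2590mhz limits quoted above (within rounding), and it's also where the ~20% spread I mention below comes from: 8192sp x 3264mhz is about 1.2x the shader-MHz of 7168sp x 3100mhz.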
Hopefully the community will put pressure on for FSR4 to be back-ported in whatever way it potentially can...and won't let up that pressure. Bc this is how 'it' starts (exactly what nVIDIA does) if you don't.
As I have said, they should have products that are 7168sp/3100mhz/20gbps and 8192/3264/24gbps. That is a 20% spread, capitalizes well on the process, and isn't screwing people over. $500/600 fine.
This is not what they did. So either the 9070 is artificially limited and/or the 9070xt needs better RAM as well as a bin that clocks substantially higher...but they could have just done that at stock.
If the 9070 is limited, it needs to be cheaper. If the 9070xt has 24gbps, it should be clocked higher. What they did is absolutely horrible product positioning, as I have said many times.
We obviously have to wait to see what difference arch advancements, such as potentially cache, make wrt previous bottlenecks. That said, if this is like N32 in a lot of ways (and it probably is), the math doesn't lie.
I really am waiting to see what AMD is trying to do here, because I see potential marketing failure in at least 3 different ways.
Does it clock high? If it does, why is it clocked so low and the die so big? Are they trying to 'save' it for another product? That's just a waste of the other two SKUs' potential in order to artificially inflate pricing.
If it's cache (potentially for free FSR and/or excusing the ram), that's cool. The core clocks are still way too low at stock.
I would love to be wrong about this, but I truly am afraid I'm going to have to complain about them just as much as nVIDIA pretty soon...bc no matter which way you slice it this stack is stupid.
Either in (stock) specs, potential performance, and/or price...if not all of the above.