Wednesday, August 2nd 2023

AMD Confirms New "Enthusiast-class" Radeon 7000-series Graphics Cards This Quarter

AMD CEO Dr Lisa Su, in her Q2-2023 Financial Results call, confirmed that the company will launch new "enthusiast-class" gaming graphics cards within Q3-2023 (any time before October). "In gaming graphics, we expanded our Radeon 7000 GPU series in the second quarter with the launch of our mainstream RX 7600 cards for 1080p gaming. We are on track to further expand our RDNA 3 GPU offerings with the launch of new, enthusiast-class Radeon 7000 series cards in the third quarter," she stated.

There are two distinct possibilities for what "enthusiast class" entails. The first and most obvious one could be the introduction of the RX 7800 series, including the RX 7800 XT, which is expected to closely resemble the limited-edition RX 7900 GRE in specs; but a less talked-about possibility could even be the RX 7950 series. In testing, the RX 7900 GRE was found to offer raster 3D performance comparable to the previous-generation RX 6950 XT, albeit with better ray tracing performance on account of its improved Ray Accelerators, which would still put it behind the GeForce RTX 4070 Ti that AMD is trying to compete with. This means that for AMD to have a compelling "RX 7800 XT" product, it should perform faster than the RX 7900 GRE (possible through higher clock speeds or a few more CUs).
The Radeon RX 7950 series would be an exercise in significantly shoring up performance over the RX 7900 series by increasing clock speeds and power limits. AMD is probably hoping for the RX 7950 XTX to take a swing at the performance crown held by the RTX 4090, while the RX 7950 XT could get a little closer to the performance of the RTX 4080. The current RX 7900 XT already beats the RTX 4070 Ti.

The announcement could also hint at the likelihood of mobile versions of the RX 7900 series, given that AMD has already developed the mobile-friendly package found powering the desktop RX 7900 GRE. This package is physically smaller than the regular "Navi 31," has a lower Z-height, and is hence optimized for notebooks. Its lower pin count could indicate a narrower 256-bit GDDR6 memory bus, and fewer power pins to go with the lower power limits.
Sources: AMD Investor Relations, VideoCardz

102 Comments on AMD Confirms New "Enthusiast-class" Radeon 7000-series Graphics Cards This Quarter

#76
oxrufiioxo
mechtechA question for all the silicon experts out there @TheLostSwede and others.

If you can make chiplet and slap all that stuff on an interposer??................would it be possible to take say 2 small cheaper chips like an RX6600 and 'glue' them together?? Would it work and be cheaper than a single RX6800 chip??
The main issues would be latency and getting the operating system to see it as one large GPU. I do think they will get multiple GCDs working on a gaming GPU, but we are probably a generation or two away from that. They already have a compute card with multiple GCDs, the MI300, so fingers crossed for something similar for gamers by next generation.
Posted on Reply
#77
mama
Eiji7900 GRE EDITION
Thought of a joke. Joined just to tell it. Thanks.
Posted on Reply
#78
Denver
mechtechA question for all the silicon experts out there @TheLostSwede and others.

If you can make chiplet and slap all that stuff on an interposer??................would it be possible to take say 2 small cheaper chips like an RX6600 and 'glue' them together?? Would it work and be cheaper than a single RX6800 chip??
In my opinion it would be cheaper.

They would have better yields and lower development costs, as they would only have to develop one "small" chip that could be scaled from low-end to high-end just by putting together 1, 2, 3, or 4 of these base GPU dies (GCDs), as already happens with Ryzen and EPYC. The question is if AMD can pull the magic out of the hat and make it work flawlessly (for gaming)...
Posted on Reply
#79
ViperXTR
Will this drive Nvidia into making a 4070 Super with a 256-bit bus and 16 GB?
Posted on Reply
#80
AusWolf
DenverIn my opinion it would be cheaper.

They would have better yields and lower development costs, as they would have to develop only one "small" chip that would be scaled from low-end to high-end just by putting together 1, 2, 3, 4 of these base GPU/GDC, as already happens in ryzen and EPYC. The question is, If AMD could pull the magic out of the hat and make it work flawlessly (for gaming)...
I think the latency between chiplets would be too high. If the "magic" was that easy, they would have already done it with RDNA 3, imo.
Posted on Reply
#81
Icon Charlie
AusWolfI think the latency between chiplets would be too high. If the "magic" was that easy, they would have already done it with RDNA 3, imo.
100% agree with you. For the record, I've been b!tching a lot about this and about why the 7000 series of GPUs is overall an abysmal failure: the lag you get when you go the chiplet route.

The 6000 series is a monolithic chip, and if you take into consideration that it is getting close to 3 years old, it is still a viable option because of its performance/value over the 7000 series.

IMHO, the reasons why they are going this chiplet route are:

1. They get a better yield on the silicon wafers, thereby getting more components to make more GPUs than with the 6000 series.
2. They can get a better yield from partial defects at the edges of the silicon wafers. It is possible to use some of those dies for the lower-value video cards.

Oh, they are making money off of this. Make no mistake, they are making more profit on this than on a monolithic chip, but it is at the expense of overall performance. Latency will always be there. It just depends on how much money AMD wants to put into their video card division and their willingness to take on Ngreedia.

Judging from current and past business practices, it looks like they enjoy... being 2nd best.
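The yield and cost argument in this post can be sketched with a textbook Poisson yield model. All the figures below (defect density, die areas, wafer cost) are made-up illustrative values, not AMD's or TSMC's actual numbers:

```python
import math

def cost_per_good_die(area_mm2, defect_density, wafer_cost=17000.0, wafer_area=70000.0):
    """Approximate cost of one good die: wafer cost split across the
    dies on the wafer, divided by the Poisson yield exp(-A * D0).
    Ignores edge loss, scribe lines, and packaging cost."""
    dies_per_wafer = wafer_area / area_mm2
    yield_fraction = math.exp(-area_mm2 * defect_density)
    return wafer_cost / (dies_per_wafer * yield_fraction)

# Hypothetical comparison: one 400 mm^2 monolithic GPU vs two 200 mm^2 chiplets
D0 = 0.002  # made-up defects per mm^2
monolithic = cost_per_good_die(400.0, D0)
two_chiplets = 2 * cost_per_good_die(200.0, D0)

print(f"monolithic: ${monolithic:.0f} per good die")
print(f"2 chiplets: ${two_chiplets:.0f} combined, before extra packaging cost")
```

Because yield falls off exponentially with die area, the two smaller dies come out cheaper combined than the one big die under this model; the packaging and interconnect overhead (and the latency discussed above) is what eats into that advantage.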
Posted on Reply
#82
Minus Infinity
trsttteMaybe, but they risk getting the same problem of the current 4000 series where 3000 is just more atractive. We're at a performance point where there's not much to go from here, we're getting high refresh rate at 4k, bumping the resolution further will require a lot more compute and doesn't seem worth it not only on processing but also on storage to save all those high res assets.

Before we go into a 4 year cycle there's also the 3 year option, that could be a good option, but they could also jump into something similar to what cpu's did with a "tick tock" of new architecture and then a refresh improved version (4000 5000 in this case)

Hopefully in the Ada Next/5000 series/whatever they don't forget to implement DP2.1 like on 4000 series at least, that was a dick move that is also helping monitors justify staying with the older standard in what looks like a chicken and egg problem
Given Blackwell (let's stick to that name for now) is not due until 2025, there are people reporting Nvidia is working on (some) new Ada cards to fill the 18-month gap. There may be a new 4060 and 4070 based on the next-tier-higher dies, AD106 and AD103. But honestly, why they didn't hold off on the 4060 Ti and just go with the rumoured AD106 192-bit 12 GB card is bizarre. No need for two versions, and it could possibly justify $450+ pricing.
Posted on Reply
#83
Cidious
john_Best joke this year.
AX 8950XT without display outputs and RAM. You have to select those as an option during the order process. It will only function when connected through their Apple Pro Stand (starting at $999) monitor stand.
Posted on Reply
#84
AusWolf
Icon CharlieIMHO. The reasons why they are going this chiplet route is.

1. They get a better yield on the Silicon Wafers, there by getting more components to make more GPU's over the 6000 series.
2. They can get a better yield on partial defects that are on the edges of the Silicon Wafers. It is possible to use some of them for the lower value video cards.
Very well said! People tend to forget that Ryzen didn't take the chiplet route because it's better for us - far from it. It's cheaper to make, that's it. If anything, we're losing out with the complications of cooling the offset chips and the higher idle power consumption.
Posted on Reply
#85
TheLostSwede
News Editor
mechtechA question for all the silicon experts out there @TheLostSwede and others.

If you can make chiplet and slap all that stuff on an interposer??................would it be possible to take say 2 small cheaper chips like an RX6600 and 'glue' them together?? Would it work and be cheaper than a single RX6800 chip??
Most likely not right now, due to the assembly fabs that do that kind of stuff being run at 110% capacity.
In theory it could be, assuming the chip-to-chip latency would be low enough, which is apparently one of the big hurdles today for something like that when it comes to GPUs, from my understanding of it.

I guess this company is hoping to win that kind of business in the future.
www.techpowerup.com/311529/silicon-box-opens-ususd-2-billion-advanced-semiconductor-assembly-plant-in-singapore
DenverIn my opinion it would be cheaper.

They would have better yields and lower development costs, as they would have to develop only one "small" chip that would be scaled from low-end to high-end just by putting together 1, 2, 3, 4 of these base GPU/GDC, as already happens in ryzen and EPYC. The question is, If AMD could pull the magic out of the hat and make it work flawlessly (for gaming)...
Long term yes, but not today, due to the points above. But it does really seem to be the way a lot of companies are heading, as it's simply not viable to make massive chips with low yields.

A large part of it will depend on AMD's (or whoever's) partners as well, as the chip packaging companies need to be able to deliver flawless chips on their end too, which I believe isn't always the case today; this is still relatively new technology at the level it's being done these days, and even more so as it gets more complex.
We also seem to need better chiplet interconnects that can handle ever-growing bandwidths without adding latency or other issues.

We'll most likely end up with a combination of this and 3D stacking, as long as the thermals can be controlled when 3D stacking is used.
AusWolfI think the latency between chiplets would be too high. If the "magic" was that easy, they would have already done it with RDNA 3, imo.
It's a step-by-step process; this was clearly a step to try a lot of things that didn't quite pan out as planned, so back to the drawing board.
It took AMD a few generations with Ryzen as well to get it to where it is today and where it presumably will be in the near future.
It's by no means magic, and as pointed out above, a lot of it will depend on their partners to deliver packaging solutions that handle the negatives of doing this well enough, the key one, as you point out, being latency.
AusWolfVery well said! People tend to forget that Ryzen didn't take the chiplet route because it's better for us - far from it. It's cheaper to make, that's it. If anything, we're losing on it with the complications in cooling the offset chips, and the higher idle power consumption.
If it's so bad, why is Intel heading in the same direction as AMD?
Posted on Reply
#86
AusWolf
TheLostSwedeIf it's so bad, why is Intel heading in the same direction as AMD?
It's not bad at all - it's just that the benefits it brings to AMD (costs and yields) far outweigh the minor inconveniences the end user faces.
Posted on Reply
#87
john_
Eiji7900 GRE EDITION
No, you fool. It's the 7900 GREEK edition.
The food bundle also includes five souvlaki pitas.
ViperXTRwill this drive nvidia in making 4070 Super with 256bit 16GB?
The RTX 4070 Super will probably come in 2024 as an RTX 5070 or RTX 5060 Ti instead.
CidiousAX 8950XT without Display outputs and RAM. You have to select those as an option during the order process. It will only function when connected through their Apple Pro Stand ( starting at $999 ) monitor stand.
Yeah, I was thinking the same. An extremely expensive external power supply for the GPU instead of internal PCIe cables, compatibility only with Apple monitors, and base functions like DirectX 12 support, AV1 support, etc. disabled and sold as features you pay per month to enable.
mechtechIf you can make chiplet and slap all that stuff on an interposer??................would it be possible to take say 2 small cheaper chips like an RX6600 and 'glue' them together?? Would it work and be cheaper than a single RX6800 chip??
They will probably end up with the same micro-stuttering problems as with SLI and CrossFire, which means driver complexity and bad reviews. Also, while AMD seems to be ahead in this area, I would expect Nvidia to come out first with a chiplet design for gaming, if they are thinking of offering something like this. That way they can limit their small dies to gaming and use even the middle-size dies for AI purposes, where the current need for capacity is, along with the money and huge profit margins. Also, with their vast money and resources, plus their software teams and positive support from the press and public, they are the only ones who can push even a half-baked chiplet solution to market and still do record sales. If AMD comes out with a less-than-perfect solution, press and public will burn them at the stake.
Posted on Reply
#88
AusWolf
john_No you fool. It's 7900 GREEK edition.
The food bundle also includes 5 suvlaki pita
You've got me with that! Where do I pre-order? :p
Posted on Reply
#89
ToTTenTranz
ARFIf 7700-12 and 7800-16 are considered "enthusiast" according to AMD, then what is 7600-8? High-end? Ultra-high end? :rolleyes:
The RX 7600 8 GB is mid-range. Navi 32 + 3x MCD with 12 GB (RX 7700?) is high-end at >$400, and N32 + 4x MCD with 16 GB (RX 7800?) is enthusiast at >$500.

They could be launching an RX 7950 XTX with e.g. stacked LLC on top of the MCDs, but unless they're capable of significantly increasing the clocks on the GCD, I really doubt that will happen.
Posted on Reply
#90
john_
AusWolfYou've got me with that! Where do I pre-order? :p
Here :p
Posted on Reply
#91
AusWolf
john_Here :p
Delivery to the UK by any chance? :D
Posted on Reply
#93
ARF
fancuckerthe dual issue FP32 fell flat (they should've expanded the CU count)
no proper dedicated raytracing units (strong reliance on shaders and an anemic cache hierarchy to feed it)
refusal to apply GDDR6X and instead rely on relatively slow last level cache
and the cherry on the cake, cant even hit target clock speeds outside of certain compute scenarios

RDNA3 is a veritable dud and at this point i wish AMD would sell the graphics division to Apple or something to ensure the IP and engineering legacy isnt wasted
I'd love to see AMD sell itself to Chinese ownership.
Posted on Reply
#94
ToTTenTranz
fancuckerthe dual issue FP32 fell flat (they should've expanded the CU count)
Dual issue FP32 would have fallen flat if it had taken a significant chunk of die area.
Instead, we got N33 with dual-issue FP32 and the exact same amount of execution units as N23 while being >10% smaller, on a process change that shouldn't bring any area advantage.

So did dual-issue FP32 bring a massive performance increase? Mostly no, because only a few instructions can use it, so optimizations need to be hand-written and replace the game's shaders through drivers.

Did it come at some area cost that would have been better spent elsewhere? Not really.
Posted on Reply
#96
trsttte
ARFUnfortunately, Navi 31 is now thought to be the flagship at least till (if) Navi 51 gets released sometime around 2026-2027 o_O

AMD returns to the RX 480/580/590 | RX 5700 XT type of releases with RDNA 4 o_O


videocardz.com/newz/amd-rumored-to-be-skipping-high-end-radeon-rx-8000-rdna4-gpu-series
Well, it's what the majority of us buy. If they focus on mid-range stuff and bring affordability back to those classes of cards, using chiplets for example, it will hurt Nvidia much more than the dick-measuring contest on the top end.

We're also approaching a performance ceiling, with there not being anything above 4K 144 Hz to realistically drive; not that current cards are able to do that consistently now, but the point stands. A Navi 43 at about the performance of Navi 32 would be fine.
Posted on Reply
#97
ARF
trsttteWell, it's what the majority of us buy. If they focus on middle range stuff and bringing affordability back to those classes of cards, using chiplets for example, it will hurt nvidia much more than the dick measuring contest on the top end.

We're also approaching a performance ceiling with there not being anything above 4k 144hz to realistically drive, not that current cards are able to do that consistently now but the point stands. A navi 43 being about the performance of navi 32 would be fine
I don't think so. I think AMD will be badly hurt, and its market share will decline because its reputation will be damaged.
The halo product is always extremely important.

Navi 31 can't drive some games at 4K@144 Hz, and its ray-tracing performance is lackluster. AMD needs to do something to make more performant cards.
Posted on Reply
#98
AusWolf
ARFI don't think so. I think AMD will be badly hurt, and its market share will decline because the reputation will be damaged.
The halo product is always extremely important.

Navi 31 can't drive some games at 4K@144, its ray-tracing performance is lackluster. AMD needs to do something to make more performant cards.
They survived without a halo product with RDNA 1, I'm sure they'll manage now as well. Imo, it's better to release a high-end card when it's ready rather than rushing a half-assed response to Nvidia only to be laughed at.
Posted on Reply
#99
tfdsaf
These would sell like hot cakes if they came in at $500 and $370. A 7700 XT that is about 5% faster than the RTX 4060 Ti while costing $370 and coming with 16 GB of VRAM? I'm sold!

Or alternatively, it could be 10% slower than the RTX 4060 Ti but cost $330 and come with 16 GB of VRAM, with maybe an 8 GB version existing as well and costing something like $300.

The 7800 XT needs to be close in performance to the RTX 4070, maybe up to 15% slower, but come with 16 GB of VRAM and cost $500. If it can also beat the 4060 Ti 16 GB edition by a 20% margin at that same $500 price, it will be one of the best-value cards.

If I were AMD, these would be the cards and pricing:
RX 7500 at $170: 15% slower than the RX 7600, but with 8 GB of VRAM.
RX 7600 at $250 MSRP: about 3% slower than the 4060 in rasterization at 1080p and 1440p, but at $250 it makes a lot more sense than the 4060 and is actually good value!
RX 7700 12 GB at $300: 7-10% faster than the 4060 in rasterization. A decent uplift, and with 4 GB more VRAM it's going to make a lot of sense and actually be good value, better value than the current RX 6700 XT.
RX 7700 XT 16 GB at $370: 5% faster than the 4060 Ti, but with 16 GB of VRAM. Slightly cheaper, slightly faster, and with more VRAM, it's going to be the overall much better choice over the 4060 Ti.
RX 7800 XT 16 GB at $500: can be 10-13% or so slower than the 4070, but at $100 cheaper and with more VRAM it's going to be the best-value card this generation.
RX 7900 GRE 16 GB at $600: generally 10% faster than the 4070, and with more VRAM it's the better choice at $600.
RX 7900 XT at $750: already a very good option that offers good value for once this generation, but it needs to become the actual $750 MSRP so that most models sell at this price; right now only a few models sell at this price, and they generally go back up after a while.
RX 7900 XTX at $900 MSRP: would actually be a solid high-end purchase and make way more sense than the overly expensive $1,200 RTX 4080. At $300 cheaper and generally 3-4% faster than the 4080, it's going to be even better value and a much better choice.
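The value claims above boil down to dollars per performance point, which is easy to sanity-check. The prices and performance indices below are the post's own hypothetical suggestions (normalized to RTX 4060 Ti = 100), not benchmark data:

```python
# (suggested price in USD, relative raster performance, RTX 4060 Ti = 100)
# All figures are the post's hypothetical suggestions, not measured results.
cards = {
    "RX 7700 XT 16GB": (370, 105),
    "RTX 4060 Ti 8GB": (400, 100),
    "RX 7800 XT 16GB": (500, 130),
    "RX 7900 XTX":     (900, 210),
}

# Lower dollars-per-performance-point means better value.
for name, (price, perf) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:16s} ${price:4d}  perf {perf:3d}  $/perf {price / perf:.2f}")
```

With these made-up numbers the hypothetical 7700 XT comes out best in dollars per performance point, which is exactly the kind of spread the post is arguing AMD should aim for.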
Posted on Reply
#100
AusWolf
tfdsafThese would sell like hot cakes if they came in at $500 and $370. A 7700XT that is about 5% faster over the RTX 4060ti while costing $370 and comes with 16GB of vram, I'm sold!

Or alternatively this could be 10% slower than the RTX 4060ti, but cost $330 and come with 16GB of vram with maybe a 8GB version existing as well and costing something like $300.

The 7800XT needs to be close in performance to the RTX 4070, maybe up to 15% slower, but come in with 16GB of vram and cost $500, if it can also beat the 4060ti 16GB edition by 20% margins all at the same $500 price it will be one of the best value cards.

If I was AMD these would be the cards and pricing:
RX 7500 at $170, 15% slower than the RX 7600, but also come with 8GB of vram and cost no more than $170.
RX 7600 at $250msrp, it's about 3% slower than the 4060 in rasterization at 1080p and 1440p, but at $250msrp it makes a lot more sense over the 4060 and is actually good value!
RX 7700 12GB at $300, 7-10% faster than the 4060 in rasterization. A decent 7-10% uplift over the 4060 and with 4GB more vram it's going to make a lot of sense and actually be good value, better value than the current RX 6700XT.
RX 7700XT 16GB at $370, 5% faster than the 4060ti, but comes with 16GB of vram. Slightly cheaper, slightly faster and with more vram its going to be the overall much better choice over the 4060ti.
RX 7800XT 16GB at $500, can be 10-13% or so slower than the 4070, but at $100 cheaper and with more vram it's going to be the best value card this generation.
RX 7900GRE 16GB at $600, generally 10% faster than the 4070 and with more vram makes it the better choice at $600.
RX 7900XT at $750 is already very good option and offers good value for once this generation, but it needs to become $750msrp, so that most of the models are sold at this price, right now only a few models sell at this price and they generally go back up after a while.
RX 7900XTX at $900msrp would actually be a solid high-end purchase and make wayyy more sense than the over expensive $1200 RTX 4080. At $300 dollars cheaper and generally 3-4% faster than the 4080 its going to be even better value and a much better choice.
The only problem with that is that the 7700 12 GB would cannibalise 7600 sales. 50 bucks for a tier higher performance and 4 GB more VRAM is too close. Would be nice, though.
Posted on Reply