No, I don't think they do. Chiplets would make sense for the highest-end GPUs, where you literally cannot fit more shaders on a single die; in other words, they would make sense in the segment AMD is no longer looking to compete in. 300-400 mm² monolithic Navi GPUs are the best we'll see from AMD for a long period of time. For the Instinct line-up, yeah, I think we'll see this at some point.
You don't seem to understand that even on smaller GPUs it would be cheaper, due to the increase in functional dies. 7nm costs about 2x as much per wafer as 12/14nm (or whatever they wish to call it). Smaller dies = fewer defective dies and more dies per wafer, and using the same chiplet-style package as Ryzen also means future additions, like say NVIDIA's tensor cores, can be done on a side die without impacting overall yields.
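Quick back-of-the-napkin sketch of why that matters. The wafer cost and defect density here are placeholder guesses, not real foundry numbers, and the yield formula is just a simple Poisson model:

```python
import math

# Rough cost-per-good-die comparison, monolithic vs chiplet.
# All numbers are illustrative assumptions, not real foundry data.
WAFER_COST_7NM = 10000.0   # assumed ~2x a 12/14nm wafer
DEFECT_DENSITY = 0.2       # defects per cm^2, assumed

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Crude estimate ignoring edge losses and scribe lines.
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

def yield_rate(die_area_mm2, d0=DEFECT_DENSITY):
    # Simple Poisson yield model: Y = exp(-A * D0), area in cm^2.
    return math.exp(-(die_area_mm2 / 100.0) * d0)

def cost_per_good_die(die_area_mm2, wafer_cost=WAFER_COST_7NM):
    good = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
    return wafer_cost / good

mono = cost_per_good_die(400)       # one 400 mm^2 monolithic GPU
chip = 4 * cost_per_good_die(100)   # four 100 mm^2 chiplets
print(f"monolithic: ${mono:.0f}, 4 chiplets: ${chip:.0f}")
```

Even with the wafer costing the same in both cases, the smaller die wins on cost per good part under these assumed numbers.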
This means AMD only has two chips to build: an I/O die / interposer and a GPU chiplet. An entry-level GPU could be 2 chiplets in a package, scaling up to 3, then 4, etc., just like they do with Ryzen / Threadripper / Epyc, and now Rome with 8 chiplets.
All one has to do is open their eyes. AMD's GCN can in many ways already operate as chiplets; just look at how their GPUs are arranged, with the various CU counts, memory bus widths, etc.
Tonga x2 = Fury, for instance. In many cases AMD has already moved towards a single GPU design and then just doubled it to get the next segment.
For example, the 7750/7770 vs the 7850/7870. The odd tier out was actually the 7900 series back then, which didn't follow the pattern.
Right now, using chiplet designs would actually free up their R&D; they already have a proper interposer and have figured out separate I/O dies. With 7nm, a chiplet-based GPU would likely lose about 25% of the overall die space to an I/O chiplet. However, looking at wafer size, the number of defective dies, etc., it's actually more cost effective for AMD. It's inevitable that this is the design they will go with. Since APUs still represent the bottom of the stack and most consumer CPUs pack an iGPU, they could basically make every GPU in their lineup out of a single GPU chiplet design.
Face facts: if AMD is ever to support DXR, the only way they can do it is with a chiplet design, because they do not have enough market penetration to absorb the cost of massive dies. AMD will have to compete with NVIDIA's monolithic designs, because the market for them is there and it's much more profitable. Or do you expect AMD's graphics division to limp along in the GPU segment the way the CPU side did through the Phenom II / Bulldozer / Excavator years? As it is, the only thing propping them up is GPU contracts for consoles. Chiplets make sense: they're scalable to meet demand, they fit in well with AMD's custom SoC offerings, and they leverage their strengths. Because everyone is competing for fab time, smaller chiplets on an interposer mean they can meet demand as necessary, be it for entry-level offerings or HPC offerings.
It also means future consoles can see iterative upgrades like the Xbox One X and PS4 Pro. As development continues and refinements are made, they can increase performance with a half-generation refresh by simply adding another chiplet or two to the design. This also helps bypass the issue of waiting for the next node.
Going by die shrink alone, a 28nm die of roughly 130 mm², give or take, gets you:
640 shaders / 40 TMUs / 16 ROPs
At 14nm that would drop to about 75 mm². At 7nm that's a further ~35% reduction, dropping it to about 49 mm² for a 640 / 40 / 16 chiplet with its own I/O.
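Same math in code form, using the shrink estimates above (the 14nm and 7nm scaling factors are the guesses from this post, not measured values):

```python
# Area scaling for the 640-shader / 40-TMU / 16-ROP building block.
area_28nm = 130.0                       # mm^2, give or take
area_14nm = area_28nm * (75.0 / 130.0)  # estimated shrink to ~75 mm^2
area_7nm = area_14nm * (1 - 0.35)       # a further ~35% reduction

print(f"28nm: {area_28nm:.0f} mm^2")
print(f"14nm: {area_14nm:.0f} mm^2")
print(f"7nm:  {area_7nm:.0f} mm^2")     # ~49 mm^2 per chiplet
```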
Meaning 4 chiplets would be 2560 / 160 / 64 at ~200 mm². Considering a Ryzen chiplet is around 80 mm², we can figure AMD could realistically push out a chiplet design with 1280 shaders / 80 TMUs / 32 ROPs per chiplet.
This means a 4-chiplet design would be equivalent to about a 400 mm² monolithic die and deliver 5120 shaders / 320 TMUs / 128 ROPs. Granted, this is based on older GCN designs, but considering GCN performance hasn't changed much per generation, it holds up as a worthwhile comparison.
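Scaling that out per chiplet count (the per-chiplet specs and ~98 mm² area are the speculative figures above):

```python
# Scale a hypothetical 1280-shader chiplet out to a full product stack.
# Specs and area are speculative, roughly Ryzen-chiplet-sized silicon.
CHIPLET = {"shaders": 1280, "tmus": 80, "rops": 32, "area_mm2": 98}

for n in (1, 2, 3, 4):
    total = {k: v * n for k, v in CHIPLET.items()}
    print(f"{n} chiplet(s): {total['shaders']} shaders / "
          f"{total['tmus']} TMUs / {total['rops']} ROPs, "
          f"~{total['area_mm2']} mm^2 of compute silicon")
# 4 chiplets = 5120 / 320 / 128 at ~392 mm^2, i.e. roughly the 400 mm^2
# monolithic equivalent, before adding the I/O die (~25% overhead).
```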
AMD could in theory use 8 smaller chiplets or 4 larger chiplets to achieve their goal. The likelihood of getting, say, 3 proper chiplets out of 4 vs a single fully working monolithic die is entirely in their favor (see the sketch below). It also allows for a full range of GPUs: 1280-shader entry level, 2560-shader mid range, 3840-shader high end, and 5120-shader extreme. Obviously they could sacrifice some die space to DXR tech; the comparison remains valid either way.
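Here's that likelihood spelled out; the yields are illustrative assumptions for a ~100 mm² chiplet vs a ~400 mm² monolithic die:

```python
import math

# Odds of a sellable part: 3-of-4 good chiplets still makes a (cut-down)
# GPU, while a monolithic die is all or nothing. Yields are guesses.
y_chiplet = 0.82  # assumed yield for a ~100 mm^2 chiplet
y_mono = 0.45     # assumed yield for a ~400 mm^2 monolithic die

def at_least_k_good(n, k, y):
    # Binomial probability of k or more good dies out of n.
    return sum(math.comb(n, i) * y**i * (1 - y)**(n - i)
               for i in range(k, n + 1))

print(f"monolithic die good:    {y_mono:.0%}")
print(f">=3 of 4 chiplets good: {at_least_k_good(4, 3, y_chiplet):.0%}")
print(f"4 of 4 chiplets good:   {at_least_k_good(4, 4, y_chiplet):.0%}")
```

Under those assumed yields, a salvage-friendly 4-chiplet package yields a sellable part far more often than the big die does.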
A chiplet GPU covering their entire product range is the only way forward. It would serve all market segments and scale up or down depending on client need. It would also result in lower power draw vs monolithic designs and would likely improve AMD's performance per watt. Looking at Intel vs AMD for the wattage difference, you can estimate that AMD's chiplet approach would save around 20% in power usage. That would drop the 300W TDP of the Radeon VII down to 240W. If you're keeping track, that works out to about 100W per 1280 shaders; take 20% off and that's 80W. Meaning at a 320W TDP they could up their shader count to around 5000, with the ability to push TMU and ROP counts higher as well. Obviously that's still quite bad; however, at 3840 shaders and its current design it would still save about 60 watts. The I/O die consumes power too, but you could budget maybe 20 watts for that. A chiplet design would end up cheaper, more power efficient, and would cost less in R&D going forward.
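Putting those wattage guesses in one place (every number here is an estimate from this post, not a measurement):

```python
# Speculative power budget for a chiplet GPU.
WATTS_PER_1280_SHADERS = 100.0  # assumed, from Radeon VII's 300W / 3840 shaders
CHIPLET_SAVINGS = 0.20          # assumed ~20% efficiency gain for chiplets
IO_DIE_WATTS = 20.0             # assumed I/O die draw

def chiplet_tdp(n_chiplets):
    per_chiplet = WATTS_PER_1280_SHADERS * (1 - CHIPLET_SAVINGS)  # ~80W each
    return n_chiplets * per_chiplet + IO_DIE_WATTS

for n in (1, 2, 3, 4):
    print(f"{n * 1280} shaders: ~{chiplet_tdp(n):.0f}W TDP")
# 3 chiplets (3840 shaders) lands around 260W vs the Radeon VII's 300W;
# 4 chiplets (5120 shaders) lands around 340W.
```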
Obviously the above is just speculation based on current die sizes, GCN features that have carried forward, etc. But it remains a valid comparison. A chiplet GPU, from a pure cost and performance standpoint across all market segments, makes far more sense. GPUs will not be getting cheaper; in fact, with each new process node we can expect both performance and price to increase, meaning the prices we see now are here to stay. Chiplet designs would offset that somewhat.