I'm curious whether, and when, we'll see consumer APUs from AMD using chiplets. For the moment they're sticking with monolithic dies, which I can see making sense from a power consumption point of view (communicating over an interposer consumes more power than communicating within a monolithic die) and from a cost point of view, as their APUs are typically aimed at the lower end of the market. The main thing chiplets give AMD is the flexibility to deliver a wider range of products within a limited R&D budget: with Ryzen, for example, they could tape out a single die and be competitive with Intel from entry-level desktops all the way up to high-end server chips. I could see them getting to a place where a high-end APU becomes viable even if there's not a huge market for it, because they've already got all the necessary chiplets ready and it's just a matter of sticking them together on an interposer.
In the console space, the place where I'm most interested in seeing the impact of chiplets is memory. As we've discussed in this thread, one of the main limiting factors on the performance of a Switch form-factor device is the memory. LPDDR5(X) only goes so fast, and widening the interface across the motherboard just adds cost, power consumption, and board complexity. The only viable way to get substantially more bandwidth without sacrificing power consumption is a wider, slower interface, and that only becomes practical if you ditch the motherboard traces and move the memory onto an interposer with the chip. That is, adopt HBM or something like it.
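To put rough numbers on that wide-and-slow tradeoff, here's a back-of-the-envelope bandwidth calculation. The figures are illustrative per-standard numbers (a top-end LPDDR5X bin and a single HBM2e stack), not the spec of any particular product:

```python
# Back-of-the-envelope peak memory bandwidth comparison.
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (GT/s).
def peak_bandwidth_gbs(bus_width_bits: float, data_rate_gts: float) -> float:
    return bus_width_bits / 8 * data_rate_gts

# A 128-bit LPDDR5X interface at 8.533 GT/s: narrow but fast.
lpddr5x = peak_bandwidth_gbs(128, 8.533)   # ~136.5 GB/s

# A single HBM2e stack: 1024 bits wide, but each pin runs far slower (3.2 GT/s).
hbm2e = peak_bandwidth_gbs(1024, 3.2)      # ~409.6 GB/s

print(f"LPDDR5X: {lpddr5x:.1f} GB/s, HBM2e stack: {hbm2e:.1f} GB/s")
```

The point being: one HBM stack delivers roughly 3x the bandwidth of a 128-bit LPDDR5X bus while clocking each pin at well under half the rate, and interface power is driven largely by per-pin frequency and the length of the traces being driven, which is why wide-and-slow on an interposer is the power-efficient direction.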
HBM has faded away from the consumer space, but I wouldn't be surprised if we start seeing it (or a variant of it) re-appear in the next few years. When HBM debuted on AMD's R9 Fury back in 2015, Ryzen was still a couple of years away, and chiplet-style packaging was very rare, limited to exotic products like the Wii U. With chiplets becoming the standard, the previously prohibitive packaging costs of HBM will come down, and with TSVs also becoming more commonplace in consumer chips, I'd expect the cost of manufacturing the HBM stacks themselves to come down as well. Another factor is that the move to chiplets in the GPU space opens up HBM to the consumer market where it makes the most sense: laptop GPUs.
Laptop GPUs suffer from GDDR6's high power consumption, but as they typically share the same die as desktop parts, it hasn't been viable to switch to HBM unless the desktop lineup does as well, and on the desktop the power savings and reduced board space of HBM aren't nearly as beneficial. A couple of GPUs have been designed with HBM explicitly for the laptop space, namely AMD's Pro Vega 20 and the bizarre Kaby Lake G (which in retrospect seems like UCIe's dream), but they're very niche products. With AMD moving memory interfaces onto their own chiplets, though, it will now be viable for them to use a single GCD across both GDDR6-based and HBM-based products. Rather than having to tape out a new version of an entire monolithic die, they just have to tape out one HBM interface chiplet on a cheaper process, and it can be re-used across multiple products. So when AMD comes to produce laptop chips from their Navi 31 and Navi 32 dies, they could in theory offer versions with HBM memory and better power efficiency. They could also, if they wanted, add an extra desktop product above the RX 7900 that swaps out GDDR6 for a much higher-bandwidth HBM memory pool.
Nintendo is kind of lucky with [redacted] in that they've been able to double the memory bus width over the original Switch, which, combined with the jump from LPDDR4 to LPDDR5 and improvements in GPU bandwidth efficiency, gives them room to make a sizeable jump in performance without being severely constrained by bandwidth. With whatever their successor to [redacted] is, I don't know if that will be the case. Doubling the interface width again seems pretty unlikely. Obviously it's foolish to try to predict Nintendo's actions this far in advance, but hypothetically, if they were to try to make a successor to [redacted] in a similar form-factor with a significant jump in performance, I'm not sure how they would manage that without moving memory on-package with the SoC.
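For concreteness, here's the rough arithmetic behind that doubling. The original Switch's 64-bit LPDDR4-3200 configuration is public; the 128-bit LPDDR5-6400 figures for [redacted] are this thread's speculation, not confirmed specs:

```python
# Rough bandwidth arithmetic for the Switch and its (speculated) successor.
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (GT/s).
def peak_bandwidth_gbs(bus_width_bits: float, data_rate_gts: float) -> float:
    return bus_width_bits / 8 * data_rate_gts

# Original Switch: 64-bit LPDDR4 at 3.2 GT/s.
switch_bw = peak_bandwidth_gbs(64, 3.2)       # 25.6 GB/s

# Speculated [redacted]: 128-bit LPDDR5 at 6.4 GT/s.
successor_bw = peak_bandwidth_gbs(128, 6.4)   # 102.4 GB/s

print(f"{switch_bw:.1f} GB/s -> {successor_bw:.1f} GB/s "
      f"({successor_bw / switch_bw:.0f}x)")
```

Doubling the width and the data rate together gives a 4x jump in one generation; repeating that trick would mean a 256-bit external LPDDR bus, which is where the cost and board-complexity problems above start to bite.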
Nintendo, incidentally, are the only console manufacturer with experience of chiplet-style technologies, with their CPU and GPU combined on an MCM in the Wii U. Of course, that almost certainly added significantly to the cost of the Wii U, so perhaps they don't consider their experience with the technology a positive one.
I think the fact that their chiplet-based laptop CPUs are a minimum of 45W is more a matter of where they're positioned in their line-up than an inherent limitation of the technology. Their chiplet-based parts are all high-end 12-16 core CPUs with very limited iGPUs intended to be used with powerful dedicated GPUs (as you say, chunky laptops). If you go much below ~40W you're primarily looking at laptops without dedicated GPUs, and in that space these chips would be a hard sell, considering their iGPUs would be outperformed by even the most entry-level alternatives in that segment.
I suspect that, all other things being equal, a monolithic die should be more power-efficient than a chiplet setup, but we may get to a point where the difference is minor enough, and the economics line up well enough, that chiplet-based APUs make sense even at the low-power end of the market.