I'm gonna actually try and answer this question, so buckle up
TL;DR: Nvidia bet on features, AMD bet on power. Features won in the market, which forced AMD to spend their extra power simulating those features, leaving them with less power to go around and features that aren't as good.
Every big bump in resolution roughly doubles how much detail the human eye can pick up, but roughly quadruples the number of pixels. And because it quadruples the pixels, it quadruples the amount of power it takes to put them on screen. Think about that for more than a minute and the problem becomes obvious - this shit can't go on forever.
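To put rough numbers on that arithmetic, here's a quick sketch (the "cost scales with pixel count" model is a simplification, but it's the one the argument above relies on):

```python
# Each "big bump" doubles the detail on each axis, which quadruples
# the total pixel count - and render cost scales roughly with pixels.
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:>10,} pixels = {pixels // base}x the work of 1080p")
# 4K is 4x the pixels of 1080p, 8K is 16x - two "quadruplings" in a row.
```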
And that resolution leap doesn't include making those pixels prettier. Not just finer detail, but advanced effects like higher quality lighting, reflections, and so on. So you need to quadruple performance just to stand still. You need to do better than that to advance.
In every field except GPUs, those advances in performance have become extremely difficult. At some point, making CPUs faster got really hard, which is why they get more and more cores every generation. GPUs happen to scale very well with added cores, so they have dodged the wall that other chips have been hitting. But that won't last forever.
Both Nvidia and AMD clearly saw the writing on the wall. Neither of them (nor Intel, for that matter, but that's a tangent) misunderstood the problem. What happened next is that they tried two very different solutions.
AMD is a secondary player in the desktop space, with a lot of their core customers being budget buyers. They absolutely dominate consoles, and have for the last two decades. They're a strong player in the data center, and they have a CPU product that dominates the industry, built on a technology called "chiplets" that lets them mix and match parts from different foundries. That lets them rapidly customize products, manufacturing the performance-critical chunks of a chip on the most advanced (and most expensive) process while putting less critical chunks on cheaper tech.
AMD's strategy was this - keep pursuing that classic gen-on-gen power by iterating on their core design. Keep it backwards compatible for their console customers. Keep their data center and consumer segments separate, but invest heavily in bringing their chiplet tech to GPUs. That would let them adapt products to the market very quickly without designing new hardware from scratch each time, while also keeping costs down.
AMD saw the wall coming, threw down the gauntlet and said, fuck it, we're going to bust straight through that thing. It was smart and aggressive.
That... is not what Nvidia did. Nvidia decided that the only winning move was not to play. Instead of pursuing more and more power, they would pursue features. Nvidia added Ray Tracing, which doesn't make More Pixels, but does make Prettier Pixels. And because RT is relatively new tech for the consumer space, there's a much, much longer road of innovation ahead of it - Nvidia bet they could deliver huge leaps on the RT side while traditional rasterization slowed down.
They didn't pursue chiplets, instead deciding to make their data center designs and their consumer designs the same to reduce design costs. That meant putting AI hardware on consumer products - AI being another feature where huge leaps are still possible - and it meant finding a use for that AI hardware in the first place. Which led to AI-assisted upscaling, and to DLSS.
Early on, it looked like AMD had pulled it off. Low-powered RT hardware couldn't deliver much, and few games took advantage of it without Nvidia throwing money at them. AMD figured out how to add basic RT to their hardware with minimal modification, instead of the huge investment Nvidia made. At "traditional" rendering, AMD was delivering better performance. And AI upscaling wasn't just bad - it invented new kinds of bad no one had ever seen before. Bad upscaling misses detail and just creates blurrier images; DLSS 1.0 instead found detail that didn't exist, adding bizarre artifacts that made no sense. And it was expensive, requiring a supercomputer to train a custom AI model for each game that wanted to use it.
Then 2020 happened.
Control was out, and people started to see what RT could really do. And because AMD had at least minimal support, RT modes started to become common in games - which of course ran better on Nvidia. Then came DLSS 2.0: a generic solution, easy to implement, that didn't have DLSS 1.0's problems and actually delivered on the AI promise - and it only worked on Nvidia cards.
DLSS 2.0 and RT actually don't interact super well with each other, but Nvidia very smartly figured out how to make it seem like they did. Instead of selling DLSS 2.0 as a way to make resolutions higher, it sold it as a way to make frame rates higher - as a tech that could recover the performance "lost" by enabling RT. So even though the two technologies fight each other a little under the hood, Nvidia found a way to make them seem joined at the hip.
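The pitch works because the math works. A toy model (the per-pixel cost and the fixed upscale cost below are assumptions for illustration, not benchmarks) shows why rendering at a lower internal resolution and upscaling reads as "recovered" performance:

```python
# Toy model: assume shading cost scales linearly with pixel count,
# at an assumed (purely illustrative) 4 nanoseconds per pixel.
def frame_time_ms(pixels, ns_per_pixel=4.0):
    return pixels * ns_per_pixel / 1e6

native_4k = 3840 * 2160          # render every pixel the display shows
internal  = 2560 * 1440          # "quality" upscaling renders at 1440p
upscale_ms = 1.0                 # assumed fixed cost of the upscale pass

native_ms   = frame_time_ms(native_4k)
upscaled_ms = frame_time_ms(internal) + upscale_ms
print(f"native 4K: {native_ms:.1f} ms/frame (~{1000 / native_ms:.0f} fps)")
print(f"upscaled:  {upscaled_ms:.1f} ms/frame (~{1000 / upscaled_ms:.0f} fps)")
```

That freed-up frame-time budget is exactly what Nvidia marketed as paying for RT.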
So Nvidia had features, but arguably, AMD had power and cost. Then it got worse for AMD. Even with advanced GPUs, developers were struggling to push all those damn pixels with 4K everything, so they started using temporal upscaling - the same class of tech as DLSS 2 - everywhere. AMD released a best-in-class non-AI upscaler (FSR 2) which delivered similar results to DLSS without needing tensor cores.
On paper that's great - it is great! - but it meant that all that extra power AMD had bet on was being used to replicate Nvidia features.
Game X might run better on AMD than Nvidia out of the gate, but enable DLSS 2.0 and Nvidia runs much better. AMD brings out FSR 2 to match, but FSR 2 itself eats the extra power that gave AMD the advantage in the first place. And it doesn't look quite as good without AI to help.
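The dynamic above can be sketched with toy numbers (all four figures below are assumptions for illustration, not real benchmarks): DLSS mostly occupies dedicated tensor cores, while FSR 2 runs on the same shaders that produced AMD's raster lead, so turning upscaling on shrinks that lead.

```python
# All numbers are illustrative assumptions, not measurements.
amd_raster_ms    = 14.0   # assume AMD finishes the raw frame faster...
nvidia_raster_ms = 16.0
fsr_ms  = 1.5             # ...but FSR 2 runs on those same shader cores
dlss_ms = 0.5             # while DLSS mostly uses otherwise-idle tensor cores

raw_gap      = nvidia_raster_ms - amd_raster_ms
upscaled_gap = (nvidia_raster_ms + dlss_ms) - (amd_raster_ms + fsr_ms)
print(f"AMD's lead: {raw_gap:.1f} ms raw, {upscaled_gap:.1f} ms with upscaling on")
```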
Then came the RTX 40 series, and jaws dropped. Prices were awful, because the advanced foundry nodes that GPUs had been rushing to for decades were getting more and more expensive, and without chiplets, Nvidia was carrying that cost on every single square millimeter of their new chips. This was exactly what AMD had expected, and why they invested in chiplet designs.
Months later, the RX 7000 series was revealed, and prices... were just as bad. AMD had pulled it off - they had managed to build a chiplet GPU. But it turned out to be a very different problem than a chiplet CPU, and because of that, very little of the GPU could be built on a cheaper node - and thus very little cost savings on this first version of the design.
And there were other problems with RX 7000. AMD updated their cores to gain some of the advantages that made DLSS/RT fast on Nvidia - like dual-issue compute and accelerated matrix instructions. But the commitment to backwards compatibility was showing its age, with so much complexity in the front end that utilization of these features fell way short of their theoretical max.
But Nvidia does have chickens coming home to roost. AMD might not have nailed it, but they weren't wrong: chiplets are the future, and Nvidia has to get there too. Nvidia didn't skip the chiplet investment, they just delayed it. And in this era of surging AI products, AMD's chiplet design is paying off - they can put together custom data center products that combine several of their technologies extremely quickly.
When it comes to backwards compatibility, Nvidia and Nintendo have likely invested huge sums to make it happen, and will probably have to do so again in five years. If their BC is emulation-driven, then Nvidia is developing the very software that would let Nintendo move to a different vendor in the future. AMD, meanwhile, gets that BC nearly for free, and has locked in the other two console makers for likely a couple more generations.
AMD is also innovating, with recent previews of Frame Gen technology that works in legacy games without patches. That's potentially a huge win for the PS6/next Xbox, allowing 120fps modes for everything - and potentially a major win for handheld PCs, whose value proposition is often about running last-gen games in your hand.
And AMD is likely to dominate the handheld PC space, not just because they have a top-tier PC CPU, but because, again, this is a place where their chiplet tech has huge potential to pay off: AMD can deliver customized APUs extremely quickly, with a design cost low enough that a custom APU is affordable even for products that don't sell millions of units.
It remains to be seen whether AMD can deliver on the potential of chiplets and catch up on features. It also remains to be seen how much further Nvidia can take DLSS, with Frame Gen and Ray Reconstruction being the obvious evolutions of the tech. Nvidia has a big roadblock ahead in its own move to chiplets, while AMD already has top-class machine learning hardware in their server offerings, and could catch up rapidly on the AI front if they decide to go that path.
May you live in interesting times!