The real issue is that AMD spent a long time barely holding on financially. The Bulldozer CPU architecture (and its successors) was disastrous for the company: they lost almost all of their market share in the (high-margin) server sector and could only hang on to a small part of the (low-margin) entry level of the PC space. The GPU business wasn't doing too badly at the time, but CPU sales have always been the core of AMD's revenue stream, so as a company they were struggling to make a profit.
What do you do when you're struggling to make a profit and there's no quick way to increase sales? You cut back on any expenses that don't have a direct path to profitability. That meant dropping exploratory R&D on long-term technologies like hardware ray tracing and AI. It also meant cancelling their planned "K12" ARM CPU core to focus on Zen instead (a sensible move in retrospect). AMD spent some very lean years working on a slim R&D budget that could only really justify straightforward technological advances, meaning new CPU and GPU architectures that do the same thing, but faster. They couldn't justify the spend on something like hardware ray tracing that isn't guaranteed to pay off.
Even their biggest innovation of the past few years was largely motivated by minimising R&D cost and risk. The reason they started using chiplets when Intel was still entirely focused on monolithic chips is that designing and taping out chips on leading-edge nodes costs a lot of money, money which AMD didn't have at the time. By moving to a chiplet approach, AMD could tape out just one chip on a leading-edge node, plus one I/O die on an older, cheaper node, and cover everything from entry-level desktops to 64-core servers. They were looking to do with one die what Intel were doing with 5 or 6, which they did quite successfully.
It was only really in 2019 that AMD's financials started to turn around. In 2019, AMD's revenue was $6.73 billion; by 2022 it had grown to $23.6 billion. The payoff from the Zen architecture was slow because, although home PC builders adopted Ryzen pretty quickly, it took time to convince OEMs, server and HPC customers, etc. to actually consider AMD chips again. They're now in a much better place financially, so they can start investing in more of that fundamental R&D again, but it takes a long time to spin up that kind of research. You have to hire experts in the area, you have to work out all the low-level fundamental details before you can start designing hardware, and even once you've designed the hardware it will be a year or two before it's in anyone's hands.
In terms of AI acceleration, I'd say AMD is actually in a pretty good place from a hardware point of view. They've been shipping HPC chips with matrix cores (basically the same thing as tensor cores) for several generations now, although initially they were more focused on higher-precision work for HPC applications (i.e. really good FP64 performance). For pure AI use cases they seem to have made significant improvements, and it's quite possible that AMD's new MI300X flat-out outperforms Nvidia's H100 with equivalently optimised software.
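To make "matrix cores" a bit more concrete: the operation they (and Nvidia's tensor cores) accelerate is essentially a small, tile-sized fused multiply-accumulate, D = A·B + C, done as a single hardware instruction rather than a long loop of scalar multiply-adds. Here's a rough numpy illustration of the semantics only; the 16×16 tile size and FP16-in/FP32-accumulate precisions are just typical examples I've picked, not the spec of any particular AMD or Nvidia part:

```python
import numpy as np

# What a matrix/tensor core conceptually does in one instruction:
# multiply small tiles of A and B (often FP16/BF16) and accumulate the
# result into a higher-precision tile (often FP32).
# Tile size and precisions here are illustrative, not any specific part's spec.
M = N = K = 16

A = np.random.rand(M, K).astype(np.float16)   # low-precision inputs
B = np.random.rand(K, N).astype(np.float16)
C = np.zeros((M, N), dtype=np.float32)        # higher-precision accumulator

# The whole tile FMA is one hardware operation; a plain shader core would
# need on the order of M*N*K separate multiply-adds to do the same work.
D = A.astype(np.float32) @ B.astype(np.float32) + C
```

The FP64-heavy HPC parts mentioned above just bias those tiles towards double precision rather than the low-precision formats AI workloads care about.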
On the consumer side, AMD have added matrix cores to their GPUs starting with the RDNA3 architecture, but as yet haven't been using them in games. They didn't really publish performance figures for them, but
this blog post states they can do 512 ops/clock/CU for FP16 and BF16. That works out to roughly 122 TFLOPS on the RX 7900 XTX, which is about the same as an RTX 4070, which is to say more than enough for something like DLSS. Again, there's a lack of software on AMD's side, which comes from them not having the head start that Nvidia had. I'd wager we'll see an AI-based version of FSR at some point in the next few years, though.
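For what it's worth, that ~122 TFLOPS figure is easy to sanity-check from the 512 ops/clock/CU number. A quick back-of-envelope sketch, where the 96 CUs and ~2.5 GHz boost clock of the 7900 XTX are my own assumed inputs rather than anything from the blog post:

```python
# Rough sanity check of the FP16/BF16 matrix throughput figure.
# Assumed inputs (mine, not from the quoted post): 96 CUs, ~2.5 GHz boost clock.
ops_per_clock_per_cu = 512      # FP16/BF16 ops per clock per CU (quoted figure)
num_cus = 96                    # RX 7900 XTX compute units (assumed)
boost_clock_hz = 2.5e9          # ~2.5 GHz boost clock (assumed)

tflops = ops_per_clock_per_cu * num_cus * boost_clock_hz / 1e12
print(f"{tflops:.0f} TFLOPS")   # ~123 TFLOPS, in line with the ~122 figure above
```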
On the ray tracing side of things, it's a trickier problem to solve. They added hardware triangle intersection testing pretty quickly, but that's a relatively simple bit of circuitry, so it was an easy win for them. I have no doubt they're working on hardware-accelerated BVH traversal, but it's not as easy a problem to solve in hardware as triangle intersection testing, and I'm betting Nvidia had been working on it for a
long time before actually launching Turing. I'd expect to see it in the next generation or two of AMD GPUs, at which point they will probably close the gap significantly.
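To give a sense of why BVH traversal is the harder part to put into fixed-function hardware: a ray-triangle test is a short, fixed sequence of arithmetic, whereas traversal is a data-dependent loop with a stack, where which nodes get visited depends on the ray and the scene. A minimal, purely illustrative sketch; the node layout and function names are mine, not how any real GPU does it:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Node:
    bmin: np.ndarray                            # AABB min corner
    bmax: np.ndarray                            # AABB max corner
    tris: list = field(default_factory=list)    # leaf: list of (v0, v1, v2)
    left: "Node" = None
    right: "Node" = None

def hit_aabb(orig, inv_dir, bmin, bmax):
    # Slab test: a handful of min/max ops -- cheap and branch-free.
    t0 = (bmin - orig) * inv_dir
    t1 = (bmax - orig) * inv_dir
    tnear = np.minimum(t0, t1).max()
    tfar = np.maximum(t0, t1).min()
    return tfar >= max(tnear, 0.0)

def hit_triangle(orig, d, v0, v1, v2):
    # Möller-Trumbore: a short, fixed sequence of arithmetic -- the part
    # that's already easy to turn into a small hardware block.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < 1e-8:
        return None
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = d.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > 1e-8 else None

def traverse(orig, d, root):
    # The traversal itself: a stack-driven loop whose control flow depends on
    # the ray and the scene. That divergence is what makes it much harder to
    # bake into fixed-function hardware than the two tests above.
    inv_dir = 1.0 / d
    best = None
    stack = [root]
    while stack:
        node = stack.pop()
        if not hit_aabb(orig, inv_dir, node.bmin, node.bmax):
            continue                            # prune the whole subtree
        if node.left is None:                   # leaf: test its triangles
            for v0, v1, v2 in node.tris:
                t = hit_triangle(orig, d, v0, v1, v2)
                if t is not None and (best is None or t < best):
                    best = t
        else:                                   # inner node: keep descending
            stack.append(node.left)
            stack.append(node.right)
    return best
```

As I understand it, AMD's current approach accelerates the intersection tests but leaves that traversal loop to shader code, which is a big part of the gap to Nvidia's approach of handling the traversal in dedicated hardware as well.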
TLDR: AMD had no money for ages so couldn't afford R&D on things like RT and AI. They now have money, so can fund this R&D, but it takes time.