Well, I know for one that MS kind of jumped the gun on DX12 and ended up missing a couple of really big features that came to Metal and Vulkan. For example, Vulkan's subpasses and Metal's function constants (Vulkan's analog is specialization constants) are both missing from DX12, and both can provide a decent performance uplift when used effectively: subpasses let GPU work be scheduled in larger and more tightly synchronized batches, and function constants enable more optimization during shader linking while side-stepping uniform registers and uniform buffer locking. At least DX12 got push constants, which help a lot, but those cover a slightly different use case (small, frequently-changing per-draw data rather than compile-time specialization).
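To make the function-constants point concrete, here's a minimal Metal Shading Language sketch (my own toy example, not from any real codebase). The constant is resolved when the pipeline is specialized on the host via MTLFunctionConstantValues, so the compiler folds it in and deletes the dead branch outright, instead of the shader fetching a uniform and branching at runtime:

```cpp
#include <metal_stdlib>
using namespace metal;

// Resolved at pipeline-creation time, not per-draw. The dead branch
// is eliminated during specialization: no uniform fetch, no divergence.
constant bool use_fancy_fog [[function_constant(0)]];

fragment float4 shade(float4 pos [[position]],
                      constant float4 &fog_color [[buffer(0)]])
{
    float4 color = float4(pos.z, pos.z, pos.z, 1.0);
    if (use_fancy_fog) {  // compiled out entirely when the constant is false
        color = mix(color, fog_color, saturate(pos.z));
    }
    return color;
}
```

In DX12 the closest equivalent is either a root-constant branch (paid at runtime) or compiling a separate shader permutation per variant, which is exactly the combinatorial-explosion problem function constants were designed to avoid.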
On the RT and neural side of things, DF just touched on this: Nvidia's architecture actually allows much more direct mixing of compute, RT, and neural processing within a single shader, but the APIs gated that off because other GPUs couldn't easily conform to it. That left the tensor cores mostly accessible only through DLSS, or through discrete workloads scheduled via CUDA or DirectML.
A custom API like NVN2 would have no such problem. In fact, according to those at CES who got the hands-on demo of neural rendering (including DF), part of the magic there is a proposed expansion of DXR that lifts this limitation, and that expansion is what enabled the "neural materials" demo, which essentially embeds an ML model inline in the DXR material shader.
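To give a feel for what "embedding an ML model inline in a material shader" means, here's a hypothetical sketch in plain, current HLSL (all names and the layer sizes are mine; the actual demo presumably relies on the proposed DXR extensions to run the math on the tensor hardware rather than on scalar ALUs like this does). A tiny two-layer MLP, with trained weights in a buffer, is evaluated directly inside a closest-hit shader instead of sampling a conventional material:

```hlsl
// Hypothetical sketch: a tiny 4 -> 8 -> 3 MLP evaluated inline in a
// DXR closest-hit shader, standing in for a learned material response.
StructuredBuffer<float> gWeights : register(t1); // trained weights, row-major

float relu(float x) { return max(x, 0.0f); }

float3 EvalNeuralMaterial(float4 x)
{
    float h[8];
    uint w = 0;
    for (uint i = 0; i < 8; ++i)      // hidden layer: bias + 4 inputs each
    {
        float a = gWeights[w++];
        for (uint j = 0; j < 4; ++j)
            a += gWeights[w++] * x[j];
        h[i] = relu(a);
    }
    float3 outc = 0;
    for (uint o = 0; o < 3; ++o)      // output layer: bias + 8 hidden each
    {
        float a = gWeights[w++];
        for (uint j = 0; j < 8; ++j)
            a += gWeights[w++] * h[j];
        outc[o] = a;
    }
    return saturate(outc);
}

struct Payload { float3 color; };

[shader("closesthit")]
void NeuralMaterialCH(inout Payload p,
                      BuiltInTriangleIntersectionAttributes attr)
{
    // Feed hit info (barycentrics + ray length) into the network.
    float4 x = float4(attr.barycentrics, RayTCurrent(), 1.0f);
    p.color = EvalNeuralMaterial(x);
}
```

The point of the proposed DXR expansion is that those inner loops could be dispatched to the neural units mid-shader instead of burning general-purpose ALU cycles, which is the compute/RT/neural mixing the current APIs don't expose.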