https://github.com/KhronosGroup/Vulkan-Docs/blob/main/proposals/VK_EXT_shader_object.adoc
New Vulkan extension just dropped. The interesting bit here is that Nintendo appears to be a significant contributor.
It's not immediately clear to me that this would be directly related to new hardware, but that said, I'm certainly not an expert on graphics APIs. It seems like this could have some uses in emulation (something we know Nintendo uses Vulkan for in their N64 emulator), as well as in generally increasing flexibility and reducing CPU overhead. Also, if I'm reading this right, it might reduce the storage cost of shipping precompiled shaders for Vulkan games.
This is very interesting indeed. I don't think it pertains to new hardware at all (ray tracing not being supported yet would certainly be a limiting factor), but it's very interesting not just that Nintendo is contributing to Vulkan directly, but that they're contributing a major feature which fundamentally changes one of the key paradigms the API is built on.
I'm definitely no expert on graphics APIs either, but I would be surprised if this relates to emulation or backwards compatibility (BC) in any way. From an emulation perspective, it would absolutely make sense that it could allow the dynamism supported by the graphics API being emulated to be better mapped to Vulkan. However, the only prior Nintendo hardware which would seem to warrant this would be the Wii U, as it's the only console other than the Switch which supports programmable shaders. In my understanding (which may absolutely be wrong), emulation of fixed-function hardware like the GC or Wii would map well to the existing pipeline paradigm. And for reasons that should be obvious, I would expect Wii U emulation to be very far down Nintendo's priority list.
In terms of BC, I can't see any good reason to involve Vulkan in Nintendo's BC efforts. The benefit of Vulkan is that it is an open, general purpose, cross-platform API, and the benefit of contributing to it is that you can then depend on that contribution being supported across a variety of hardware and driver implementations. The implementation of backwards compatibility is almost the opposite of this; it is a very specific problem, for which a single implementation is made, to run on a single target architecture, and the entire stack can be (and probably is) proprietary. Nintendo and Nvidia can implement this in any way they wish without any regard for anyone else, so why would they constrain themselves by implementing it on top of a cross-platform API?
My guess is that Nintendo's involvement in Vulkan, and their usage of it in emulators, is a kind of future-proofing or insurance for future hardware. In the case of the emulators, there is probably little to no benefit from using NVN over Vulkan, but if they ever change hardware vendor in the future, then using Vulkan at least means they avoid having to re-implement their emulators, as they had to when moving from Wii U to Switch. It's very unlikely they'll actually use Vulkan for games, but if they do change hardware vendor, they will have to work with that vendor to implement a new graphics API for the new hardware. Using Vulkan as the starting point for such an API would make a lot of sense, as it's an API Nintendo is already familiar with, and one any hardware vendor will also be familiar with and have a mature driver implementation for. Proposing improvements to Vulkan like the one we see here could simply be a way for Nintendo to push it closer to their "ideal" graphics API, and the closer it is to that, the less friction there would be in implementing any new custom graphics API in the future.
The post discusses that; I just don't see where it says anybody's pipeline is actually going to perform better, just that in the best case it will be the same, and that in general the elimination of CPU overhead should offset a non-zero loss of performance due to dynamism.
I'm not sure there would be a meaningful loss in performance due to dynamism. Again, I'm anything but an expert here, so I'm happy to be contradicted, but my expectation would be that the main performance overhead of dynamism in an API like this is policing that dynamism, i.e. performing validation at run time to ensure that a particular dynamic combination is a valid one. In this case that would manifest as vkCmdBindShadersEXT validating that the shader objects provided are compatible with each other. However, the proposal explicitly states that this is not the case, and that it's the application's responsibility to ensure the set of shaders is valid.
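Since that validation falls on the application, one plausible approach (entirely hypothetical; none of these names come from the extension) is for the app to track each stage's interface itself and check adjacent stages before binding them together:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical application-side bookkeeping: the extension does not
 * validate that bound shader objects are mutually compatible, so an
 * app might record each stage's interface itself. These fields are
 * invented for illustration. */
typedef struct {
    unsigned stage;         /* e.g. 0 = vertex, 1 = fragment */
    unsigned inputs_hash;   /* hash of the stage's input interface */
    unsigned outputs_hash;  /* hash of the stage's output interface */
} ShaderInfo;

/* Returns true if each stage's outputs match the next stage's inputs,
 * i.e. the kind of check the app would run before binding the set. */
bool stages_compatible(const ShaderInfo *stages, size_t count) {
    for (size_t i = 0; i + 1 < count; i++) {
        if (stages[i].outputs_hash != stages[i + 1].inputs_hash)
            return false;
    }
    return true;
}
```

An app could run a check like this once when assembling a set of shader objects, rather than paying for driver-side validation on every bind, which is presumably the point of pushing the responsibility out of the API.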
I could also see the dynamism losing some performance compared to a pipelined approach, where shader compilation optimisations can be made because a full pipeline is bound ahead of time. However, the option of linking shader objects would appear to give developers the same potential performance improvement in cases where it's found to be meaningful, while being much more flexible.
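For reference, linked creation looks roughly like the sketch below. The declarations are trimmed stand-ins so this compiles without the Vulkan headers; the names and flag follow the VK_EXT_shader_object proposal, but treat the details as illustrative rather than authoritative:

```c
#include <stddef.h>
#include <stdint.h>

/* Trimmed stand-ins for the real Vulkan definitions (illustrative only;
 * the real ones live in <vulkan/vulkan.h>). */
typedef uint32_t VkFlags;
#define VK_SHADER_CREATE_LINK_STAGE_BIT_EXT 0x00000001u
#define VK_SHADER_STAGE_VERTEX_BIT          0x00000001u
#define VK_SHADER_STAGE_FRAGMENT_BIT        0x00000010u

typedef struct {
    VkFlags flags;     /* e.g. VK_SHADER_CREATE_LINK_STAGE_BIT_EXT */
    VkFlags stage;     /* the stage this shader object is created for */
    VkFlags nextStage; /* stages allowed to follow this one */
} ShaderCreateInfoSketch; /* heavily trimmed VkShaderCreateInfoEXT */

/* A vertex/fragment pair created together with the link flag: the
 * implementation may optimise across the pair, much as a monolithic
 * pipeline would, yet they remain separate objects at bind time. */
static const ShaderCreateInfoSketch kLinkedPair[2] = {
    { VK_SHADER_CREATE_LINK_STAGE_BIT_EXT,
      VK_SHADER_STAGE_VERTEX_BIT, VK_SHADER_STAGE_FRAGMENT_BIT },
    { VK_SHADER_CREATE_LINK_STAGE_BIT_EXT,
      VK_SHADER_STAGE_FRAGMENT_BIT, 0 },
};

/* Returns 1 if every create info in the array requests linking. */
int all_linked(const ShaderCreateInfoSketch *infos, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (!(infos[i].flags & VK_SHADER_CREATE_LINK_STAGE_BIT_EXT))
            return 0;
    return 1;
}
```

With real Vulkan the array would be passed to vkCreateShadersEXT; shaders created without the flag simply stay unlinked, so a developer can opt into linking only where profiling shows it matters.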