Mesh shaders are actually super easy to explain. A mesh shader is a program on the GPU that makes meshes - a mesh being a 3D object. That's it. If you don't get in the weeds with GPUs, you may be shocked to learn that 3D graphics cards couldn't make 3D objects.
But they couldn't! That's what makes mesh shaders revolutionary, and why it's so hard to explain how they are revolutionary. You need to understand how GPUs work without them to understand what mesh shaders are replacing. Let me give it a shot.
Here is an extremely simplified view of what rendering looks like without mesh shaders. I'm skipping a lot, but it will do for this conversation.
- CPU sends a 3D mesh - a collection of triangles - to the GPU
- A vertex shader manipulates that 3D mesh, by twisting the triangles in arbitrary ways
- Occlusion culling deletes triangles which cannot be seen
- A pixel shader applies texture and lighting to the 3D object
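The steps above can be sketched as a toy CPU-side analogy. To be clear: the function names, the mesh data, and the "facing the camera" test here are all invented for illustration - a real pipeline runs in fixed stages on GPU hardware.

```python
# Toy CPU-side analogy of the classic (pre-mesh-shader) pipeline.
# A triangle is three (x, y, z) vertices; the mesh is a list of triangles.

def vertex_shade(mesh, transform):
    """Step 2: move every vertex in arbitrary ways (here, a simple function)."""
    return [[transform(v) for v in tri] for tri in mesh]

def cull(mesh, is_visible):
    """Step 3: delete triangles that cannot be seen."""
    return [tri for tri in mesh if is_visible(tri)]

def shade(mesh):
    """Step 4: 'paint' each surviving triangle (stand-in for texture/lighting)."""
    return [("painted", tri) for tri in mesh]

# A two-triangle mesh; one faces the camera (z >= 0), one faces away.
mesh = [
    [(0, 0, 1), (1, 0, 1), (0, 1, 1)],
    [(0, 0, -1), (1, 0, -1), (0, 1, -1)],
]
moved = vertex_shade(mesh, lambda v: (v[0] + 1, v[1], v[2]))   # shift right
visible = cull(moved, lambda tri: all(v[2] >= 0 for v in tri)) # crude facing test
framed = shade(visible)
print(len(framed))  # 1 - only the front-facing triangle survives culling
```

Note the fixed order: the vertex shader ran on *both* triangles, even the one that culling then threw away. That waste is exactly what the limitations below are about.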
Imagine a 3D wireframe of Link, in a T-pose. That gets uploaded to the GPU, but Link is supposed to be facing away from the screen. The vertex shader is able to move all those triangles in space (as long as they all stay connected) until the model is flipped away. Then it adjusts the position and angles of those triangles until the model begins to twist into a running pose. Then, because the front of the model is now blocked by the back, occlusion culling deletes Link's face and chest. The remaining triangles get painted and cel shaded. Easy peasy.
All of this seems really sane, and it isn't a bad model, but there are
tons of problems and limitations. Let's pick out a few of them.
- The GPU only understands built in mesh formats. Does your engine use a custom format? Come up with a clever way to compress assets? Too bad, you need to decompress them and convert them to a standard format before the GPU can use them
- Vertex shaders can twist your triangles all they want, but they can't add or delete them. Did the camera zoom in close? Too bad, you can't add detail without uploading a new model. Zoom way out? Too bad, you have to pay the cost of all those triangles even if the detail is invisible.
- Occlusion culling happens after vertex shading. Did you run a really expensive vertex shader to animate Link's face? That sucks, because his face isn't actually visible at this angle.
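That first limitation is worth making concrete. Here's a sketch of the CPU-side decompression step the classic pipeline forces on you - the compressed format is invented for illustration (positions quantized down to 8-bit integers), not any real engine's format:

```python
# Sketch of limitation #1: before the GPU can draw anything, a custom
# compressed format must be decompressed on the CPU into plain triangles.
# The "format" is made up: positions stored as 8-bit ints in [0, 255].

SCALE = 255

compressed = [
    [(0, 0, 255), (255, 0, 255), (0, 255, 255)],  # one quantized triangle
]

def decompress(mesh):
    """Expand quantized ints back into the floats a standard vertex buffer holds."""
    return [[(x / SCALE, y / SCALE, z / SCALE) for (x, y, z) in tri]
            for tri in mesh]

triangles = decompress(compressed)
print(triangles[0][1])  # (1.0, 0.0, 1.0) - now in a format the GPU accepts
```

Every byte of that clever compression gets undone before upload, and the full-fat version is what sits in memory.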
Mesh shaders replace vertex shaders, without the vertex shader limitations. Mesh shaders can consume any data they want, regardless of format, and they can output an arbitrary number of triangles, which doesn't have to be consistent over time.
The CPU can send over its weird, custom 3D format, and the mesh shader can read it and generate 3D objects that the rest of the pipeline can use. The mesh shader can do its own occlusion culling, never even generating triangles that aren't visible - that means not only do you not need to animate invisible triangles, they don't even take up memory in the first place. And mesh shaders aren't stuck in a rigid pipeline where they have to do these steps in order. Mesh shaders can be decompressing one chunk of data while occlusion culling another and animating a third.
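For contrast with the classic-pipeline sketch earlier, here's the same idea in mesh-shader shape - again a toy CPU-side analogy with invented names and data, not real API. The key difference: it reads the custom quantized format directly and culls *before* any triangle is ever materialized.

```python
# Toy "mesh shader"-style function: consumes a custom quantized format
# (8-bit ints in [0, 255], an invented compression scheme) and culls
# before emitting, so rejected triangles never exist as decompressed data.

SCALE = 255

def mesh_shader(compressed, near_z):
    out = []
    for tri in compressed:
        # Cull first: a triangle entirely behind the near plane is skipped
        # without ever being decompressed or allocated.
        if all(z < near_z * SCALE for (_, _, z) in tri):
            continue
        out.append([(x / SCALE, y / SCALE, z / SCALE) for (x, y, z) in tri])
    return out

compressed = [
    [(0, 0, 255), (255, 0, 255), (0, 255, 255)],  # in front of the camera
    [(0, 0, 0), (255, 0, 0), (0, 255, 0)],        # behind it, in this toy test
]
visible = mesh_shader(compressed, near_z=0.5)
print(len(visible))  # 1 - the culled triangle was never even decompressed
```

Also note the output count is just "however many survived" - nothing forces it to match the input count, which is the arbitrary-triangle-count freedom described above.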
Mesh shaders are faster, even when they're doing exactly the same thing that vertex shaders did. The vertex shader programming model predates the modern GPU, and the two have evolved away from each other. The vertex shader model is a long pipeline with lots of steps, one after the other. Modern GPU hardware is a highly parallel system, designed to do lots of work simultaneously. Even the parts of the vertex model that do parallelize well want the data shaped in a totally different way from the rest of the GPU.
This makes it really hard for GPU hardware to execute traditional vertex operations efficiently. Mesh shaders are designed to reflect the way GPUs are built under the hood. For the most part, if you just rebuild your classic vertex pipeline in mesh shaders, they should run faster, even if you don't take advantage of the features that mesh shaders offer.
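One concrete flavor of that "data shaped differently" problem is the array-of-structs vs struct-of-arrays layout question. This is a CPU-side sketch of the general idea (the layouts and numbers are illustrative, not how any specific GPU stores vertices):

```python
# Same three vertices, two layouts.
# Array-of-structs: one record per vertex (how a classic vertex stream reads).
aos = [
    {"x": 0.0, "y": 0.0, "z": 1.0},
    {"x": 1.0, "y": 0.0, "z": 1.0},
    {"x": 0.0, "y": 1.0, "z": 1.0},
]

# Struct-of-arrays: one array per attribute - all the x's sit contiguously,
# which is the shape wide parallel hardware prefers to sweep through.
soa = {
    "x": [v["x"] for v in aos],
    "y": [v["y"] for v in aos],
    "z": [v["z"] for v in aos],
}

# Shifting every x is now one contiguous pass over a single array:
soa["x"] = [x + 1.0 for x in soa["x"]]
print(soa["x"])  # [1.0, 2.0, 1.0]
```

Mesh shaders let the programmer pick layouts like this to match the hardware, instead of inheriting whatever shape the old pipeline mandated.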
There is a big caveat. Mesh shaders are a much more dramatic change to rendering engines than, say, adding DLSS support. And mesh shading hardware is pretty recent. The PS5 has a custom solution that isn't quite like the others, and AMD graphics cards have only had it since 2020. You can expect non-cross-gen Nintendo games to take advantage of mesh shaders, especially as the engine matures. But as long as engines are supporting older hardware - and that includes the base Switch - it won't be surprising if mesh shader support is minor at best.