- Default Kernel
- One Dimension Kernel
- Two Dimensions Kernel
- Passing values
- Project 1
- Project 2
GPU cards have become more and more flexible for graphics programmers. They can already use shaders (vertex, fragment/pixel...), which give them freedom over parts of the rendering pipeline. However, they were still unable to harness the power of the Graphics Processing Unit for general-purpose computing. Compute shaders were then introduced to make it possible to move such computation to the GPU. They are programs that run on the graphics card and do not depend on the main rendering pipeline. You can finally do GPGPU (general-purpose computing on GPU) to boost your application: for instance, running accurate physics simulations or heavy algorithms, or controlling a huge number of meshes that would otherwise break the targeted framerate.
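To make this concrete, here is a minimal sketch of a Unity compute shader. The kernel name `CSMain` and buffer name `Result` are illustrative choices, not taken from this tutorial; later sections introduce the tutorial's own kernels.

```hlsl
// Minimal compute shader sketch (illustrative names, not the tutorial's own kernels).
// Declares which function is a kernel entry point that C# can dispatch.
#pragma kernel CSMain

// A read/write buffer shared with the C# side via a ComputeBuffer.
RWStructuredBuffer<float> Result;

// Each thread group runs 64 threads along the x axis.
[numthreads(64, 1, 1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Each thread writes its own element, independently of the rendering pipeline.
    Result[id.x] = id.x * 2.0;
}
```

On the C# side, such a kernel would be launched with `ComputeShader.Dispatch`, passing the kernel index from `FindKernel("CSMain")` and the number of thread groups to run.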
Although you can find tutorials explaining the basics of using compute shaders with Unity, it can be hard to understand in practice how to use them for games or other applications. In this tutorial, I will explain the theory behind compute shaders and show how to implement them in Unity. Once you wrap your head around them, they become a powerful and flexible tool usable in many situations. I often work on artistic installations, and compute shaders are a way for me to unlock the GPU's power while using Unity.
Each part of this tutorial has a dedicated scene. You can download the Unity project from my GitHub here. I would like to mention that while learning about compute shaders, I found a lot of resources on @scnsh's website. Many thanks to Julien for reading my draft and giving me feedback when needed.