It's neat to see the new primitives getting picked up, but at the same time the original problem doesn't seem to motivate it.
Instead of a single neural model per material, why not pass in a small vector summarizing the material's properties alongside the UV, normal, etc. inputs for the pixel? A single model for all materials would presumably be more compactly represented than one per material type: there is an opportunity to compress the properties the materials have in common.
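A rough NumPy sketch of the idea, one shared MLP evaluated per pixel, conditioned on a small per-material descriptor vector instead of per-material weights. All names, sizes, and the random weights here are made up for illustration; a real neural material would be trained and run in a shader.

```python
import numpy as np

rng = np.random.default_rng(0)

MAT_DIM = 8    # hypothetical learned material-descriptor size
PIX_DIM = 6    # per-pixel inputs: u, v, normal (3 components), view angle
HIDDEN = 32

# One set of weights shared by *all* materials.
W1 = rng.standard_normal((HIDDEN, MAT_DIM + PIX_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((3, HIDDEN))   # output: RGB
b2 = np.zeros(3)

def shade(material_vec, pixel_vec):
    """One matrix-vector pass of the shared model for a single pixel."""
    x = np.concatenate([material_vec, pixel_vec])
    h = np.maximum(W1 @ x + b1, 0.0)    # ReLU hidden layer
    return W2 @ h + b2

# Two materials share the same weights; only the descriptor differs.
gold   = rng.standard_normal(MAT_DIM)
rubber = rng.standard_normal(MAT_DIM)
px     = rng.standard_normal(PIX_DIM)
print(shade(gold, px))
print(shade(rubber, px))
```

The compression argument is that `W1`/`W2` are stored once, while each material only adds a `MAT_DIM`-sized descriptor.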
They talk about three applications for the coming cooperative vectors / linear algebra API: Neural Materials, Neural Radiance Caching, Neural Texture Compression.
But what about the already much more popular ML-based applications: upscaling, frame generation, and denoising? These are currently vendor specific; DLSS, for example, runs only on Nvidia GPUs. Could these run on arbitrary DirectX 12 GPUs in the future by using the new API?
Granted, GPU manufacturers would likely not be keen on their tech running on other vendors' hardware, but engine developers like Epic would clearly want to use it if they come up with their own ML upscalers, denoisers, etc. in the future.
Upscaling and related applications are already supported by cooperative matrix ops. Those shaders/kernels run in workgroups that operate on a tile of pixels at a time, so you naturally get matrix-matrix operations. And yes, at least in Vulkan, there has been cross-vendor support for those for a while now.
Neural materials and similar applications often operate on a single datum at a time, like in a pixel shader or a raytracing shader.
Yes, the hardware still groups the threads of a pixel shader together, but there is inherently more divergence. And so the natural program representation tends to have matrix-vector operations instead of matrix-matrix operations.
That's where cooperative vectors come in.
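A rough NumPy sketch of that distinction (shapes and values made up): a workgroup covering a tile can batch its pixels' inputs into one matrix and do a single matmul, while a divergent per-pixel or per-ray thread only has its own input vector, leaving a matvec as the natural op.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 8))   # one layer's weights, shared either way

# Matrix-matrix (cooperative matrix style): a workgroup covering an 8x8 tile
# batches its 64 pixels' 8-feature inputs into one matrix for a single matmul.
tile_inputs = rng.standard_normal((8, 64))   # 64 pixels, 8 features each
tile_out = W @ tile_inputs                   # (16, 64)

# Matrix-vector (cooperative vector style): a divergent ray-tracing or pixel
# shader thread only has its own input vector, so the natural op is a matvec.
one_pixel = tile_inputs[:, 0]
px_out = W @ one_pixel                       # (16,)

# Same math, different batching: the matvec matches one column of the matmul.
assert np.allclose(tile_out[:, 0], px_out)
```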