In Halide, the concept was great, yet the hard problems of kernel development were just moved to the "scheduling" side, i.e. determining the tiling/vectorization/parallelization for the kernel runs.
Love it. I've been using cudarc lately; would love to try this since it looks like it can share data structures between host and device (?). I infer that this is a higher-level abstraction.
Very interesting project! I am wondering how it compares against OpenCL, which I think adopts the same fundamental idea (write once, run everywhere)? Is it about CubeCL's internal optimization for Rust that happens at compile time?
This appears to be single source which would make it similar to SYCL.
Given that it can target WGPU, I'm really wondering why OpenCL isn't included as a backend. One of my biggest complaints about GPGPU stuff is that so many of the solutions are GPU-only, and often target only the vendor compute APIs (CUDA, ROCm), which have much narrower ecosystem support (versus an older core Vulkan profile, for example).
It's desirable to be able to target the CPU for compatibility and debugging, and also because it can be nice to have a single solution for parallelizing all your data-heavy work. The latter reduces mental overhead and permits more code reuse.
A lot of things happen at compile time, but you can also execute arbitrary code in your kernel that runs at compile time, similar to generics, but with more flexibility. It's very natural to branch on a comptime config to select an algorithm.
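For anyone who hasn't seen the pattern: here's a minimal sketch of the idea in plain Rust, using a const generic as a stand-in for a comptime config (an analogy only, not CubeCL's actual API). Each instantiation is compiled separately, so the branch condition is a constant and the untaken path can be eliminated:

    // Plain-Rust analogy for comptime branching (not CubeCL's API):
    // UNROLLED acts like a comptime config selecting an algorithm.
    fn sum<const UNROLLED: bool>(data: &[f32]) -> f32 {
        if UNROLLED {
            // "Fast" variant: process 4 elements per step, then the tail.
            let body: f32 = data
                .chunks_exact(4)
                .map(|c| c[0] + c[1] + c[2] + c[3])
                .sum();
            let tail: f32 = data.chunks_exact(4).remainder().iter().sum();
            body + tail
        } else {
            // Simple fallback variant.
            data.iter().sum()
        }
    }

    fn main() {
        let data = vec![1.0_f32; 10];
        // Two distinct specializations; in each, the dead branch is
        // optimized away because the condition is known at compile time.
        assert_eq!(sum::<true>(&data), sum::<false>(&data));
    }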
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine.