Surfel-based global illumination on the web

(juretriglav.si)

67 points | by vmg12 22 hours ago

6 comments

  • rendaw 12 hours ago
    I've been playing around with reviving radiosity for incremental GI in low-poly scenes, and this sounds very similar (and probably much better). I put a camera on each lightmap texel, rendered the scene, then summed the pixels (roughly) in the render to get the light. I chose the "slow" approach, where lighting took several seconds but ran idly, and once the lightmap reached a certain amount of stability I'd stop the light calculations until the scene changed.
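
    A rough sketch of that per-texel gather loop, in TypeScript; gatherRadiance is a hypothetical stand-in for "render the scene from this texel and (roughly) sum the pixels", not anything from the article:

      type Vec3 = [number, number, number];
      interface Texel { worldPos: Vec3; normal: Vec3; light: Vec3; }

      function gatherRadiance(_pos: Vec3, _normal: Vec3): Vec3 {
        // Stub: replace with "render the hemisphere from this point and
        // sum the resulting pixels".
        return [0, 0, 0];
      }

      // One idle pass over the lightmap; returns true once the largest
      // per-texel change drops below the threshold, so the caller can stop
      // scheduling passes until the scene changes.
      function idlePass(lightmap: Texel[], threshold = 1e-3): boolean {
        let maxDelta = 0;
        for (const t of lightmap) {
          const incoming = gatherRadiance(t.worldPos, t.normal);
          for (let c = 0; c < 3; c++) {
            maxDelta = Math.max(maxDelta, Math.abs(incoming[c] - t.light[c]));
          }
          t.light = incoming;
        }
        return maxDelta < threshold;
      }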

    It sounds like the advantages here are:

    - Optimized sampling, rather than just every lightmap texel. My idea was to tie the lightmap to LOD, but I feel like this is much smarter.

    - Optimized light accumulation, dedicating more resolution to high-light areas to reduce noise (toy sketch after this list)

    - A seemingly more advanced "stability" calculation
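
    A toy version of the light-accumulation point, splitting a fixed per-frame sample budget in proportion to brightness; the function and names are made up for illustration, not from the article:

      // Toy allocation: brighter (noisier) cells get a larger share of a
      // fixed per-frame sample budget, with a floor of one sample each.
      function allocateSamples(luminance: number[], budget: number): number[] {
        const total = luminance.reduce((a, b) => a + b, 0) || 1;
        return luminance.map((l) => Math.max(1, Math.round((l / total) * budget)));
      }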

    Things that are the same:

    - Lighting is still incremental: when they change, e.g., the light direction, even with the optimizations there's still some ghost light that slowly moves over, so I'm not sure how this would work in really dynamic situations (car traffic). The usual temporal blend behind that lag is sketched below.
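
    A minimal exponential moving average, assuming that's the accumulation scheme in play (alpha = 0.05 is illustrative, not a value from the article):

      type Vec3 = [number, number, number];

      // Old light decays by (1 - alpha) per frame, so a sudden scene change
      // fades in over roughly 1/alpha frames - the "ghost light" lag.
      function blendFrame(prev: Vec3, current: Vec3, alpha = 0.05): Vec3 {
        return [
          prev[0] + (current[0] - prev[0]) * alpha,
          prev[1] + (current[1] - prev[1]) * alpha,
          prev[2] + (current[2] - prev[2]) * alpha,
        ];
      }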

    Things that are different:

    - It looks like the light data is cached relative to the current view. I store light for the whole scene, so there's no light fluctuation during camera movement/rotation. I think the tradeoff is that view-relative caching is probably more optimized (my lightmap's light detail is view invariant, while surfel density adapts to the camera) - I think that's mostly important for HD-style assets.

    Limitations of both, IIUC:

    - Reflections, water, etc.: radiosity handles diffuse lighting only. I think you can combine it with other hacks like screen-space reflections, though

  • cadamsdotcom 13 hours ago
    Awesome.

    > The fact that you can get physically-plausible light bounce and temporal stability all running in real-time on a web page... on a phone... feels like we're actually in the future.

    Even as some things about the open web are in trouble, others are thriving! This was such a great in-depth read; I learned a ton and got to see great graphics and play with lots of knobs. A+ :)

  • yunnpp 2 hours ago
    Great walk-through, thanks. Will have to give this a try.
  • ivanjermakov 9 hours ago
    Sad to see that most GI techniques require temporal caching and denoising. We might never go back to crisp, noise-free, instant graphics.
    • yunnpp 2 hours ago
      The noise from this technique would come from moving lights and world-space disocclusions. Lights don't move erratically in most scenes; objects, maybe. But even then, this handles diffuse illumination only, which by its nature has low-frequency noise. You won't get noise by shaking the camera violently as in an FPS, for example, which you would from modern ray/path-traced pipelines and which I assume is what you're complaining about. So on the list of temporal techniques, this one probably makes the most graceful noise/lag trade-off.

      "Crisp, noise-free, instant graphics" that were also incorrect and did not communicate mood and depth the way GI does. I see no reason to go back.

  • Panzerschrek 12 hours ago
    Why not use an approach with light probes instead? They can be placed statically (or changed only rarely), and usually far fewer probes are required compared to surfels.
    • juretriglav 11 hours ago
      The AAA games that use surfels combine them with probes to get immediate light information while the surfels haven't accumulated enough yet (the black spots when the camera moves), since surfel generation is driven by screen-space placement. Surfels have better light-leak properties, and their dynamic resolution (more of them when you get close to a surface) provides higher-quality lighting, which is why they are preferred/the first option. The issue with a surfel + light probe system on the web, specifically, is that you run out of storage buffers: the current system is right at the limit, which is 10 storage buffers for the integrator pass.

      I think there's some discussion about raising that limit on adapters that support it, but right now we're stuck at 10. It would be SUPER beneficial to raise that limit for a wide variety of projects. Two that I'm working on now are WebGPU implementations of Alber's Markov Chain Path Guiding paper and the ReSTIR PT Enhanced paper, and both are similarly handicapped by the storage buffer limit.
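
      For reference, requesting whatever the adapter actually exposes looks like this; requiredLimits can only ask for what adapter.limits already reports, so it can't get past the default ceiling on adapters that don't support more:

        // Ask for the adapter's real storage-buffer limit instead of the
        // WebGPU default. Throws at requestDevice if the limit is invalid.
        const adapter = await navigator.gpu.requestAdapter();
        if (!adapter) throw new Error("WebGPU not available");
        const device = await adapter.requestDevice({
          requiredLimits: {
            maxStorageBuffersPerShaderStage:
              adapter.limits.maxStorageBuffersPerShaderStage,
          },
        });
        // The created device reflects the granted limit.
        console.log(device.limits.maxStorageBuffersPerShaderStage);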

  • altmanaltman 10 hours ago
    Demo page doesn't seem to be working for me. Getting this error:

    Something went wrong: Cannot read properties of null (reading 'isInterleavedBufferAttribute')