URAvatar: Universal Relightable Gaussian Codec Avatars

(junxuan-li.github.io)

22 points | by mentalgear 4 hours ago

4 comments

  • dwallin 6 minutes ago
    Given the complete lack of any actual details about performance, I would hazard a guess that this approach is barely real-time, requires top-end hardware, and/or delivers an unimpressive FPS. I would love to see more details, though.
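
    The project page doesn't publish frame timings, so this is pure speculation. If code ever lands, a minimal harness like the following would settle it (render_frame here is a hypothetical stand-in for whatever call draws one avatar frame):

      import time

      def measure_fps(render_frame, n_frames=300):
          # Warm up so one-time costs (shader compiles, allocations)
          # don't skew the measurement.
          for _ in range(10):
              render_frame()
          start = time.perf_counter()
          for _ in range(n_frames):
              render_frame()
          elapsed = time.perf_counter() - start
          return n_frames / elapsed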
  • michaelt 8 minutes ago
    Those demo videos look great! Does anyone know how this compares to the state of the art in generating realistic, relightable models more broadly? For example, for video game assets?

    I'm aware of traditional techniques like photogrammetry, which is neat, but the lighting always looks a bit off to me.
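
    My understanding is that it looks off because photogrammetry bakes the capture-time illumination into the textures, so re-lighting the asset double-counts the shading. A toy Lambertian sketch of the problem (all values made up):

      import numpy as np

      def lambert(normal, light_dir):
          # Clamped Lambertian shading term max(0, n·l).
          return max(0.0, float(np.dot(normal, light_dir)))

      albedo = np.array([0.8, 0.5, 0.4])     # true diffuse reflectance
      n = np.array([0.0, 0.0, 1.0])          # surface normal
      l_capture = np.array([0.6, 0.0, 0.8])  # light direction during capture
      l_new = np.array([-0.6, 0.0, 0.8])     # light direction at render time

      # Photogrammetry bakes the capture shading into the texture:
      baked_texture = albedo * lambert(n, l_capture)

      # Re-shading the baked texture double-counts illumination...
      relit_naive = baked_texture * lambert(n, l_new)
      # ...whereas a relightable model shades the recovered albedo directly:
      relit_correct = albedo * lambert(n, l_new)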

  • mentalgear 26 minutes ago
    With the computational efficiency of Gaussian splatting, this could be ground-breaking for photorealistic avatars, possibly driven by LLMs and generative audio.
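
    For context, here's a rough sketch of the per-primitive state in vanilla 3D Gaussian splatting (Kerbl et al. 2023); URAvatar's relightable variant presumably swaps the baked color for learned reflectance features, but the paper would have to confirm the exact parameterization:

      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class Gaussian3D:
          # One splat in the standard 3DGS parameterization (Kerbl et al. 2023).
          mean: np.ndarray       # (3,) center position in world space
          rotation: np.ndarray   # (4,) unit quaternion; with scale, defines covariance
          scale: np.ndarray      # (3,) per-axis extent
          opacity: float         # alpha for front-to-back compositing
          sh_coeffs: np.ndarray  # (K, 3) spherical-harmonic color coefficients

      # Rendering is projection + depth sorting + alpha blending of these
      # primitives, which is why splatting hits real-time rates where
      # volumetric ray marching typically doesn't.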
  • chpatrick 42 minutes ago
    Wow, that looks pretty much solved! Is there code?
    • mentalgear 35 minutes ago
      Unfortunately, not yet. Also, code alone, without the training data and weights, might still require considerable effort. I also wonder how diverse their training data is, i.e., how well the solution will generalize.