Waypoint-1: Real-Time Interactive Video Diffusion from Overworld

(huggingface.co)

60 points | by avaer 11 hours ago

9 comments

  • roskelld 6 hours ago
    The context seemed to last only a few seconds. I started from a mock-up screenshot of a fantasy video game, complete with a first-person weapon. Then, as I moved forward, the weapon became part of the scenery and the whole world blurred and blended until it became some sort of abstract sci-fi space. Spinning the camera completely changed the look and style.

    I ended up with a UI that closely resembled the Cyberpunk 2077 one, complete with a VO modal popup. I guess it must have featured a lot in the training data.

    Really not sure what to make of this; it seems to have no constraints on concept despite the prompt (I specifically used the word "fantasy"), no spatial memory, no collision, and no understanding of landscape features that would maintain a sense of place.

    • avaer 5 hours ago
      Accurate to my experience hacking on this model today, but I don't think anyone's blowing smoke about it.

      Thinking back to where GPT-3 was 5 years ago, I can't help but be a little bit excited. And unlike GPT-3, this is Apache-licensed.

  • ecmulli 2 hours ago
    I don't have a big enough GPU, but I was able to play around with the model using this plugin https://github.com/daydreamlive/scope-overworld via Runpod - very cool!
    • cmuir 1 hour ago
      Wow, very cool. Starring now.
  • lcastricato 4 hours ago
    BTW, there is a Gradio space here:

    https://huggingface.co/spaces/Overworld/waypoint-1-small

    And our streamed version:

    https://overworld.stream

  • avaer 5 hours ago
    If you think this is cool you might also be interested in https://github.com/MineDojo/NitroGen which is kind of the opposite (and complementary).
  • Plankaluel 6 hours ago
    An RTX 5090 for 20-30 fps with the small model: that's not as unreasonable as I had feared :D
  • lcastricato 5 hours ago
    Hi,

    Louis here, CEO of Overworld. Happy to answer questions :)

    • anotheryou 3 hours ago
      Wouldn't a little Google Maps-style navigation mostly solve the latency?

      Project onto a sphere, crop a little, and handle the onset of motion by rotating or moving within the sphere. (A rough sketch of this is below.)
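
      A minimal sketch of this reprojection idea, assuming the client keeps the last generated frame as an equirectangular panorama on a sphere; the helper below is illustrative and not part of Waypoint-1 or the Overworld API:

      ```python
      import numpy as np

      def reproject(pano, yaw, pitch, fov_deg=90.0, out_hw=(360, 640)):
          """Render a perspective crop of an equirectangular panorama (H, W, 3)
          after rotating the camera by yaw/pitch radians."""
          H, W = out_hw
          f = 0.5 * W / np.tan(np.radians(fov_deg) / 2)     # focal length in pixels
          # Pixel grid -> unit view rays (x right, y down, z forward).
          xs = (np.arange(W) - W / 2 + 0.5) / f
          ys = (np.arange(H) - H / 2 + 0.5) / f
          x, y = np.meshgrid(xs, ys)
          rays = np.stack([x, y, np.ones_like(x)], axis=-1)
          rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
          # Apply the local camera rotation (yaw about y, pitch about x).
          cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
          R_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
          d = rays @ (R_yaw @ R_pitch).T
          # Rays -> spherical coordinates -> panorama pixel lookups.
          lon = np.arctan2(d[..., 0], d[..., 2])            # [-pi, pi]
          lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))    # [-pi/2, pi/2]
          ph, pw = pano.shape[:2]
          u = ((lon / (2 * np.pi) + 0.5) * pw).astype(int) % pw
          v = np.clip(((lat / np.pi + 0.5) * ph).astype(int), 0, ph - 1)
          return pano[v, u]
      ```

      Small camera rotations could then be rendered locally from the cached frame while the next model frame streams in; actual translation would still need depth or a fresh frame from the model.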

    • dsrtslnd23 4 hours ago
      Great work! Will the medium model also be open/Apache-licensed?
      • lcastricato 4 hours ago
        Medium is going to be CC BY-NC-SA 4.0. We may reevaluate in the future and make it more lenient. Small is meant to be the model for builders and hackers.
  • dsrtslnd23 4 hours ago
    10,000 hours of training data seems quite low for a world model?
    • lcastricato 4 hours ago
      60fps training data goes a long way ;)
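
      For scale, multiplying those two figures out (just arithmetic on the numbers above, not an official count):

      10,000 h × 3,600 s/h × 60 frames/s ≈ 2.16 × 10^9 frames
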
      • echelon 3 hours ago
        You guys have my support. I'll pay you when you open up payments.

        We need open source world models.

  • khimaros 6 hours ago
    This is like an open-weights version of DeepMind's Genie.