The context seemed to last only a few seconds. I started from a mock-up screenshot of a fantasy video game, complete with a first-person weapon. Then, as I moved forward, the weapon became part of the scenery and the whole world blurred and blended until it became some sort of abstract sci-fi space. Spinning the camera completely changed the look and style.
I ended up with a UI that closely resembled the Cyberpunk 2077 one, complete with a VO modal popup. I guess it must have featured heavily in the training data.
Really not sure what to make of this. It seems to have no constraints on concept despite the prompt (I specifically used the word "fantasy"), no spatial memory, no collision, and no understanding of landscape features that would maintain a sense of place.
Medium is going to be CC BY-NC-SA 4.0. We may re-evaluate in the future and make it more lenient. Small is meant to be the model for builders and hackers.
Thinking back to where GPT-3 was five years ago, I can't help but be a little excited. And unlike GPT-3, this is Apache-licensed.
https://huggingface.co/spaces/Overworld/waypoint-1-small
And our streamed version:
https://overworld.stream
Louis here, CEO of Overworld. Happy to answer questions :)
Project onto a sphere, crop a little, and simulate the onset of motion by rotating or moving within the sphere.
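A minimal sketch of that idea, assuming an equirectangular panorama as input (the function name and parameters are mine, not from any particular library): treat the image as the inside of a sphere, and sample a perspective crop in a given view direction. Stepping the yaw across frames then fakes camera rotation without any new imagery.

```python
import numpy as np

def perspective_crop(pano, yaw, pitch, fov=np.pi / 2, out_h=64, out_w=64):
    """Sample a perspective view from an equirectangular panorama.

    pano: H x W x C array covering 360 deg of longitude, 180 deg of latitude.
    yaw/pitch (radians) set the view direction; varying yaw over successive
    frames simulates rotating inside the sphere.
    """
    H, W = pano.shape[:2]
    # Virtual pinhole camera: focal length from the horizontal field of view.
    f = 0.5 * out_w / np.tan(fov / 2)
    xs = np.arange(out_w) - out_w / 2 + 0.5
    ys = np.arange(out_h) - out_h / 2 + 0.5
    x, y = np.meshgrid(xs, ys)
    # Unit ray directions in camera space (z forward, y up).
    dirs = np.stack([x, -y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by pitch (about x) then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T
    # Ray direction -> longitude/latitude -> equirectangular pixel coords.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return pano[v, u]

# Fake rotation: render a few frames while stepping the yaw.
pano = np.random.rand(128, 256, 3)
frames = [perspective_crop(pano, yaw=k * 0.05, pitch=0.0) for k in range(8)]
```

Nearest-neighbor sampling keeps the sketch short; a real renderer would interpolate and handle translation, which a single panorama cannot do (hence "onset of motion" only).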
We need open source world models.