With Moonglow, you can start and stop pre-configured remote cloud machines from within VSCode, and those servers appear to VSCode as normal Jupyter kernels that you can connect your notebook to.
We built this because we learned from talking to data scientists and ML researchers that scaling up experiments is hard. Most researchers like to start in a Jupyter notebook, but they rapidly hit a wall when they need to scale up to more powerful compute resources. To do so, they need to spin up a remote machine and then also start a Jupyter server on it so they can access it from their laptop. To avoid wasting compute resources, they might end up setting this up and tearing it down multiple times a day.
When Trevor used to do ML research at Stanford, he faced this exact problem: often, he needed to move between cloud providers to find GPU availability. This meant he was constantly clicking through various cloud compute UIs, as well as copying both notebook files and data across different providers over and over again.
Our goal with Moonglow is to make it easy to transfer your dev environment from your local machine to your cloud GPU. If you’ve used Google Colab, you’ve seen how easy it is to switch from a CPU to a GPU - we want to bring that experience to VSCode and Cursor.
If you’re curious, here’s some background on how it works. You can model a local Jupyter setup as having three parts: a frontend (also known as a notebook), a server, and an underlying kernel. The frontend is where you enter code into your notebook, the kernel is what actually executes it, and the server in the middle is responsible for spinning up and restarting kernels. Moonglow is a rewrite of that middle server layer: where an ordinary Jupyter server would just start and stop local kernels, ours adds orchestration that provisions a machine from your cloud, starts a kernel on it, and then sets up a tunnel between you and that kernel.
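To make the three-part split concrete, here's a toy sketch (not Moonglow's actual code, and ignoring the ZeroMQ messaging a real Jupyter stack uses): the frontend submits cell contents, the server owns the kernel lifecycle, and the kernel executes. Moonglow's change is that the server's kernel-start step would provision a cloud machine and tunnel to it instead of forking a local process.

```python
class Kernel:
    """Executes code and returns a result (stands in for a real ipykernel)."""
    def execute(self, code: str):
        namespace = {}
        exec(code, namespace)          # toy execution; real kernels speak the Jupyter protocol
        return namespace.get("result")

class Server:
    """Owns the kernel lifecycle: start kernels and route execution requests."""
    def __init__(self):
        self.kernel = None
    def start_kernel(self):
        self.kernel = Kernel()         # Moonglow: provision a remote machine + tunnel here
    def run(self, code: str):
        if self.kernel is None:
            self.start_kernel()
        return self.kernel.execute(code)

class Frontend:
    """The notebook UI: just sends cell contents to the server."""
    def __init__(self, server: Server):
        self.server = server
    def run_cell(self, code: str):
        return self.server.run(code)

frontend = Frontend(Server())
print(frontend.run_cell("result = 2 + 2"))  # 4
```

The point of the sketch is that the frontend never talks to the kernel directly, which is why swapping the server layer is enough to move execution to the cloud without changing the notebook experience.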
In the demo video (https://www.youtube.com/watch?v=Bf-xTsDT5FQ), you can see Trevor demonstrate how he uses Moonglow to train a ResNet to 94% accuracy on the CIFAR-10 classification benchmark in 1m16s of wall clock time. (In fact, it only takes 5 seconds of H100 time; the rest of it is all setup.)
On privacy: we tunnel your code and notebook output through our servers. We don’t store or log this data if you are bringing your own compute provider. However, we do monitor it if you are using compute provided by us, to make sure that what you are running doesn’t break our compute vendor’s terms of service.
We currently aren’t charging individuals for Moonglow. When we do, we plan to charge individuals a reasonable per-seat price, and we have a business plan for teams with more requirements.
Right now, we support Runpod and AWS. We’ll add support for GCP and Azure soon, too. (If you’d like to use us to connect to your own AWS account, please email me at [email protected].)
For today’s launch on HN only, you can get a free API key at https://app.moonglow.ai/hn-launch. You don’t need to sign in and you don’t need to bring your own compute; we’ll let you run it on servers we provide. This API key will give you enough credit to run Moonglow with an A40 for an hour.
If you're signed in, you won't see the free credits page, but free credits will have been added to your account automatically.
We’re still very early, and there are a lot of features we’d still like to add, but we’d love to get your feedback on this. We look forward to your comments!
What do you think of this compared with running a Jupyter server on Modal? (I think Modal is slightly harder - i.e., you run a terminal command - but curious!) https://modal.com/docs/guide/notebooks
On the other hand, if you are trying to run an entire notebook on a remote machine by starting a Jupyter server with Modal, then the workflow with Modal is not that different from other clouds (e.g. you can start an EC2 instance and run a Jupyter server there). For that, Moonglow still makes it easier by letting you stay in your IDE and avoid juggling Jupyter server URLs.
Also, you might need to use a specific cloud e.g. if you have cloud credits, sensitive data that needs to stay on that cloud or just expensive egress fees. One of Moonglow's strengths is that you can do your work in that cloud, rather than having to move stuff around.
I suspect that’s a matter of time right?
For a lot of ML/AI workloads and tasks, Python is just a binding for underlying C/C++.
It's already a nightmare to try to reproduce any ML/AI paper, pip breaks 3 times, incompatible peer deps, some obscure library emits an obscure CLANG error that means I need to brew install some libwhatever, etc...
I don't think the WebAssembly toolchain is quite ready for plug and play "pip install" time yet. I hope it eventually will be though.
Hopefully someday you'll have 8 H100s on your Macbook, but I think we're still a long way away from that.
Yay, I really can have serverless notebooks! Not just an easy-to-manage server environment but literally a static HTML file that can be passed around and runs the full notebook environment. It’s weird it was ever done any other way.
Pyodide GPU support is a ways away, but it is theoretically possible once WebGPU is stable. https://github.com/pyodide/pyodide/issues/1911
The big difference is that Google Colab runs in your web browser, whereas Moonglow lets you connect to compute in the VSCode/Cursor notebook interface. We've found a lot of people really like the code-completion in VSCode/Cursor and want to be able to access it while writing notebook code.
Colab only lets you connect to compute provided by Google. For instance, even Colab Pro doesn't offer H100s, whereas you can get that pretty easily on Runpod.
That is no longer true - you can use remote kernels on your own compute via colab: https://research.google.com/colaboratory/local-runtimes.html
There is also the same feature in CoCalc, including using the official colab Docker image: https://doc.cocalc.com/compute_server.html#onprem
CoCalc also supports one-click use of VSCode.
(The above might not work with runpod, since their execution environment is locked down. However it works with other clouds like Lambda, Hyperstack, etc.)
The big reason it's annoying is because (I believe) Colab still only lets you connect to runtimes running on your own computer - which is why, at the end of that article, they suggest using SSH port forwarding if you want to connect to a remote cluster. I know at least one company has written a hacky wrapper that researchers can use to connect to their own cluster through Colab, but it's not ideal.
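For reference, the port-forwarding workaround amounts to mapping a remote Jupyter port onto localhost so Colab's local-runtime option can reach it. A minimal sketch of the SSH invocation, with placeholder user/host and assuming the remote server listens on 8888:

```python
def tunnel_argv(user: str, host: str, port: int = 8888) -> list[str]:
    """Build the ssh argv that forwards localhost:<port> to <host>'s <port>."""
    return [
        "ssh", "-N",                       # no remote command, just hold the tunnel open
        "-L", f"{port}:localhost:{port}",  # local port -> remote server's port
        f"{user}@{host}",
    ]

# e.g. subprocess.Popen(tunnel_argv("ubuntu", "gpu.example.com"))
```

Once the tunnel is up, the remote server is reachable at http://localhost:8888 as if it were local.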
I think Moonglow's target audience is slightly different than Colab's though, because of the tight VSCode/Cursor integration - many people we've talked to said they really value the code completion, which you can't get in any web frontend!
If you're interested, you might find embedding OpenZiti into Moonglow a pretty compelling alternative to port forwarding, and it might open up even crazier ideas once connectivity is embedded into the app. You could find the cheapest compute for people and just connect them to it using your extension... Might be interesting? Anyway, I'd be happy to discuss some time if that sounds neat. Until then, good luck with your launch!
OpenZiti looks really cool though - I'll take a look!
Using OpenZiti w/ Serverless probably means integrating an OpenZiti SDK with your serverless application. That way, it'll connect to the OpenZiti network every time it spawns.
The SDK option works anywhere you can deploy your application because it doesn't need any sidecar, agent, proxy, etc, so it's definitely the most flexible and I can give you some examples if you mention the language or framework you're using.
The pod option says "container based" so it'll take some investigation to find out if an OpenZiti sidecar or other tunneling proxy is an option. Would you be looking to publish something running in RunPod (the server is in RunPod), or access something elsewhere from a RunPod pod (the client is in RunPod), or both?
https://www.pogs.cafe/software/tunneling-sagemaker-kaggle
At the risk of repeating the famous Dropbox comment
I like the idea, and that ease of use is your selling point. But I'm not sure that's actually a compelling reason on its own. People who are that entrenched in the VSCode ecosystem wouldn't find it a problem to deploy a Dockerized Nvidia GPU container and connect to their own compute instance via VSCode's remote/tunnel plugins, which one could argue makes more sense.
Congratulations on the launch and good luck with the product.
We don't yet transfer the Python environment on the self-serve options, though for customers on AWS we'll help them create and maintain images with the packages they need.
I do have some ideas for making it easy to transfer environments over - it would probably involve letting people specify a requirements.txt and some apt dependencies and then automatically creating/deploying containers around that. Your idea of actually just detecting what's installed locally is pretty neat too, though.
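The "detect what's installed locally" idea could be roughly sketched with the standard library: snapshot the local environment as a pinned requirements.txt-style list that a remote image build could consume. This is an illustration, not something Moonglow ships.

```python
from importlib.metadata import distributions

def snapshot_requirements() -> list[str]:
    """Return pinned 'name==version' lines for every installed distribution."""
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]        # skip entries with broken metadata
    )
```

Writing these lines to a requirements.txt, plus a hand-maintained list of apt packages, would be enough input for an automated container build along the lines described above.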
I havent had a personal jupyter need like this thread is about, yet - but I am a curious mind in search of tools to help me curious harder.
I need to find a project that lets me leverage moonglow.
I hope they got the name MoonGlow from Ultima:
>Moonglow
>>Moonglow is a city dedicated to magic and mystical arts, found on the island of Verity. The principal area of the city is fenced and gated, surrounding a maze from which teleporters connect to the more outlying shops and facilities. One of these, the Encyclopedia Magicka, contains a magical pentagram, which proves to be a further teleporter. Saying, or more discreetly whispering, the password ‘Recdu’ while standing on this will transport you to the Lost Lands, to a second pentagram in the town of Papua. The city’s tinkers have no dedicated shop; instead they can be found in the city’s bank.
Seems appropriate for this project...
even has a theme song: https://www.youtube.com/watch?v=XMRlwVmetcc
[1] https://github.com/joouha/euporie
1. How is this different from Syncthing and similar solutions? Syncthing is free, open-source, cloud-agnostic and easy to use to accomplish what seems to be the same task as Moonglow.
2. What is serverless about this? It is not clear from the pitch above.
2. I think the 'serverless' here is actually pretty literal - you don't have to think about or spin up a Jupyter server. We normally describe it as 'run local Jupyter notebooks on your own cloud compute,' which I think might be a little clearer.
This is brilliant and "obvious" in a good way along those lines, congrats on the launch!
Curious, are the SSH keys stored on Moonglow's internal servers?
There are no SSH keys to store - we start a tunnel from the remote machine and connect to that.
I really like their Jupyter repl format because it separates the cells with Python comments, so it's much easier to deploy your code when you're done than it is with a notebook.
One nice thing about our VSCode extension is that it's not just a remote kernel - our extension also lets you see what kernels you have and other details, so we'd need to write something like it for Zed. We probably wouldn't do this unless there's a lot of demand.
By the way, VSCode also supports the # %% repl spec and Moonglow does work with that (though we haven't optimized for it as much as we've optimized for the notebooks).
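For anyone who hasn't seen it, the # %% convention is just a plain .py file where each marker starts a new interactive cell - VSCode shows a "Run Cell" control above each marker, and the file still runs top-to-bottom as an ordinary script:

```python
# %%
import math
radius = 2.0

# %%
area = math.pi * radius ** 2
print(f"area = {area:.2f}")
```

This is the format the comment above is referring to: cells for interactive work, but no JSON notebook file to untangle when you deploy.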
One thing I've found while working in the ML space is that ML researchers have to deal with a lot of systems cruft. I think that in the limit, ML researchers basically only care about having a few things set up well:
- secrets and environment management
- making sure their dependencies are installed
- efficient access to their data
- quick access to their code
- using expensive compute efficiently
But to get all this set up for their research they need to wade through a ton of documentation about git, bash, docker containers, mountpoints, availability zones, cluster management and other low-level systems topics.
I think there's space for something like Replit or Vercel for ML researchers, and Moonglow is a (very early!) attempt at creating something like it.
However, looking at its replacement here (https://docs.databricks.com/en/dev-tools/bundles/index.html) - I think we're trying to solve the same problems at different levels. My guess is Databricks is the right solution for big teams that need well-defined staging/prod/dev environments. We're targeting smaller teams that might be doing more of their own devops or are still at the 'using a bash script to run notebooks remotely' stage.
Wouldn't targeting smaller teams lead to a lot of pricing pressure? Or do you think there's enough volume to justify that?
Moonglow abstracts over this, so you don't need to think about the server connection details at all. We're aiming for an experience where it feels like you've moved your notebook from local to cloud compute while staying in your editor.