Sylve looks like a decent project with a promising future but this article really doesn't explain why they picked it over Proxmox at all. They explain a lot of things but I can't see the advantage over prox other than they wanted to use it.
OP here. One thing we mentioned in the blog but probably didn’t emphasize enough is how deeply ZFS is integrated into the UI.
With Sylve, you rarely need to touch the CLI. Snapshots, datasets, ZVOLs, even flashing images directly to ZVOLs: it’s all handled from the UI in a straightforward way.
That tight ZFS integration also lets us build more flexible backup workflows. You can back up VMs, jails, or entire datasets to any remote machine that supports SSH + ZFS. This is powered by Zelta (https://zelta.space) (which is embedded directly into the Go backend), so it’s built-in rather than something you bolt on.
In Proxmox, you can achieve similar things, but it’s less intuitive and usually involves setting up additional components like Proxmox Backup Server.
I did actually notice the ZFS gui which is indeed something lacking in proxmox which doesn't default to ZFS in the installer. However once you do install it using ZFS it actually makes use of it pretty well and the user does not need to mess with the zfs cli tools much. Obviously it would be nice to have a GUI for all zfs operations too. Then again even TrueNAS refers you back to the cli for SOME operations.
On Proxmox, ZFS syncs do not require Proxmox Backup Server (which has its own format that is very efficient in speed and disk space), but you do need either something like sanoid/syncoid or use of the shell.
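For anyone wondering what the shell route looks like: the core of what sanoid/syncoid automates is just a snapshot plus an incremental send over SSH. A rough sketch (pool, dataset, and host names are made up; requires root and ZFS on both ends):

```shell
# Manual ZFS replication to a remote box, no PBS required.
# tank/vms, backup@nas, and backup/vms are hypothetical names.
zfs snapshot -r tank/vms@nightly-2025-01-02

# Incremental send relative to the previous snapshot, received
# unmounted (-u) and force-rolled-back (-F) on the target.
zfs send -R -i tank/vms@nightly-2025-01-01 tank/vms@nightly-2025-01-02 \
  | ssh backup@nas zfs receive -uF backup/vms
```

syncoid wraps this same send/receive pair and picks the incremental source for you; sanoid handles the snapshot schedule and pruning.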
I just installed Proxmox for the first time with a 5-disk ZFS array. Very basic stuff, but I already had to go to the CLI a few times and it didn't really feel that well integrated. Even setting up the array didn't work (nondescript -1 error message, and I ended up needing to use -f on the CLI). I also couldn't find a zfs create equivalent (but that could have been me?)
It's fine because I'm comfortable in the CLI but I read your comment and wanted to share that it felt a bit rudimentary at best.
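For reference, the steps the GUI was stumbling on map to roughly the following on the CLI. Device names and vdev layout below are examples only; adjust for your own disks:

```shell
# -f overrides the refusal to use disks that still carry old labels or
# partition tables, the usual cause of the opaque error in the GUI.
zpool create -f tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# The "zfs create equivalent" for a custom dataset:
zfs create -o compression=lz4 tank/media
```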
Yeah, that’s pretty much been my experience as well. Last time I seriously used Proxmox with ZFS (I think 8.4.x), it felt a bit… bolted on.
It works fine for the common VM workflows, but once you step outside that path, you end up dropping to the CLI more than you’d expect.
In Sylve, we tried to make ZFS a first-class part of the system rather than something sitting underneath it. You can create pools, scrub them with actual progress visibility, replace disks, and manage datasets (Filesystems, Volumes, Snapshots) directly from the UI.
Proxmox tends to abstract datasets away and handle them for you, which is great for standard VM usage, but gets limiting if you want to do something custom, like creating your own dataset for media or Samba shares.
That’s really where Sylve differs, it gives you both the "it just works" path and the flexibility without forcing you into the CLI.
Do you have any opinions on how this works vs doing iSCSI to some other storage system using ZFS? That's how I've been handling Proxmox on the backend, and I have mixed feelings. The GUI leaves a very great deal to be desired in honestly curious ways, you have to touch the CLI a lot even for super basic networking or auth stuff, and of course neither side has the same insight into the data structures in question.

Either you've got to do ZVOL instances and thus manual effort or scripting, or you give Proxmox a single big blob and let it manage that with LVM, but that means the storage side can't give any granular help on snapshots and the like. It can still deal with data integrity and backups and storage redundancy and all that, but no further, and with some increased overhead.

But on the other hand, I do feel like a really firm separation of concerns isn't without value. Having native support though is an interesting alternative I hadn't really considered.
Too late to edit, but just as a note for anyone else who gets confused by my post: I was not paying careful enough attention and missed/misread the "backups" bit in the parent post, completely my fault. As far as I can tell from reading through the (quite pleasant!) documentation (https://sylve.io/docs/), Sylve does not (at least for now) support any sort of network storage for direct use as the VM backing store, though as it is FreeBSD underneath it's presumably doable to get something going from the command line. I'd thought they'd somehow managed to set something up so you could directly use another ZFS system via SSH as the primary backing store, with management, which would be pretty awesome. It still looks like a beautiful design, but since I'm pretty invested right now in separating storage onto its own hardware vs where compute happens, it'd be hard to set up nodes as AIO for the near future, at least here.
Still an awesome project to learn about and I hope it's successful.
It's funny, I love how FreeBSD manages iSCSI even though I have only used it a few times. I put it on my to-do list but never really got around to writing a UI for it. Come next release (v0.3.0) I will definitely integrate it because, as you put it, it's quite necessary to have that as a way to isolate storage from the main system.
I run Proxmox at home, but now that I have been drinking the NixOS koolaid over the past 2 years, all of my homelab problems suddenly look like Nix-shaped nails.
Well it looks like we might soon be able to have the benefits of NixOS while also having bhyve (and presumably Sylve): https://github.com/nixos-bsd/nixbsd
I have the same thing with proxmox especially after I realized how well it integrates with proxmox backup server. And I haven't even gotten into clustering yet. It really is a very solid product.
> They explain a lot of things but I can't see the advantage over prox other than they wanted to use it.
A huge, totally obvious, advantage is that FreeBSD isn't using systemd. I'm now nearly systemd-free, if not for Proxmox. But my VMs are systemd-free. And, by definition, my containers too (where basically the entire point is that there's a PID 1 for the service, and that PID 1, in a container, is not systemd).
So the last piece missing for me is getting rid of Proxmox because Proxmox is using systemd.
I was thinking about going straight to FreeBSD+bhyve (the hypervisor) but that felt a bit raw. FreeBSD+Sylve (using bhyve under the hood) seems to be, at long last, my way out of systemd.
I've got several servers at home with Proxmox, but I never, on purpose, relied too much on Proxmox: I kept it to the bare minimum. I create VMs and use cloud-init, tried to have most of it automated, and always built it with the idea of eventually getting rid of Proxmox.
I've got nothing against Proxmox but fuck systemd. Just fuck that system.
Whether an appliance OS uses systemd or not is as silly a concern as “does the lead developer prefer cheddar or brie”
What about performance characteristics? Recoverability of workloads?
I’m interested in a FreeBSD base OS because it seems ZFS is better integrated and ZFS has a lot of incredibly useful tools that come with it. If Bhyve is at least nearly as performant as KVM, I’d be hard pressed not to give it a whirl.
I've been repeatedly burned by systemd, both on machines I've administered and on appliances. In every situation, the right fix was either "switch distros" or "burn developer-months of work in a fire drill".
In fact, I just decided to go with FreeBSD instead of proxmox specifically because proxmox requires systemd. The last N systemd machines I've had the misfortune to touch were broken due to various systemd related issues. (For large values of N.)
I assume that means anything built on top of it is flaky + not stable enough for production use.
I have never really understood the systemd hate. It sure as hell beat the sorcery that was managing init.d scripts for everything.
I managed the distro upgrade on hundreds of remotely-managed nodes, porting our kiosk appliance from a pre-systemd debian to a post-systemd debian, and out of all the headaches we suffered systemd was not one of them, short of a few quirks we caught in our development process. It pretty much just worked and the services it provided made that upgrade so much easier.
Curious how you got burned, I hear a lot of complaining but haven't seen a lot of evidence
It absolutely is silly. I’ve been responsible for managing low-thousands of Linux servers with systemd and it’s standardized a lot of things that otherwise would’ve been a lot of bespoke scripts.
Yeah, I’m kind of in the same camp. I never really had issues with systemd either. It mostly just works, even if it’s a bit heavy.
For me, moving to FreeBSD wasn’t about escaping systemd, it was more about the overall system design and how cohesive everything feels. That said, I’ve tried to keep Sylve neutral on that front. I don’t really position it as “systemd vs not”, just focus on what it actually does well.
It’s still early and not as feature complete as Proxmox yet, but I think it already stands on its own as a solid option.
> “does the lead developer prefer cheddar or brie”

Quite right, but given I live in Somerset (UK) I can have both: Cheddar is in Somerset, where the eponymous cheese originated, and quite a lot of brie is produced here too. It's not the French original effort, but rather good.
I have quite a lot of customers that we have migrated from VMware to Proxmox. Some of them are rocking ZFS instead of VMFS. Mostly these are Dell servers. Proxmox with ZFS seems to be more aggressive about disk failure warnings, which I think is helpful.
OP here. It’s less about Sylve doing something Proxmox can’t do, and more about a bunch of QoL improvements that come from us being heavy Proxmox users and building what we felt was missing.
A few concrete things:
ZFS-first UX: Not just "ZFS as storage", but everything built around it. Snapshots, clones, ZVOLs, replication, all cleanly exposed in the UI without dropping to the CLI.
Simple backups without extra infra: Any remote box with SSH + ZFS works. No need to deploy something like PBS just to get decent backups.
Built-in Samba shares: You can spin up and manage shares directly from the UI without having to manually configure services.
Magnet / torrent downloader baked in: Sounds small, but for homelab use it removes a whole extra container/VM people usually end up running.
Clustering, but not all-or-nothing: You can cluster nodes when you need it, and also disable/unwind it later. Proxmox clusters are much more rigid once set up.
Templates done right: Create a base VM/jail once and spin up N instances from it in one go, straight from the UI.
FreeBSD base: It's not really a benefit of Sylve, but rather the ecosystem that FreeBSD provides: tighter system integration, smaller surface area, no systemd, etc. (depending on what you care about)
None of this is to say Proxmox is bad, it’s great. This is more "we used it a lot, hit some friction points, and built something that feels smoother for our workflows."
I get the impression bhyve does all that stuff too. Is sylve basically just a thin GUI wrapper on top?
(That'd be amazing if it's possible to do stuff like dump configs + check them into git from the cli, then stand them up on any bhyve/sylve box later...)
Without looking at the Sylve docs, I'll conjecture that it has deeper integration with ZFS. With a foundation on FreeBSD, there is a likelihood Sylve can support ZFS-on-root rollbacks better than hacking it into Proxmox. A rollback capability is why I'm looking for Proxmox alternatives. In the Linux world, Talos Linux and IncusOS provide A/B updates which achieve a similar rollback capability. With something based on FreeBSD, your "immutable" OS and all of its data can be treated equally as ZFS datasets. There's also a higher risk that a Linux kernel update will break ZFS.
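On FreeBSD the ZFS-on-root rollback story is boot environments via bectl. A minimal sketch of the workflow (assumes a ZFS-on-root FreeBSD install; whether Sylve surfaces this in its UI I haven't checked):

```shell
# Create a boot environment (a ZFS clone of the current root)
# before making a risky change such as an OS upgrade.
bectl create pre-upgrade

# ...perform the upgrade. If it goes wrong, reactivate the old root:
bectl activate pre-upgrade
shutdown -r now   # next boot comes up on the pre-upgrade environment
```

This is the same A/B-style rollback Talos/IncusOS approximate, except every environment is just another ZFS dataset you can list, snapshot, and send.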
Folks use TrueNAS or unRAID as their backup instead of for safekeeping, and then get mad when everything goes sideways and the data is gone. Your NAS must have a backup elsewhere; snapshots and whatnot won't save you if everything goes RIP.
ZFS is redundancy and redundancy only, but people see ZFS as some sort of backup. That is silly and wrong.
>A rollback capability is why I'm looking for Proxmox alternatives.
Your VMs and LXC containers should have an automated backup. Proxmox itself takes a second to clean install.
I had to change the motherboard and literally had to install Proxmox 9.1 from scratch. BUT... before doing that, I checked the LXC backups sent to a TrueNAS zpool in mirror for safekeeping.
Reinstalled Proxmox, mounted the NFS share on Proxmox and voila, all the LXC containers were restored and started like nothing happened.
Regardless of the number of drives available, you gain an advantage when your file system can leverage snapshots to roll backwards or forwards. There are other Linux-native filesystems that can provide this capability too, but many admins prefer ZFS because the full range of capabilities is unparalleled.
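"Rolling backwards or forwards" in ZFS terms is just snapshot, rollback, and clone. A quick sketch (dataset names are hypothetical):

```shell
zfs snapshot tank/data@before-change      # mark a known-good point

# ...make changes to the live dataset...

zfs rollback tank/data@before-change      # roll the live dataset back, or:
zfs clone tank/data@before-change tank/data-experiment
                                          # branch forward off the snapshot
                                          # without touching the original
```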
Perhaps I'm missing your point, but Proxmox+LXC on ZFS storage works fine in Proxmox? It just looks like any other storage in Proxmox, and on the command line you've got all the usual ZFS tools.
I think it comes down to the standard argument against ZFS on linux -- uncertainty. It works *now*. Will it continue to work? Will any upstream changes in the Linux kernel cause issues with the ZFS modules bolted on top?
It is unlikely for there to be issues with ZFS and Linux. It's too common now, but it's not included in the main Linux tree, so it's not explicitly tested.
So, it's a low risk, but not zero risk.
More to the point here, when working with FreeBSD, ZFS is a first-class citizen (moreso even), so working with it *should* be more integrated with a FreeBSD solution than Proxmox, but how much more (and is that meaningful) is probably a qualitative feel than quantitative fact.
Sylve appears to be a FreeBSD/BSD exclusive implementation of managing vms, etc.
Proxmox is Debian/Ubuntu based.
Both will have their advantages. It might not be about better or worse, the particular things you use may in some cases run better on BSD, or the security management could more fit what you are after.
>A lot of our week is made up of the same kinds of small tasks: provision a VM, tweak storage settings, pass through a device, replicate a dataset, share a file, test an image, throw the machine away, do it again. None of that is exciting.
All I read is that they are still doing ClickOPS over DevSecOps!!
At no moment did I hear about automation. If you aren't using automation in 2026, your future in IT is cooked.
I run Proxmox at home for my homelab. I used to use VMs and now I have fully adopted Proxmox LXC containers (I hate Docker). I use Ansible to automate everything.
Last night I wanted to set up a notification service called Gotify; the Ansible playbook must:
1. Create an LXC container with specified resources
2. Prepare the system, network and what not
3. Give me a fully operational LXC and service running, go to the browser and voila.
All of that by running one command line, so now I can deploy it over and over.
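That one command line is something like the following (playbook and inventory names here are made up; the playbook itself would use the community.general Proxmox modules to talk to the Proxmox API):

```shell
# Redeployable with the same invocation every time:
ansible-playbook -i inventory/homelab.ini gotify.yml -e ct_id=112
```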
I have set up an LXC container running Radarr, qBittorrent, Sonarr, Jackett, WireGuard VPN via Proton VPN, and an iptables firewall aka kill-switch.
All of what you just read runs within an LXC container, fully automated via Ansible; OP is doing everything manually.
Even if I was running Sylve, Ansible would be doing the whole automation stuff.
Author of Sylve here, and I helped deploy the setup in the post.
> All I read is that they are still doing ClickOPS over DevSecOps!!
Their setup is mostly working on embedded stuff, and this involves some amount of moving VM disk images around. Sometimes they run different software within the same VM disk, so ZFS properties need to be tweaked accordingly (compression, recordsize, etc). This is a lot easier to do with a UI than with the CLI, and the UI is pretty good at showing you what's going on. Now, I'm all for automating stuff, but there's no clear pattern here to automate away.
Now regarding automation in Sylve: you can create a template in Sylve (with networking, storage, CPU config, etc.) and then deploy that template as many times as you want (from the UI). Last I checked, Proxmox only allows you to clone from a template one at a time.
What I do is pretty similar to what you mention, but I don't really use Ansible, since on FreeBSD if it's in the ports tree it's one command (after the base system is set up): `pkg install -y <package>`. Your entire stack (from your list) can be done with one command each. The only thing I see that would need a bit of setup is the WireGuard VPN, but even that is pretty straightforward under FreeBSD (so you can do it with a jail and no need for a VM).
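A rough sketch of what that looks like; the package names below are from memory of the ports tree and may differ slightly on a current system:

```shell
# Everything from the list above, one package each (in a jail or on the host);
# qbittorrent-nox is the headless variant, wireguard-tools covers the VPN.
pkg install -y radarr sonarr jackett qbittorrent-nox wireguard-tools
```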
I see; based on your comment and others, Sylve seems to be heavily GUI-driven for everything.
There is nothing wrong with that, but if a user cannot perform the same tasks via CLI, I see that as a big blocker to a project being fully adopted, with exceptions. OPNsense, for example: there is zero reason to manage the whole network and whatnot via CLI; the GUI makes life so much easier there. I would hate having to do everything via CLI.
The other thing is LXC; Sylve's equivalent seems to be jails.
I would expect this jail to support something like below.
Ansible only automates what you do manually; the server itself only sees the command and will never run Ansible itself. So instead of manually creating an LXC, Ansible would send:
All of that from my PC without having to go to a browser. That is the friction that your team should look into automating, there is always a way, it is just easier to go to the browser.
We’re API-first, the UI is just a client on top. We already ship Swagger docs with the code (docs/ on the repo), so everything the UI does is exposed and usable programmatically today.
Right now we’re still early (v0.2), so the CLI/SDK pieces aren’t fully there yet, but that’s what we’re building next.
Before v0.4 the plan is:
* a proper CLI for scripting
* a well-defined API lib (TypeScript/Go first, others later)
> Terraform
I am not, and do not intend to become, a Kubernetes expert. Many companies run Kubernetes and they don't know why they do it; some hypes make things so much harder.
But I do have a single cluster at home which allowed me to learn both Kubernetes and Terraform, I also hate Docker so much that I prefer to convert a Dockerfile into a Terraform template and voila, I do not use it to run my stuff.
I enjoy Terraform very much with Terragrunt. Terraform alone is too messy, Terragrunt makes the house cleaner.
How many times are you redeploying your homelab stuff? I also run lxc containers and thought about automating deployments but in my one year running proxmox I have only deployed each container once. If anything breaks I have PBS running to recover a previous backup. I don't see myself having to repeat this process more than once or twice
It's less about how many times and more about being used to automating everything: spend less time doing boring things and more time doing fun stuff.
For example, when I first deployed a Jellyfin LXC container with GPU and whatnot: the container itself hosts nothing; Proxmox mounts the NFS shares from TrueNAS into it, and it uses a local NVMe for transcoding.
And yet, novice me picked a small storage size, 5GB or something, because I only run Debian netinst, which uses 200MB of RAM and 0.00001% CPU. Debian netinst itself requires what, 1-2GB of disk??
Back to your question: I had to redeploy another Jellyfin container because it ran out of disk space, with:
1. the GPU passthrough
2. mount all the NFS shares once the LXC is up
3. the transcode folder
4. rsync from TrueNAS and restore the metadata with all the movies and what not.
Had I planned to do it?? Nope.
One command line and I have a brand new Jellyfin LXC with much bigger storage, and working like nothing happened, fully automated from my PC via Ansible.
When I first read this I was like "wow, bad choice vs sticking w/ Proxmox," but then I reflected a bit on my rashness. A tight ZFS L1 w/o systemd actually does sound interesting. I'm going to wipe a machine and give it a spin and see for myself. Could be interesting!
The article promotes the value of a UI for the infrastructure by touching on ZFS. But in this age of AI, what I'm really looking for is a good API or CLI one can let an LLM drive. I basically care more about using my infrastructure than how to create it. I know Proxmox can do this, but I wish there was a NixOS-like system where all my VMs are in one file I can verify between the LLM making the change and deployment.
That’s true, bhyve doesn’t support nested virtualization right now.
In practice though, most setups don’t actually need it if you’re running workloads directly on the host.
Also, if your goal is testing or simulating clusters, you can already run Sylve inside jails. That gives you multiple isolated “nodes” on a single machine without needing nested virt. We have a guide for it here:
https://sylve.io/guides/advanced-topics/jailing-sylve/
So you can still experiment with things like clustering, networking, failure scenarios, etc., just using jails instead of spinning up hypervisors inside VMs.
Nested virt is still useful for specific cases like testing other hypervisors or running Firecracker inside VMs, but for most Sylve-style setups it hasn’t really been a blocker.
We run Proxmox VMs that are running Hashicorp's Nomad orchestration at $DAYJOB. The Nomad clients are then turning around and running the docker containers (Proxmox -> Nomad VM -> Docker). For us it's easier to manage and segregate duties on the initial metal this way.
Nested virtualization can be very handy in both the lab and in production. In the lab, you can try out a new hosting platform by running one atop the other. IE: Proxmox on VMWare, Hyper-V on KVM. This lets you try things out without needing fresh bare metal hardware.
In prod, let's say you run workloads in Firecracker VMs. You have plenty of headroom on your existing hardware. Nested virtualization would allow you to set up Firecracker hosts on your existing hardware.
Perhaps I'm misunderstanding, but wouldn't that case be covered by simply putting some VMs on one vnet and others on another vnet and making them talk to each other? I also can't understand what you mean by "fresh bare metal hardware". In either case you don't need bare metal, whether it's a top-level VM or a nested one.
If you're evaluating VM hosts (Proxmox, Hyper-V, VMware, etc.) you need to have support for nested virtualization all the way down. Otherwise, if you want to evaluate a VM infrastructure, you need to start with bare metal. Really, you just need to make sure that your top level supports nested virtualization, but I understand their point.
However, the point about firecracker VMs in place of containers I think is really a good use-case. Firecracker can provide a better isolation environment, so it would be great to be able to run Firecracker VMs for workloads, which would require that the host (and the VM host above) support nested virtualization.
One example: when learning Proxmox itself. I was able to set up a multi-node cluster with more complicated networking than I was normally comfortable with and experiment with failures of all sorts (killing a node, disabling NICs, etc.) without needing more hardware or affecting my existing things.
Outside of learning and testing I am not sure of what uses there might be but I'm curious to know if there are.
This is really interesting. I've played with bhyve before but I didn't realise anyone actually used it in anger. And that people had written such great tooling around it.
My home lab still uses ESXi 8. But it needs something new and I was looking at proxmox. However I may give this a try first.
I love FreeBSD, but Linux just provides every feature under the sun when it comes to virtualization. Do you find any missing features on bhyve? Is bhyve reliable? I can't imagine it's been tested as thoroughly as KVM...
bhyve is quite cool, but no nested virt, which means you cannot nest vm_enter/exit calls with EPT pages, so you cannot virtualize within those guests. I found this crucial. For instance, Qubes OS won't run in bhyve by any means.
Anecdotally, Bhyve has worked in FreeBSD for a decade now. Eventually it got ported to Illumos because it was better than their implementation of QEMU.
If you are unsure of bhyve's abilities then why not test yourself? Speculation and guessing about stability or testing is useless without seeing if it works in your application.
> If you are unsure of bhyve's abilities then why not test yourself?
It is not possible to come to a conclusion about everything in the world yourself "from scratch". No one has the time to try out everything themselves. Some filtering process needs to be applied to prevent wasting your finite time.
That is why you ask for recommendations of hotels, restaurants, travel destinations, good computer brands, software, and so on from friends, relatives, or other trusted parties/groups. This does not mean you don't form your own opinions. You use the opinions of others as a sort of bootstrap or prior which you can always refine.
HN is actually the perfect place to ask for opinions. Someone just said bhyve does not support nested virtualization (useful input !). Someone else might chime in and say they have run bhyve for a long time and they trust it (and so on...)
I agree with you and do not understand the “I read every manual” and “I test all software” crowd. I play around with A LOT of software but I cannot test it all.
Speculation is not useless if you are saying “the answer I got makes it 99% likely that this solution will not work for me”. Curation has immense value in the world today. I investigate only the options most likely to be useful. And that still takes all my time.
The phrasing of your questions is the problem. They are uninformed, too general, and assuming. The last sentence reads as if you outright dismiss bhyve because YOU can't imagine it was tested thoroughly.
> It is not possible to come to a conclusion about everything in the world yourself "from scratch". No one has the time to try out everything themselves. Some filteration process needs to be applied to prevent wasting your finite time.
It's totally possible when you know what your application requires but you didn't state anything.
> Someone just said bhyve does not support nested virtualization (useful input !).
Ok you have a problem with the way I framed my questions and my (unintentional) tonality. Fair enough. Let's move from critique of the way I asked my questions to what your experience with bhyve has been, if you're willing to share that.
Have you used bhyve? What has your experience been with it? Have you used KVM+QEMU -- can you compare your experience between the two?
I actually have a few hosts that only run docker. I might be able to test with those.
Looks like Nix will eat the world soon. :)
Pick what OS works for you.
Or better, how does it do it better than proxmox?
This isn't to say that proxmox is the best thing since sliced bread, I'm curious as to what makes sylve better, is it the API?
A few concrete things:
ZFS-first UX: Not just “ZFS as storage”, but everything built around it. Snapshots, clones, ZVOLs, replication: all cleanly exposed in the UI without dropping to the CLI.
Simple backups without extra infra: Any remote box with SSH + ZFS works. No need to deploy something like PBS just to get decent backups.
Built-in Samba shares: You can spin up and manage shares directly from the UI without having to manually configure services.
Magnet / torrent downloader baked in: Sounds small, but for homelab use it removes a whole extra container/VM people usually end up running.
Clustering, but not all-or-nothing: you can cluster nodes when you need it, and also disable/unwind it later. Proxmox clusters are much more rigid once set up.
Templates done right: Create a base VM/jail once and spin up N instances from it in one go, straight from the UI.
FreeBSD base: It's not really a benefit of Sylve, but rather the ecosystem that FreeBSD provides: tighter system integration, smaller surface area, no systemd, etc. (depending on what you care about)
None of this is to say Proxmox is bad, it’s great. This is more "we used it a lot, hit some friction points, and built something that feels smoother for our workflows."
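The SSH + ZFS backup path described above boils down to standard ZFS replication; a minimal manual sketch of what such a workflow automates (pool, dataset, and host names are made up):

```shell
# Snapshot the dataset backing a VM
zfs snapshot tank/vms/web01@backup-2026-01-01

# Full send to any remote box that has sshd and ZFS
zfs send tank/vms/web01@backup-2026-01-01 | \
    ssh backup-host zfs receive -u backup/vms/web01

# Subsequent runs only ship the delta between snapshots
zfs send -i @backup-2026-01-01 tank/vms/web01@backup-2026-01-08 | \
    ssh backup-host zfs receive -u backup/vms/web01
```

The point is that the receiving end needs nothing beyond SSH and ZFS; tools like Zelta wrap this incremental send/receive pattern.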
(That'd be amazing if it's possible to do stuff like dump configs + check them into git from the cli, then stand them up on any bhve/sylve box later...)
Folks use TrueNAS or unRAID for backup instead of safekeeping, and then get mad when everything goes sideways and the data is gone. Your NAS must have a backup elsewhere; snapshots and whatnot won't save you if everything goes RIP.
ZFS is redundancy and redundancy only, but people see ZFS as some sort of backup. That is silly and wrong.
>A rollback capability is why I'm looking for Proxmox alternatives.
Your VMs and LXC containers should have automated backups. Proxmox itself takes a second to clean-install.
I had to change the motherboard and had to literally install Proxmox 9.1 from scratch. BUT.... before doing that, I checked the LXC backups sent to a mirrored TrueNAS pool for safe keeping.
Reinstalled Proxmox, mounted the NFS share on Proxmox and voila, all the LXC containers were restored and started like nothing happened.
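For anyone curious, once the NFS storage is registered that restore flow is only a couple of commands; a sketch with illustrative storage names, paths, and CT IDs:

```shell
# Register the NFS share that holds the backups
pvesm add nfs truenas-backups --server 192.168.1.10 \
    --export /mnt/tank/pve-backups --content backup

# Restore a container from a vzdump archive, then start it
pct restore 101 \
    truenas-backups:backup/vzdump-lxc-101-2026_01_01-00_00_00.tar.zst
pct start 101
```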
I'm talking about this, basically: https://www.linuxquestions.org/questions/*bsd-17/howto-zfs-m...
There have since been implementations for Linux but no distribution is designed to support them.
Can you explain your use case when you absolutely can't provide a separate M.2 drive solely for the OS?
> There have since been implementations for Linux but no distribution is designed to support them.
It is unlikely for there to be issues with ZFS on Linux; it's too widely used now. But it's not included in the mainline kernel tree, so it's not explicitly tested there.
So, it's a low risk, but not zero risk.
More to the point here, when working with FreeBSD, ZFS is a first-class citizen (moreso even), so working with it *should* be more integrated in a FreeBSD solution than in Proxmox, but how much more (and whether that is meaningful) is probably more a qualitative feel than a quantitative fact.
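For reference, the operations such a GUI would wrap are a handful of zfs(8) commands; a quick sketch with made-up dataset names:

```shell
# Create a dataset with compression enabled
zfs create -o compression=lz4 tank/data

# Point-in-time snapshot, instant and nearly free
zfs snapshot tank/data@before-upgrade

# Revert the dataset to that snapshot
zfs rollback tank/data@before-upgrade

# Writable clone of the snapshot, for experiments
zfs clone tank/data@before-upgrade tank/data-test
```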
A Un*x system that doesn't use systemd as an init system.
Proxmox is Debian-based (with an Ubuntu-derived kernel).
Both will have their advantages. It might not be about better or worse, the particular things you use may in some cases run better on BSD, or the security management could more fit what you are after.
I wonder why not run both :).
Proxmox is due for its viral moment though.
All I read is that they are still doing ClickOPS over DevSecOps!!
At no moment did I hear automation mentioned; if you aren't using automation in 2026, your future in IT is cooked.
I run Proxmox at home for my homelab. I used to use VMs and now I have fully adopted Proxmox LXC containers (I hate Docker). I use Ansible to automate everything.
Last night I wanted to setup a notification service called Gotify, the Ansible playbook must:
1. Create a LXC container with specified resources
2. Prepare the system, network and what not
3. Give me a fully operational LXC and service running, go to the browser and voila.
All of that by running one command line, so now I can deploy it over and over.
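For context, a playbook like that is ultimately driving roughly the following Proxmox commands (the CT ID, template, and resources here are illustrative):

```shell
# 1. Create the LXC container with specified resources
pct create 120 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname gotify --cores 1 --memory 512 --rootfs local-zfs:4 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1

# 2. Prepare the system, network and whatnot
pct start 120
pct exec 120 -- apt-get update
pct exec 120 -- apt-get install -y curl

# 3. Install the service itself, then it's browser time
```

Ansible just wraps these steps in an idempotent playbook so the whole sequence is one command.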
I have setup a LXC container running Radarr, qBittorrent, Sonarr, Jackett, WireGuard VPN via Proton VPN, Iptables firewall aka kill-switch.
All of what you just read running within a LXC container fully automated via Ansible, OP is doing everything manually.
Even if I was running Sylve, Ansible would be doing the whole automation stuff.
> All I read is that they are still doing ClickOPS over DevSecOps!!
Their team mostly works on embedded stuff, and that involves some amount of moving VM disk images around; sometimes they run different software within the same VM disk, so ZFS properties need to be tweaked accordingly (compression, recordsize, etc.). This is a lot easier to do with a UI than with the CLI, and the UI is pretty good at showing you what's going on. Now I'm all for automating stuff, but there's no clear pattern here to automate away.
Now regarding automation in Sylve: you can create a template (with networking, storage, CPU config, etc.) and then deploy that template as many times as you want from the UI. Last I checked, Proxmox only lets you clone from a template one instance at a time.
What I do is pretty similar to what you mention, but I don't really use Ansible, since on FreeBSD, if it's in the ports tree, it's one command (after the base system is set up): `pkg install -y <package>`. Your entire stack (from your list) can be done with one command each. The only thing that would need a bit of setup is the WireGuard VPN, but even that is pretty straightforward under FreeBSD (so you can do it with a jail, no need for a VM).
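A sketch of that flow; the package names below are how I recall them in the FreeBSD ports tree (the rc variable names may differ per port, so treat this as illustrative):

```shell
# Each service from that stack is one package away
pkg install -y radarr sonarr jackett qbittorrent-nox wireguard-tools

# Enable and start them via the stock rc(8) machinery
sysrc radarr_enable=YES sonarr_enable=YES
service radarr start
service sonarr start
```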
There is nothing wrong with that, but if a user cannot perform the same tasks via the CLI, I see that as a big blocker for a project to be fully adopted, with exceptions. Take OPNsense: there is zero reason to manage the whole network and whatnot via the CLI; the GUI makes life so much easier. I would hate having to do everything via CLI.
The other thing is LXC, Sylve seems to call it jail.
I would expect this jail to support something like below.
Ansible only automates what you do manually; the server itself only sees the command and will never run Ansible itself. So instead of manually creating an LXC, Ansible would send:
If I wanna exec into the LXC container to restore a backup and start the system, I would expect Sylve to support this. All of that from my PC without having to go to a browser. That is the friction your team should look into automating; there is always a way, it is just easier to go to the browser.

We’re API-first; the UI is just a client on top. We already ship Swagger docs with the code (docs/ on the repo), so everything the UI does is exposed and usable programmatically today.
Right now we’re still early (v0.2), so the CLI/SDK pieces aren’t fully there yet, but that’s what we’re building next.
Before v0.4 the plan is:
* a proper CLI for scripting
* a well-defined API lib (TypeScript/Go first, others later)
* parity between UI, CLI, and API
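In the meantime, scripting against a Swagger-documented API looks something like this. Note the endpoint paths, port, and field names below are hypothetical placeholders, not the actual Sylve routes; check the Swagger docs in docs/ for the real ones:

```shell
# Authenticate and grab a token (hypothetical endpoint and payload)
TOKEN=$(curl -s -X POST https://sylve-host:8181/api/auth/login \
    -H 'Content-Type: application/json' \
    -d '{"username":"admin","password":"secret"}' | jq -r .token)

# List jails with the token (hypothetical endpoint)
curl -s -H "Authorization: Bearer $TOKEN" \
    https://sylve-host:8181/api/jails | jq .
```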
This is the first time I heard about it, I will check its documentation later. Workplace flagged it as grayware, go figure haha
This x 10.
Ansible and OpenTofu/Terraform is where it is at. And you can use Claude/Codex with that setup.
But I do have a single cluster at home, which allowed me to learn both Kubernetes and Terraform. I also hate Docker so much that I prefer to convert a Dockerfile into a Terraform template; voila, I do not use it to run my stuff.
I enjoy Terraform very much with Terragrunt. Terraform alone is too messy, Terragrunt makes the house cleaner.
It's less about how many times and more about being used to automating everything: spend less time doing boring things and more time doing fun stuff.
For example, when I first deployed a Jellyfin LXC container with GPU and whatnot, the container itself hosts nothing; Proxmox mounts the NFS shares from TrueNAS into it, and it uses a local NVMe for transcoding.
And yet, novice me picked a small storage size, 5GB or something because I only run Debian Netinst which uses 200MB of ram and 0.00001% CPU. Debian Netinst itself requires what 1-2GB of disk??
Back to your question, I had to redeploy another Jellyfin container coz it ran out of disk space with:
1. the GPU passthrough
2. mount all the NFS shares once the LXC is up
3. the transcode folder
4. rsync from TrueNAS and restore the metadata with all the movies and what not.
Had I planned to do it?? Nope.
One command line and I have a brand new Jellyfin LXC with much bigger storage, and working like nothing happened, fully automated from my PC via Ansible.
In practice though, most setups don’t actually need it if you’re running workloads directly on the host.
Also, if your goal is testing or simulating clusters, you can already run Sylve inside jails. That gives you multiple isolated “nodes” on a single machine without needing nested virt. We have a guide for it here: https://sylve.io/guides/advanced-topics/jailing-sylve/
So you can still experiment with things like clustering, networking, failure scenarios, etc., just using jails instead of spinning up hypervisors inside VMs.
Nested virt is still useful for specific cases like testing other hypervisors or running Firecracker inside VMs, but for most Sylve-style setups it hasn’t really been a blocker.
In prod, let's say you run workloads in Firecracker VMs. You have plenty of headroom on your existing hardware. Nested virtualization would allow you to set up Firecracker hosts on your existing hardware.
However, the point about firecracker VMs in place of containers I think is really a good use-case. Firecracker can provide a better isolation environment, so it would be great to be able to run Firecracker VMs for workloads, which would require that the host (and the VM host above) support nested virtualization.
Outside of learning and testing I am not sure of what uses there might be but I'm curious to know if there are.
My home lab still uses ESXi 8. But it needs something new and I was looking at proxmox. However I may give this a try first.
Likewise for disk I/O: some people swear by 9P as a backing mechanism, some by ZVOLs.
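In bhyve terms, the choice is what you hand to the disk slot; a zvol-backed sketch (pool, VM name, and sizes are illustrative, and the boot loader/bootrom setup is omitted):

```shell
# Create a zvol exposed as a plain device node
zfs create -V 20G -o volmode=dev zroot/vm/web01-disk0

# Attach it to the guest as a virtio block device
bhyve -c 2 -m 2G -H -A \
    -s 0,hostbridge \
    -s 3,virtio-blk,/dev/zvol/zroot/vm/web01-disk0 \
    -s 31,lpc -l com1,stdio \
    web01
```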
Also: https://www.youtube.com/watch?v=wo4oD5UON30
Not sure why this copy made the SCP
It is not possible to come to a conclusion about everything in the world yourself "from scratch". No one has the time to try out everything themselves. Some filtering process needs to be applied to prevent wasting your finite time.
That is why you ask for recommendations of hotels, restaurants, travel destinations, good computer brands, software and so on from friends, relatives or other trusted parties/groups. This does not mean you don't form your own opinions. You use the opinions of others as a sort of bootstrap or prior, which you can always refine.
HN is actually the perfect place to ask for opinions. Someone just said bhyve does not support nested virtualization (useful input !). Someone else might chime in and say they have run bhyve for a long time and they trust it (and so on...)
So I can't agree with your viewpoint.
Speculation is not useless if you are saying “the answer I got makes it 99% likely that this solution will not work for me”. Curation has immense value in the world today. I investigate only the options most likely to be useful. And that still takes all my time.
> It is not possible to come to a conclusion about everything in the world yourself "from scratch". No one has the time to try out everything themselves. Some filtering process needs to be applied to prevent wasting your finite time.
It's totally possible when you know what your application requires but you didn't state anything.
> Someone just said bhyve does not support nested virtualization (useful input !).
What nested applications are you planning to run?
Have you used bhyve ? What has your experience been with it ? Have you used KVM+QEMU -- can you compare your experience between both of them ?