I’d been dreading doing it, but it wasn’t too bad. I had four LXCs to convert to VMs in total.
The biggest difference is that the LXC hosting CodeProject.AI for my Blue Iris server went from using 120GB down to 19GB for the same containers. I’m guessing that’s because I could switch from the vfs storage driver in the LXC to overlay2 in the VM.
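If anyone wants to check their own setup, `docker info -f '{{.Driver}}'` shows the active storage driver, and it can be pinned in /etc/docker/daemon.json. A minimal sketch (overlay2 is already the default on most modern installs; note that switching drivers hides existing images and containers until you recreate them):

```json
{
  "storage-driver": "overlay2"
}
```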
Having docker-compose YAML files to recreate all the containers on the VM helped a TON, as did using rsync to move everything the containers needed over to the new VM.
Has anyone else made the move?
I got the kick in the pants to do it after trying to restore the 120GB LXC from PBS, giving up after 2 hours, and restoring it in full from a standard Proxmox backup instead, which only took 15 minutes.
This is why I use locally mounted volumes for all my docker containers. Makes transferring over to a new host an easy rsync one-liner.
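For what it’s worth, a minimal compose sketch of that pattern (hypothetical service name and host path — check the image’s docs for its real config directory):

```yaml
# A host bind mount rather than a named Docker volume, so the data
# survives `docker compose down -v` and moves to a new host with a
# single rsync of /opt/appdata.
services:
  codeproject-ai:
    image: codeproject/ai-server
    volumes:
      - /opt/appdata/codeproject-ai:/etc/codeproject/ai
```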
I haven’t dug into this too much, but I have seen mention of using overlay2 for LXCs on Proxmox. If that were available, would you still prefer VMs?
Here’s one post I just found talking about the availability of it: https://forum.proxmox.com/threads/lxc-zfs-docker-overlay2-driver.122621/
LPT: if you store your data on a NAS or somesuch, you can use NFS volumes to expose that data to your containers. That way, if you need to rejigger or blow away your setup, you don’t lose anything that can’t be rebuilt easily.
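Compose can declare those NFS volumes directly — a sketch assuming a hypothetical NAS at 192.168.1.50 exporting /export/appdata:

```yaml
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,rw,nfsvers=4
      device: ":/export/appdata"

services:
  app:
    image: alpine            # hypothetical service
    volumes:
      - appdata:/data
```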
Did you see a performance improvement going from vfs to overlay2?
I’m still stuck on vfs because I don’t want to mount my storage into the VM via NFS. Passing it into the LXC is so much more flexible, and it avoids the locking problems NFS has with database files.
I can’t say that I’ve noticed anything, but my containers aren’t IO-intensive either.
I agree completely that the LXC is much more flexible when using bind mounts rather than fstab mounts.
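For reference, the bind mount on the Proxmox side is a one-liner (hypothetical CT ID and paths):

```shell
# Expose a host directory to LXC 101 as mount point 0;
# inside the container it appears at /mnt/appdata.
pct set 101 -mp0 /tank/appdata,mp=/mnt/appdata
```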