My apologies for the long post.

I have a single server running Unraid with about 12 services (Pihole, Wordpress, Heimdall, Jellyfin, etc.) all running on Docker. This server is also acting as my home lab NAS. Everything runs fine for my use case (at least for right now) but I’ve been thinking about creating some type of compute cluster for my services instead of a single server.

Recently, I saw a discussion about Proxmox, Docker, LXD and Incus where a user felt that Incus was a better option than all the others. Curious, I started reading up on Incus, playing around with it, and contemplating switching all my services from Docker on Unraid to an Incus cluster (I’m thinking around 3 nodes), leaving the Unraid server to act as a NAS only.

In a nutshell, Incus/LXD looks to me like a lightweight take on VMs, which means I would have to manually install and configure each service I’m running. Given the number of services I have, that seems like a ton of work compared to just running 3 physical Debian servers in Docker swarm mode, or a Proxmox cluster with 3 Debian VMs running Docker in swarm mode. If at all possible, I would like to keep my services containerized rather than in actual VMs.
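For a sense of scale, the swarm route is only a few commands per node - something like the sketch below (the IP and stack name are placeholders, not my actual setup):

    # on the first Debian box, which becomes a swarm manager
    docker swarm init --advertise-addr 192.0.2.10

    # on the other two boxes, join using the token printed by the init
    docker swarm join --token <worker-token> 192.0.2.10:2377

    # then deploy an existing compose file as a stack from the manager
    docker stack deploy -c docker-compose.yml homelab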

What has me thinking that a switch to Incus may be worth it is performance. If my services would perform significantly better under Incus/LXD than under Docker, that alone would justify the move. I have not been able to find any performance comparison between Incus/LXD and Docker, and I don’t know whether there are other arguments for “Incus over Proxmox and Docker”, which is why I’m asking the greater community.

Here’s my question:

Based on your experience, and taking my use case (home lab/home use) into consideration, do the pros and cons of Incus outweigh accomplishing my goal with a cluster of standalone Docker hosts or a Proxmox cluster?

  • Avid Amoeba@lemmy.ca · 10 months ago

    Docker has native compute performance. The processes essentially run on the host kernel, just with a different set of libs. The only notable overhead is storing and loading those libs, which takes a bit more disk and RAM. This will be true for any container solution. VMs, on the other hand, have a lot of additional overhead. At a cursory glance, Incus seems to provide an interface for running Linux containers or VMs. I wouldn’t expect performance differences between containers run through it and containers run through Docker.
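    To illustrate, Incus drives both kinds of instance from the same CLI - roughly like this (the image and instance names are just examples):

        # system container - shares the host kernel, so Docker-like overhead
        incus launch images:debian/12 svc-ct

        # full VM - its own kernel, QEMU/KVM underneath
        incus launch images:debian/12 svc-vm --vm

        incus list   # TYPE column shows CONTAINER or VIRTUAL-MACHINE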

      • vegetaaaaaaa@lemmy.world · 10 months ago

        VMs have a lot of additional overhead.

        The overhead is minimal, KVM VMs have near-native performance (type 1 hypervisor). There is some memory overhead as each VM runs its own kernel, but a lot of this is offset by KSM (Kernel Samepage Merging) [1], a memory de-duplication mechanism.
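        You can check what KSM is actually doing on a KVM host through sysfs - a rough sketch (the paths are standard, the numbers will obviously vary):

            # 1 = KSM is scanning, 0 = off
            cat /sys/kernel/mm/ksm/run

            # pages currently de-duplicated across VMs
            cat /sys/kernel/mm/ksm/pages_sharing

            # all the counters at once
            grep . /sys/kernel/mm/ksm/*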

        Each VM runs its own system services (think systemd, logging, etc.) so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC system containers, as they do the same thing (they only share the host’s kernel).

        https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned

        I usually go for bare metal > on top of that, multiple VMs separated by context (think “tenant”: production/testing, public/confidential/secret, etc.) > applications running inside the VMs, containerized or not. VMs provide strong isolation which containers do not; at the very minimum it’s good to have separate VMs for “serious business” and “lab” contexts. Service/application isolation through namespaces/systemd has come a long way (see man systemd-analyze and its security verb, example below) - for me the benefit of containerization is mostly ease of deployment and… ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
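        For example, to see how locked-down (or not) your units actually are:

            # overall exposure score for every service on the host
            systemd-analyze security

            # detailed per-setting breakdown for a single unit
            # (nginx.service is just an example name)
            systemd-analyze security nginx.service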

        If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker: fewer bugs, smaller attack surface, no single point of failure in the form of a million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2] (sketch below). But be aware of the maintenance overhead caused by containerization - if you’re serious about it you will probably end up maintaining your own images.
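        As a minimal sketch of that systemd integration (Quadlet, needs podman >= 4.4; the image and unit name are just examples):

            # /etc/containers/systemd/whoami.container
            [Container]
            Image=docker.io/traefik/whoami:latest
            PublishPort=8080:80

            [Install]
            WantedBy=multi-user.target

            # then: systemctl daemon-reload && systemctl start whoami.service

        No root daemon involved - the container is just another systemd service.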