I have 2 servers, each running a Debian VM. The old VM was one of the first I installed several years ago when I knew little, and it’s messed up and has little space left. It’s running on TrueNAS Scale and has a couple of Docker apps that I’m very dependent on (Firefly, Hammond). I want to move the datasets for these Docker apps to a newer VM running on a Proxmox server. It’s a Debian 13 VM with loads of space. What are my options for moving the data, given that neither Firefly nor Hammond has the appropriate export/import functions? I could migrate the old VM, but that wouldn’t resolve my space issue. Plus it’s Debian 10 and it would take a lot to bring it up to Trixie.

  • thirdBreakfast@lemmy.world · 2 days ago

    Great. There are two volumes there: firefly_iii_upload and firefly_iii_db.

    You’ll definitely want to docker compose down first, to ensure the database is not being updated.
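
    Concretely, run this from wherever your compose file lives (that location is your own setup, nothing Firefly-specific):

    # stop the stack so the database files aren't changing while you archive them
    docker compose down

    Then archive each volume: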

    docker run --rm \
      -v firefly_iii_db:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/firefly_iii_db.tar ."
    

    and

    docker run --rm \
      -v firefly_iii_upload:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/firefly_iii_upload.tar ."
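
    It doesn’t hurt to sanity-check the archives before moving them, e.g. by listing the first few entries of each:

    tar tvf firefly_iii_db.tar | head
    tar tvf firefly_iii_upload.tar | head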
    

    Then copy those two .tar files to the new VM (an scp sketch follows below). Then, on the new VM, create the new empty volumes with:

    docker volume create firefly_iii_db
    docker volume create firefly_iii_upload
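
    For the copy itself, anything that moves files works; a minimal scp sketch, where user@new-vm is a placeholder for however you reach the new machine:

    # push both archives from the old VM to the new VM's home directory
    scp firefly_iii_db.tar firefly_iii_upload.tar user@new-vm:~/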
    

    And untar your data into the volumes:

    docker run --rm \
      -v firefly_iii_db:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/firefly_iii_db.tar"
    
    docker run --rm \
      -v firefly_iii_upload:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/firefly_iii_upload.tar"
    

    Then make sure you’ve manually brought over the compose file and those two .env files, and you should be able to docker compose up and be in business again. Good choice with Proxmox in my opinion.
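
    A sketch of that last step, assuming the stack lives in ~/firefly on both machines and the stock Firefly III env file names (adjust both to your setup):

    # from the old VM: copy the compose file and the two env files across
    scp docker-compose.yml .env .db.env user@new-vm:~/firefly/
    # then, on the new VM:
    cd ~/firefly && docker compose up -d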

    • @thirdBreakfast @trilobite 🤔
      Interestingly, handling of volumes with podman is much easier:

      podman volume export myvol --output myvol.tar
      podman volume import myvol myvol.tar

      https://docs.podman.io/en/latest/markdown/podman-volume-export.1.html

      I also checked the docker volume CLI documentation, and there is no export command like podman’s.
      https://docs.docker.com/reference/cli/docker/volume/

    • trilobite@lemmy.ml (OP) · 1 day ago

      Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode, so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply cp -R’d everything to the new NFS mountpoint, edited the yml file with the new paths, and voila! It seems to be working.

      I know that some Docker containers don’t like working off an NFS share, so we’ll see. I wonder how well this will work when the VM is on a different machine, as there is a network cable, a switch, etc. in between. If for any reason the NAS goes down, the Docker containers on the Proxmox VM will be crying as they’ll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
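
      Roughly, the copy was just this (the mountpoint path is illustrative):

      # copy the app data from the local dir onto the NFS mountpoint
      cp -R /home/user/linkwarden/data /mnt/linkwarden/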

      • thirdBreakfast@lemmy.world · 1 day ago

        I run nearly all my Docker workloads with their data just in the home directory of the VM (or LXC, actually, since that’s how I roll) I’m running them in, but a few have data on my separate NAS via an NFS share, so through a switch etc., with no problems. Just slowish.
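
        On the ‘NAS goes down’ worry: one thing that can help is letting Docker manage the NFS mount as a named volume with a soft mount option, so I/O returns errors instead of hanging forever when the share vanishes. A sketch, with the hostname and export path as placeholders:

        docker volume create \
          --driver local \
          --opt type=nfs \
          --opt o=addr=nas.local,rw,soft \
          --opt device=:/mnt/pool/appdata \
          appdata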