Hello, fellow Linux users!

My question is in the title: What is a good approach to deploy Docker images on a Raspberry Pi and run them?

To give you more context: The Raspberry Pi already runs an Apache server for Let's Encrypt and as a reverse proxy, and my home-grown server should be deployed as a Docker image.

To my understanding, one way to achieve this would be to push all the sources over to the Raspberry Pi, build the Docker image on the Raspberry Pi, tag the image as ‘latest’, and use systemd with Docker or Podman to run it.
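A minimal sketch of such a systemd unit, assuming Docker and a hypothetical container/image name myapp (the port mapping is a placeholder too):

```ini
[Unit]
Description=Home-grown server (Docker container)
After=network-online.target docker.service
Requires=docker.service

[Service]
# Remove any stale container, then run the tagged image in the foreground
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because ExecStart keeps the container in the foreground, systemd tracks the container process directly and can restart it on failure.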

My questions:

  • Has anyone here had a similar problem but used a different approach to achieve this?
  • Has anyone here automated this whole pipeline, so that in a perfect world I just push updated sources to the Raspberry Pi, the new Docker image gets built, and Docker/Podman automatically picks up the new image?
  • I would also be happy to be pointed at any available resources (websites/books) which explain how to do this.

At the moment I am using Raspbian 12 on a Raspberry Pi Zero 2 W, and the whole setup works with home-grown servers which are simply deployed as binaries and executed via systemd. My Docker knowledge is mostly from a developer perspective, so I know nearly nothing about deploying Docker on a production machine. (Which means that if there is a super obvious way to do this, I might not even be aware it exists.)

    • wolf@lemmy.zip (OP) · 2 days ago

      Thanks for the idea! I try to keep as few ‘moving’ parts as possible, so hosting GitLab is something I would want to avoid if possible. The Raspberry Pi is supposed to be the sole hardware for the whole deployment of the project.

      • CameronDev@programming.dev · 2 days ago

        It’s definitely not a lightweight solution. Is the Pi dedicated to the application? If so, is it even worth involving Docker?

        • wolf@lemmy.zip (OP) · 2 days ago

          You are asking exactly the right questions!

          I have an Ansible playbook to provision the Pi (or any other Debian/Ubuntu machine) with everything needed to run a web application, as long as the web application is a binary or uses one of the interpreters on the machine. (Well, I also have playbooks to compile Python/Ruby from source, add an Adoptium JDK repository, etc.)

          Right now I am flirting with the idea of using Elixir for my next web application, and it just seems unsustainable for me to now add Erlang/OTP and Elixir to my list of playbooks to compile from source.

          The Debian repositories have quite old versions of Erlang/OTP/Elixir and I doubt there are enough users to keep security fixes/patches up to date.

          Combined with the list of technologies I already use, it seems to reduce complexity if I use Docker containers as deployment units, and it should be future-proof for at least the next decade.

          Writing about it, another solution might simply be to have something like Distrobox on the Pi and use something like the latest Alpine.

          • CameronDev@programming.dev · 1 day ago

            Up-to-date runtimes definitely make sense; that is where Docker shines.

            GitLab is obviously a bit overkill, but maybe you could just create some systemd timers and scripts to auto-pull, build, and deploy?

            The script would boil down to:

            #!/bin/sh
            set -e            # abort on the first failed command
            cd src
            git pull
            docker compose down
            docker compose up --build -d   # -d: detach so the script can exit
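            One way to drive that script is a systemd service/timer pair; a hedged sketch, assuming the script lives at /usr/local/bin/deploy.sh (path and interval are placeholders):

```ini
# deploy.service
[Unit]
Description=Pull sources and redeploy the container

[Service]
Type=oneshot
ExecStart=/usr/local/bin/deploy.sh

# deploy.timer
[Unit]
Description=Run deploy.service every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

            Enable it with systemctl enable --now deploy.timer.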
            

            You’re welcome to steal whatever you can from the repo I linked before.

            • wolf@lemmy.zip (OP) · 23 hours ago

              Thanks a lot!

              Yeah, if I go down that road, I’ll probably just add a git post-receive hook to the repo on the Raspberry Pi, so that I’ll have a ‘push to deploy’ workflow!
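              A rough sketch of setting up such a hook on the Pi, with all paths, the branch, and the compose project as placeholder assumptions (a real setup would create the bare repo with git init --bare first):

```shell
#!/bin/sh
set -e
REPO=/tmp/myapp.git        # bare repo you push to (placeholder path)
mkdir -p "$REPO/hooks"

# post-receive runs on the Pi after every push: check out the new
# sources into a work tree, then rebuild and restart the container.
cat > "$REPO/hooks/post-receive" <<'EOF'
#!/bin/sh
mkdir -p /tmp/myapp-src
GIT_WORK_TREE=/tmp/myapp-src git checkout -f main
cd /tmp/myapp-src
docker compose down
docker compose up --build -d
EOF
chmod +x "$REPO/hooks/post-receive"
```

              Pushing to that bare repo then triggers the checkout-and-rebuild automatically.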

    • hades@lemm.ee · 2 days ago

      systemd has nothing to do with docker, except to start the docker daemon.

      I think what OP was describing was writing systemd unit files that would start/stop docker containers.

      • wolf@lemmy.zip (OP) · 2 days ago

        Exactly, this is what I am doing right now (just for binaries, not for executing Docker or Podman).

      • CameronDev@programming.dev · 2 days ago

        Yeah, probably, but that’s not very common, is it? Normally you’d just let the Docker daemon handle the start/stop, etc.?

        • hades@lemm.ee · 2 days ago

          I actually have no sense how common that is. My experience is with very small non-production docker environments, and with Kubernetes, but I have no idea what people typically do in between.

          • med@sh.itjust.works · 1 day ago (edited)

            It’s common with rootless docker/podman. Something needs to start up the services, and you’re not using a root enabled docker/podman socket, so systemd it is.
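            With rootless Podman specifically, a newer option is a Quadlet unit; a hedged sketch, assuming Podman 4.4+ and a placeholder image name and port:

```ini
# ~/.config/containers/systemd/myapp.container
[Unit]
Description=Home-grown server (rootless Podman)

[Container]
Image=localhost/myapp:latest
PublishPort=8080:8080

[Install]
WantedBy=default.target
```

            After systemctl --user daemon-reload, Podman generates a myapp.service user unit from this file; loginctl enable-linger keeps it running after logout.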

        • hades@lemm.ee · 2 days ago

          Well, someone needs to run docker compose up, right? (Or you set a restart policy, but that’s not always possible.)