1. 16

Something I’ve been thinking about for years and never managed to get to a proper conclusion.

I’m using and hosting something like 10 small websites/projects on 1-2 VPS and only update them very infrequently. Some are written in PHP, some in Python, Clojure, etc - doesn’t really matter. What they all have in common is that I never set up proper deployment. Some of them are developed on said VPS (or elsewhere), and after finishing a task they’re usually just checked in to some git repo and then rsynced over manually.

What do you people use for this? Most projects are small and get a handful of deploys per year, usually just bumping some dependencies; some are so small that a deploy is literally copying 5 files without any dependencies.

  • [ ] proper pipeline with build, tests and then maybe deployment
  • [ ] scm hooks on checkin, e.g. a git post-commit hook doing rsync (sketched below)
  • [ ] Github actions or something similar
  • [ ] manually like described above
  • [ ] something completely different
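
For reference, the post-commit-hook option would be something like this (paths and host are made up, and the hook file needs to be executable):

#!/bin/sh
# .git/hooks/post-commit - push the working tree to the VPS after every commit.
# --delete keeps the remote directory in sync; .git is excluded on purpose.
rsync -az --delete --exclude='.git' ./ deploy@my-vps:/var/www/myproject/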

NB: When I’m doing this for serious[tm] or work projects (included in serious, I guess), deployment is one of the first things that gets set up, but most of the time it comes tied in with CI, as in building a pipeline. Because I don’t have tests or a build pipeline for these things, nothing comes for free with it…

  1.  

  2. 5

    Most of my projects are 'git push dokku master', where dokku is hosted on a VPS. I have one project on Heroku, which offers the same deployment, and one on Azure App Services, which is the click of a button in VS Code.
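
    Concretely, setup and deploys look roughly like this (app name and host are placeholders):

    # one-time: add a git remote pointing at the dokku app on the VPS
    git remote add dokku dokku@my-vps.example.com:myapp

    # every deploy after that is just
    git push dokku master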

    No CI/CD, no tests, just a manual push. They’re hobby projects and tests/pipelines are a bit heavyweight for a hobby in my opinion.

    1. 3

      I use Docker and Kubernetes because I’m insane. Here’s an example script for one of my projects: https://tulpa.dev/tulpa-ebooks/tulpanomicon/src/branch/master/deploy.sh

      I used to use Dokku, but I needed to learn Kubernetes.

      EDIT: However, christine.website has this all automated with GitHub Actions.

      1. 2

        A script which scp's stuff to the VPS. It's low-tech, but it works well, is easy to set up, easy to debug (but never breaks anyway), etc.

        Using Go makes stuff a bit easier than PHP, I suppose, since I don’t need to worry about installing the correct version of an interpreter on the servers and whatnot. In that case I guess I’d use a similar script but replace the build step with building a simple Docker image or something, or maybe just rsync if the server is stable enough (although I’m not sure whether rsync verifies the transfer is correct? It’s also harder to do rollbacks, etc.).

        #!/bin/sh
        #
        # Deploy to a server. Assumes cwd is the root of the to-be-deployed project.
        #
        # This only copies the binary; you will still need to restart the app.
        #
        
        set -euC
        
        name=$(basename "$(pwd)")
        if [ ! -d "./cmd/$name" ]; then
        	echo >&2 "./cmd/$name doesn't exist"
        	exit 1
        fi
        
        # Build
        go generate ./...
        CC=musl-gcc go build -trimpath \
        	-ldflags "-X main.version=$(git log -n1 --format='%h_%cI')" \
        	"./cmd/$name"
        strip "$name"
        upx -qqq "$name"
        
        # Test
        go test -race -cover ./...
        
        # Send to servers.
        for s in 139.162.153.248; do
        	ssh -p9012 "$s" mkdir -p "$name/bin"
        
        	file="$name/bin/$name.$(date +%Y-%m-%dT%H:%M:%S).$(git log -n1 --format='%h')"
        	scp -P9012 "$name" "scp://$s/$file"
        
        	suml=$(sha256sum "$name")
        	sumr=$(ssh -p9012 "$s" sha256sum "$file")
        	rm "$name"
        	if [ "${suml%% *}" != "${sumr%% *}" ]; then
        		echo >&2 "checksums don't match:"
        		echo >&2 "  local:  $suml"
        		echo >&2 "  remote: $sumr"
        		exit 1
        	fi
        
        	ssh -p9012 "$s" ln -sf '$(readlink -f '"$file"')' "$name/bin/$name"
        	# Prune old binaries on the remote, keeping the 100 most recent entries.
        	ssh -p9012 "$s" cd "$name/bin" '&&' ls -1t '|' tail -n +101 '|' xargs rm -f
        done
        
        exit 0
        
        1. 2

          name=$(basename "$(pwd)")

          There’s no need for subshells; parameter expansion will do just fine:

          name="${PWD##*/}"

          :^)

          1. 1

            I’ve seen that trick before, but I never really trusted it to deal with all cases (although I can’t really think of a case where it would fail off the top of my head), so I tend to opt for basename. I also think it’s clearer.

            1. 1

              I’ve seen that trick before, […]

              This is not a trick; it’s standard POSIX-compatible shell behaviour. The most frequently used idiom would probably be:

              ${0##*/}
              
              1. 1

                I know how shells work. Path manipulation using parameter substitution doesn’t strike me as something that POSIX defines, and is thus a “trick”.

                It won’t work if you use prog /foo/, for example. Will $PWD ever be set to something with a trailing slash? Probably not, but I don’t really want to have to think about whether or not my trick will work in this particular case, so I just use basename instead of resorting to tricks that often-but-not-always work to get a performance gain that’s negligible even with hardware from the 90s.
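
                To illustrate (just a throwaway variable standing in for such an argument):

                p=/foo/
                echo "${p##*/}"   # prints an empty line: the pattern strips everything up to the last /
                basename "$p"     # prints "foo"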

                Imagine someone trying to do path manipulation using ad-hoc string mangling in a C or Python program, instead of using the standard functions for it. It wouldn’t pass any decent code review.

                1. 1

                  I know how shells work.

                  Never said you don’t :^)

                  Path manipulation using parameter substitution doesn’t strike me as something that POSIX defines, and is thus a “trick”.

                  It doesn’t define it because it’s a specific use case of the broader parameter expansion. If you look at the Examples section of the linked page, you’ll see paths very much being manipulated, so, the way I see it, it’s a suggestion/endorsement at the very least.

                  It won’t work if you use prog /foo/, for example.

                  Sorry, you’ve lost me a bit here - use it as what? Both PWD and 0 are set by the $SHELL.

                  […] but I don’t really want to have to think about whether or not my trick will work in this particular case, […]

                  Sure, I never claimed it’ll work in all cases. I think I was quite specific - using PWD wasn’t a mere example.

                  Imagine someone trying to do path manipulation using ad-hoc string mangling in a C or Python program, instead of using the standard functions for it. It wouldn’t pass any decent code review.

                  But we’re not manipulating just any old path; I’ve only mentioned PWD and 0, and these are defined by POSIX.

                  1. 1

                    I know how shells work.

                    Never said you don’t :^)

                    Yeah, sorry; that was a little bit snippy.

                    It won’t work if you use prog /foo/, for example.

                    Sorry, you’ve lost me a bit here - use it as what? Both PWD and 0 are set by the $SHELL.

                    My point is that I don’t want to have to think about where a variable comes from, or what possible values it can have. $(basename ..) always works. It’s just easier because I can use the same tool in every case, and don’t have to think about “is this tool right for this particular case?”

                    I also feel that $(basename "$PWD") just reads better than ${PWD##*/}, especially for people not very familiar with shell scripting, which are most people.

                    1. 1

                      I also feel that $(basename "$PWD") just reads better than ${PWD##*/}, especially for people not very familiar with shell scripting, which are most people.

                      That is a very valid point!

                      I always have to copy ${0##*/} from my existing scripts as I can never remember the exact order… was it a hash (#) or percent (%)… one or two… ;^)

                      Then again, I frequently have to consult man pages for seemingly more trivial things.

              2. 1

                Forgot to add that, most importantly, you aren’t spawning a subshell within a subshell, so you are two full shells lighter :^)

          2. 2

            git push heroku master for most of them. I’ve got the 12-factor model pretty well internalized, and with 2450 dyno hours a month I could keep three apps (of the maybe 40 live-ish ones I have on Heroku) running full-time.

            1. 2

              I built a project called phost[1] which is a command line application that I use to deploy static sites as subdomains of my website. This is what I do during development as a sort of minimal CD without having to deal with cloud providers.

              phost update deployment-name ./dist tar-gzip’s up the dist directory, uploads it to my server, and then creates + deploys a new version of the static site. It’s then available publicly at https://deployment-name.ameo.design/

              For my homepage[2] which is just a static site, I simply rsync it over to the serving directory of an Apache2 server: https://github.com/Ameobea/homepage/blob/master/Justfile#L32

              I’ve got Github Actions configured to build a static version of my site and then deploy it: https://github.com/Ameobea/web-synth/runs/330459508#step:6:1

              I deploy serverless APIs via Google Cloud Run: https://github.com/Ameobea/web-synth/blob/master/Justfile#L42

              For everything else, I just pull down the repo to my VPS, use some scripts that I run via the just[3] command runner on my server to build the image, stop + delete the old container, and replace it with a new one: https://github.com/Ameobea/robintrack/blob/master/backend/Justfile#L1
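
              Roughly, those recipes do something like this (not the actual Justfile; names and ports are placeholders):

              # rebuild the image and swap the running container; placeholder names/ports
              git pull
              docker build -t myproject-backend .
              docker stop myproject-backend || true
              docker rm myproject-backend || true
              docker run -d --name myproject-backend -p 4000:4000 myproject-backend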

              I have logs for everything running in tabs of a screen session. That piece could certainly be improved, but the next step would probably be a Kubernetes cluster, and I’ve already wasted enough time at work configuring that to not want to do the same for my personal projects.

              [1] https://github.com/ameobea/phost

              [2] https://github.com/ameobea/homepage

              [3] https://github.com/casey/just

              1. 2

                Building a Debian package that contains a daemon with a systemd unit file. Atomic deployment, self-contained, sandboxed, easy to copy, install, remove.
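
                A deploy then boils down to something like this (package and host names are placeholders, and the unit may already get restarted by the package’s postinst):

                # build the unsigned .deb, copy it over, install it, restart the daemon
                dpkg-buildpackage -us -uc
                scp ../mydaemon_1.2.3_amd64.deb deploy@my-vps:
                ssh deploy@my-vps 'sudo apt install -y ./mydaemon_1.2.3_amd64.deb && sudo systemctl restart mydaemon'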

                1. 2

                  Even for configuration files I use Debian packages that enhance, or act as alternatives to, the default configuration packages.

                  In addition I have a meta-package that depends on all such packages created by me, so aptitude always shows that there are no “outdated or locally installed packages”.

                2. 2

                  I’m running a couple of projects on one VPS.

                  Current status:

                  • running Guix System
                  • redeploying involves (see the sketch after this list)
                    1. run git pull in the local checkout of any project I want to update
                    2. run guix system reconfigure path/to/config.scm
                    3. run sudo herd restart ... for any services that changed
                  • downsides:
                    • Guix System itself is unstable
                    • I’m compiling everything on an underpowered VPS
                    • even besides compiling, guix tooling is very slow (e.g. a no-op guix system reconfigure)
                    • building from local checkouts is a bit messy and unprincipled, I might accidentally deploy local modifications
                    • rollbacks are impossible because guix system switch-generation requires a reboot
                    • having to restart services manually is a pain and error-prone

                  Some of these could be addressed with a bit of work, e.g. I believe I could offload the compilation (which would also force me to deal with the local checkouts).
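
                  Spelled out, a redeploy is roughly (paths and service names are placeholders):

                  # the three redeploy steps above as one script
                  cd ~/checkouts/myproject && git pull
                  sudo guix system reconfigure path/to/config.scm
                  sudo herd restart myproject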

                  Previous status:

                  • running Debian stable
                  • redeploying involved:
                    • build locally on macos for things I could cross compile (go projects, javascript), then rsync over
                    • build using CI (travis) for others (haskell projects), then wget
                    • either tell systemd to reload/restart the configured user-level services, or connect to tmux and switch to relevant window, interrupt, arrow-up, enter
                  • downsides:
                    • outdated dependencies, meaning manual installation of some daemons (postgres, postgrest, …)
                    • similarly, outdated dependencies made it hard to do any one-off development on the server
                    • I tended to make a mess of deploying javascript things with rsync

                  I’m a bit happier with the current situation, but only somewhat. Now that I’ve learned Guix, getting into Nix would probably be more feasible than before. Perhaps Guix on top of NixOS would offer a reasonable migration path; besides the slowness, Guix itself is mostly fine, and it’s Guix System that I have the most issues with.

                  One thing I don’t know how to solve nicely yet (in any setting, but particularly with Guix/Nix): How to deal with versioning / cache-busting of static web files. It seems that the right thing to do would be to have multiple versions of resources served simultaneously. Perhaps there’s a way to serve up the last couple generations?

                  1. 2

                    I’m compiling everything on an underpowered VPS

                    That’s probably also possible with Guix. On NixOS it’s fairly simple to build a system configuration on one machine and then ship the result to the target host over SSH. Assuming they both run the same kernel and arch: nixos-rebuild -I nixos-config=./target-configuration.nix --target-host mytargethostname switch. It’s also possible to provide a --build-host other-machine flag if you need a separate build machine.

                    One thing I don’t know how to solve nicely yet (in any setting, but particularly with Guix/Nix): How to deal with versioning / cache-busting of static web files. It seems that the right thing to do would be to have multiple versions of resources served simultaneously. Perhaps there’s a way to serve up the last couple generations?

                    That would be possible if the HTML pages pointed to /(guix|nix)/store entries for the static assets, and that folder were served by the webserver. Then all the CSS, JS and images would still be available until a garbage collection is run on the system.

                    1. 1

                      Yes, I believe there are ways to compile remotely with Guix, too. I haven’t tried to run the Guix tools on macOS though, and I doubt they would be able to cross-compile, so this would require setting up a VM, which is also a bit of a pain. The way to go there would probably be to build with some CI service, e.g. like Nix with Cachix.

                      Somehow serving the whole store sounds like a terrible idea, but thanks for the suggestion!

                      1. 1

                        Somehow serving the whole store sounds like a terrible idea, but thanks for the suggestion!

                        Haha yes, don’t put any credentials in your nix code if you do that!

                  2. 1

                    For my Elixir projects, I use edeliver, which leverages the language’s deployment features and a bunch of bash scripts + ssh. It’s got a bit of config, but I like how un-“magic” it is, and it ticks a lot of the boxes of “just deploy the thing.”

                    1. 1

                      $ git pull

                      $ ./clean-build ; ./update.sh

                      1. 1

                        I mostly develop on a VPS. For me, deploying a new version of a project usually consists of rebuilding the project and re-running the run script (which is a mix of setting env vars and putting the web server executable behind nohup).

                        But I develop mostly in either Go or Nim for hobby web projects, and rely on a port-based reverse proxy in Nginx.
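
                        The run script itself is basically just this (binary name, port, and env vars are placeholders):

                        #!/bin/sh
                        # rebuild, stop the old process, start the new one behind nohup;
                        # Nginx proxies e.g. example.com to 127.0.0.1:8081
                        go build -o ./bin/myapp .
                        pkill -f ./bin/myapp || true
                        APP_ENV=prod PORT=8081 nohup ./bin/myapp >> myapp.log 2>&1 &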

                        1. 1

                          VPS. git. Bare repos on the VPS; clone from and push to these for development; clone from them for deployment. This lets me check out a different branch (than master) for the deployment if I need to. Full stack tech on the VPS, like PostgreSQL, Node.js, Ruby. Run backend (API) daemons on the VPS; compile/build frontends (e.g. with webpack). Nginx to proxy from somedomain.com to the backend daemon, and also to serve the static build of the frontend. Manage daemon life with monit. Run the same backend and frontend in development on local machine(s), except the frontend is usually in dev mode, with auto reload, debug-friendly error messages and call stacks, inspectable with frontend dev tools (Vue), etc.
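
                          In commands, the skeleton looks something like this (names, paths, and the build command are placeholders, not my actual layout):

                          # one-time: bare repo on the VPS plus a deployment clone of it
                          ssh me@my-vps 'git init --bare repos/somedomain.git && git clone repos/somedomain.git apps/somedomain'

                          # development happens against the bare repo
                          git remote add vps me@my-vps:repos/somedomain.git
                          git push vps master

                          # deploy: update the clone (any branch will do) and rebuild the frontend
                          ssh me@my-vps 'cd apps/somedomain && git pull origin master && npm run build'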

                          The above setup serves my purposes 99% of the time. With properly-managed migrations and versioning (git tags), I get a very similar setup across dev, staging and prod. For now, my test suites are only run locally in development. No CI-ish stuff set up (yet).

                          1. 1

                            I’m going down a similar path but considering Dokku, to combine its buildpacks for simple sites and containers for the more complicated setups. I’ll probably end up with something mostly manual, although if something is getting updated often I’ll build out the automation.

                            1. 1

                              At home I have a Fabric script that logs into the server, does a git pull, then runs the Docker builds and copies/updates the systemd docker-compose configs. I’d rather have something like ECS, because it’s like a distributed docker-compose without the complexity (or cost!) of Kubernetes.

                              1. 1

                                I’ve been using Netlify to auto-deploy on each push to my website’s repo. This works flawlessly and only requires a single-line config in netlify.toml to specify the build command.

                                1. 1

                                  Circle CI workflows, typically pushing resources to S3 and/or Lambda. Here’s a sample configuration: https://github.com/PlanScore/PlanScore/blob/master/.circleci/config.yml

                                  I try to do everything with static files + serverless now, can’t afford the sysadmin overhead of running actual servers!

                                  1. 1

                                    I basically end up using Ansible; it is the best way I have found to manage a hobbyist server smoothly (and yet I still fear upgrades).

                                    1. 1

                                      Netlify makes it just too easy these days.

                                      1. 1

                                        All my static sites are published for free with https://surge.sh/.

                                        Everything else is built, tested, and deployed with NixOps.

                                        1. 1

                                          I use https://deployer.org (but only because I’ve built it 8-D)

                                          1. 1

                                            I use GitHub Pages to serve my front-end resources. The front-end app communicates with Firebase Auth/Storage. I only spend money on a GitHub subscription :)

                                            Each push to master updates the resources in GitHub, and they are automatically served to the next users.

                                            Previously I ran all this stuff on DigitalOcean, but the overhead of maintaining Nginx, HTTPS and other things not directly related to my hobby was too much.

                                            1. 1

                                              I have a VCS repo for each site/domain, and then a single ‘published’ VCS repo that holds the built artifacts that are published.

                                              So a deploy is the equivalent of a git pull on the published repo; my /var/www is just a VCS repo.

                                              CI/CD from built -> published commit is just a Makefile that runs. I happen to use sourcehut (sr.ht), so I have a build file that takes commits, builds the website, and checks it into my published repo.

                                              My web servers just do a VCS update every 5 minutes or so, as I don’t want CI/CD systems to have remote capabilities to execute stuff on my servers.
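
                                              The pull side is just a cron entry along these lines (the path is a placeholder):

                                              # crontab on the web servers: refresh the published checkout every 5 minutes
                                              */5 * * * * cd /var/www && git pull --ff-only --quiet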

                                              1. 1

                                                Thanks for all the replies. I have the feeling most of us fall into roughly the same part of the spectrum, between manual work and automating a few steps where the tooling easily supports it, with some outliers where now.sh/Netlify/GitHub Actions produce the desired outcome without endless fiddling.

                                                I’ll need to think about this a little longer, but I think I already have an idea of how to at least standardize my projects so that I can deploy all of them with just a few parameters per project.

                                                1. 0

                                                  We’ve been using Now.sh synced with GitHub to get automatic deployments on each push. It’s a pretty seamless process!