1. 46
  1.  

  2. 9

    I'm in the middle of deploying my website on OpenBSD, and it really is simple; it's been a pain-free experience (for now).

    1. 2

      Deploying a website once is quite simple. What I find hard is going beyond rsyncing the files up there and migrating a db schema, all manually.

      I know about lots of tools: Ansible, Puppet, Docker, what have you, but how do I set up a server and, say, a Python-based web app so that it can be deployed to the server, rolled back a version if a deployment goes awry, and all in a reproducible way where I can add more servers for more power if demand scales, and where I don’t have to remember all of it off the top of my head and do it manually again if a new server is needed, but it’s all “scripted”, or written down, you might say.

      Now that is what I think is hard, and it is the reason that I only host a couple of static sites myself; the rest goes onto Heroku, or is in the hands of the sysadmins at work.

      If anyone could guide me in the right direction, or git push the knowledge onto my internal remote, please!

      1. 2

        I really like Heroku-like platforms (and I’ll throw Deis, Dokku, Managed VMs, and a few others in there), but I think, even in that case, it’s really important to understand how your app is actually going to be deployed. Heroku doesn’t save you from that; it just provides a (very good!) pre-canned deployment script that you need to be familiar with. You will eventually need to understand how that all actually manifests, and make a decision about whether an alternative process is the one you actually want.

        For example, here are five (equally good, depending on context!) ways to deploy Python:

        1. For each version you wish to deploy, create a virtualenv on the server for that app version, upload a tarball of the source, and then run a shell script that activates the virtualenv and runs setup.py from the application’s sources. This is the simplest solution to implement, but requires a full Python build environment on the server. It’s also what Heroku does.
        2. For each version you wish to deploy, build a .wheel, push it to a local PyPI-compatible server (e.g., devpi), and then, on the server, create a virtualenv and pip install your package and its dependencies into it. This is more complex, but still really simple, and can avoid needing the full Python toolchain on the server.
        3. For each version you wish to deploy, vendor all the libraries, make a wheel of that, push that to your PyPI server, and install it directly (no virtualenv required). This now requires a lot more build process (for library vendoring), but runs even faster. On the downside, you now have a hermetically sealed Python environment.
        4. For each version you wish to deploy, vendor all the libraries, make a DEB or RPM package of that mess (making sure it depends on the Python interpreter), and push that to your local package server. This is considerably more work, but allows you to use the same distribution mechanism for all of your software, regardless of language. This is how e.g. Mercurial is packaged in most Linux distributions.
        5. Same as above, but instead of your package depending on the Python package, bundle your own Python interpreter, using something like cx_Freeze. This will produce the leanest and most consistent install option, but is by far the most work. (Building a Docker image also arguably falls into this bucket, even though the details are different.)

        Which is right for you? I have no idea. The first option is great for small apps; the last is the only way to fly if you need 100% consistency and sane deployment times across a large build farm. Heroku picks a single one of these options for you, and it’s unquestionably the right one for simple stuff (and scales surprisingly far for more complex stuff), but it doesn’t free you from learning the nuts and bolts at some point.
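        To make option 2 concrete, here’s a rough sketch of the whole procedure “written down” as code instead of memory. Everything here — the app name, the paths, and the devpi URL — is an invented placeholder, not something Heroku or devpi prescribes:

```python
# Option 2, sketched as two command lists: one for the build machine,
# one for each app server. Paths and names are placeholders.

def build_commands():
    """Commands for the build machine: make a wheel and push it to devpi."""
    return [
        "python setup.py bdist_wheel",
        "devpi upload",  # pushes dist/* to the index you are logged in to
    ]

def deploy_commands(app, version, index_url):
    """Commands for each server: one fresh virtualenv per deployed version."""
    venv = "/srv/%s/venvs/%s" % (app, version)
    return [
        "python3 -m venv %s" % venv,
        "%s/bin/pip install --index-url %s %s==%s" % (venv, index_url, app, version),
        # rollback is just repointing this symlink at an older virtualenv
        "ln -sfn %s /srv/%s/current" % (venv, app),
    ]
```

        The per-version virtualenv plus a “current” symlink is what makes rollback cheap: an older version is still sitting there, fully installed.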

        1. 1

          Thank you for your reply! You have gone above and beyond and described a lot of stuff, but there is still something in my mind asking “but how?”.

          There is nothing on your list where I think “whoa, what, how?”. It all sounds simple, or more accurately, doable. On the other hand, there is the stuff that you don’t mention, and that is what I am having trouble with myself.

          Say I choose one of the examples … Let’s just say #2 … All the things you describe are doable manually. But how do you automate it? And not only setting up a server where you have an automated procedure between your local machine and the server so you can just run make deploy. How do you, if need be, add a second server without going through the same manual process you went through to set up the first one? Because right after the second one is up and running, you need to set up a third one, and a fourth. Oh, and somewhere you have a fifth server that acts as a load-balancer (which I don’t know much about either), and this server should automatically know about the nth server so it can use all available servers. And then there is the database. You might need more than one, and they might need to run on their own servers, not on the servers where the app is running. But how do you provision them, migrate data between them, and manage the master–slave relationship, all while the app servers know the address they should use to contact the db?

          Doing the setup manually with one server is in my experience easy. But being able to reproduce it on a whim to add more servers, and take a server down, and orchestrating the whole thing, making all the different parts play together… Sigh. I just don’t even.

          1. 1

            Is https://bitbucket.org/snippets/bpollack/arbLj helpful? That’s a (slightly better commented) Fabfile for how my blog used to get deployed. I can find an Ansible equivalent if you want.
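            In case the link goes away, the gist of it has roughly this shape (paths, hosts, and the service name are changed, and the remote “run” is passed in as a function so you can read it without Fabric installed):

```python
# Rough shape of a release-directory fabfile. `run` is whatever executes
# a command on the remote host (e.g. Fabric's run); everything else here
# -- paths, repo location, service name -- is invented for illustration.

RELEASES = "/srv/blog/releases"

def deploy(version, run):
    """Deploy one tagged version into its own release directory."""
    target = "%s/%s" % (RELEASES, version)
    run("git clone --branch %s /srv/blog/repo.git %s" % (version, target))
    run("python3 -m venv %s/venv" % target)
    run("%s/venv/bin/pip install -r %s/requirements.txt" % (target, target))
    run("ln -sfn %s /srv/blog/current" % target)
    run("rcctl restart blog")  # OpenBSD's service control

def rollback(version, run):
    """Rolling back is just repointing the symlink and restarting."""
    run("ln -sfn %s/%s /srv/blog/current" % (RELEASES, version))
    run("rcctl restart blog")
```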

            1. 1

              So if I understand it, that is “just” a script that you used to deploy (get the new version onto the server and start it) to any number of servers. But there would still need to be some sort of setup on the servers, right? What is their “base”, if you get my meaning? And what about orchestrating the servers? There is no need to deploy MyApp to two servers if only one of them gets hit by traffic.

              Sorry if I’m asking too much :)

    2. 1

      I know it’s not going to happen but I wish they had gzip compression planned for httpd.

      1. 4

        Yeah, I also really wish httpd had support for setting arbitrary headers. Requiring relayd for setting Expires/Cache-Control, charset, CORS, etc. just adds yet another moving part. At least HSTS is possible now. Not sure why they didn’t just add support for setting arbitrary headers, though.
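        For anyone wondering what that extra moving part looks like, the relayd.conf incantation is roughly this (from memory — check relayd.conf(5) before trusting it; addresses and ports are examples):

```
# relayd in front of httpd, only to set headers httpd can't set itself
ext_addr="0.0.0.0"

http protocol "headers" {
        match response header set "Cache-Control" value "max-age=86400"
        match response header set "Access-Control-Allow-Origin" value "*"
}

relay "www" {
        listen on $ext_addr port 80
        protocol "headers"
        forward to 127.0.0.1 port 8080   # httpd listening locally
}
```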

        1. 5

          Informally, I think of httpd as ftpd that speaks http.