1. 73
  1. 70

    Nobody knows how to correctly install and package Python apps.

    That’s a relief. I thought I was the only one.

    1. 8

      Maybe poetry and pyoxidize will have a baby and we’ll all be saved.

      One can hope. One can dream.

      1. 4

        After switching to poetry, I’ve never really had any issues.

        pip3 install --user poetry
        git clone...
        cd project
        poetry install
        poetry run python -m project
        

        You can put the whole install sequence in a Docker container, push it from your CI/CD to ECR/GitLab or whatever registry you use, and just include both the manual steps and the docker command in your readme. Everyone on your team can use it. If you find an issue, you can add that gotcha to the docs.
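
        A minimal Dockerfile sketch of that idea (base image, module name, and paths are placeholders, adjust to your project):

        # assumes pyproject.toml/poetry.lock at the repo root and a "project" package
        FROM python:3.10-slim
        RUN pip install --no-cache-dir poetry
        WORKDIR /app
        COPY pyproject.toml poetry.lock ./
        RUN poetry install --no-interaction --no-root
        COPY . .
        CMD ["poetry", "run", "python", "-m", "project"]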

        Python is fine for system programming so long as you write some useful unittests and force pycodestyle. You lose the type-safety of Go and Rust, yes, but I’ve found it’s way faster to write. Of course if you need something that’s super high performance, Go or Rust should be what you look towards (or JVM–Kotlin/Java/Scala if you don’t care about startup time or memory footprints). And of course, it depends on what talent pools you can hire from. Use the right tool for the right job.

        1. 2

          I’ve switched to poetry over the last several months. It’s the sanest installing python dependencies has felt in quite a few years. So far I prefer to export it to requirements.txt for deployment. But it feels like about 95% of the right answer.

          It does seem that without some diligence, I could be signing up for some npm-style “let’s just lock in all of our vulnerabilities several versions ago” and that gives me a little bit of heartburn. From that vantage point, it would be better, IMO, to use distro packages that would at least organically get patched. I feel like the answer is to “just” write something to update my poetry packages the same way I have a process to keep my distro packages patched, but it’s a little rotten to have one more thing to do.
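
          For both of those, the commands are short (assuming a reasonably recent poetry; on newer versions export lives in the poetry-plugin-export plugin):

          # freeze the lock file into a pip-friendly requirements.txt for deployment
          poetry export -f requirements.txt --output requirements.txt

          # see what has drifted, then bump dependencies within pyproject.toml's constraints
          poetry show --outdated
          poetry update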

          Of course, “poetry and pyoxidize having a baby” would not save any of this. That form of packaging and static linking might even make it harder to audit for the failure mode I’m worrying about here.

        2. 1

          What are your thoughts on pipenv?

        3. 5

          I’d make an exception to this point: “…unless you’re already a Python shop.” I did this at $job and it’s going okay because it’s just in the monorepo where everyone has a Python toolchain set up. No installation required (thank god).

          1. 4

            I think the same goes for running Python web apps. I had a conversation with somebody here… and we both agreed it took us YEARS to really figure out how to run a Python web app. Compared to PHP where there is a good division of labor between hosting and app authoring.

            The first app I wrote was CGI in Python on shared hosting, and that actually worked. So that’s why I like Unix – because it’s simple and works. But it is limited because I wasn’t using any libraries, etc. And SSL at that time was a problem.

            Then I moved from shared hosting to a VPS. I think I started using mod_python, which is the equivalent of mod_php – a shared library within Apache.

            Then I used a CherryPy server and WSGI. (mod_python was before WSGI existed) I think it was behind Apache.

            Then I moved to gunicorn behind nginx, and I still use that now.

            But at the beginning of this year, I made another small Python web app with Flask. I managed to configure it on shared hosting with FastCGI, so Python is just like PHP now!!! (Although I wouldn’t do this for big apps, just personal apps).
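
            The FastCGI glue is tiny, roughly the stock flup snippet from the Flask docs (“yourapplication” is a placeholder for whatever the app module is called):

            #!/usr/bin/env python
            # .fcgi entry point the web server invokes
            from flup.server.fcgi import WSGIServer
            from yourapplication import app

            if __name__ == "__main__":
                WSGIServer(app).run()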

            So I went full circle … while all the time I think PHP stayed roughly the same :) I just wanted to run a simple app and not mess with this stuff.

            There were a lot of genuine improvements, like gunicorn is better than CherryPy, nginx is easier to config than Apache, and FastCGI is better than CGI and mod_python … but it was a lot of catching up with PHP IMO. Also FastCGI is still barely supported.

            1. 2

              nginx, uWSGI, supervisord. Pretty simple to set up for Flask or Django. A good shared hosting provider for Python is OpalStack, made by the people who created Webfaction (which, unfortunately, got gobbled up by GoDaddy).

              I cover the deployment options and reasoning in my popular blog post, “Build a web app fast: Python, JavaScript & HTML resources”. Post was originally written in 2012 but updated over the years, including just this month. See especially the recommended stack section at the end, starting at “Conclusion: pick a stack”, if you want to ctrl+f for that section. You can also take a peek at how OpalStack describes their Python + uWSGI + nginx shared hosting setup here. See also my notes on the under the hood configuration for nginx, uWSGI, and supervisord in this presentation, covered in the 5-6 sections starting from this link.

              You’re right that there are a lot of options for running a Python web app. But nginx, uWSGI, supervisord is a solid option that is easy to configure, high performance, open source, UNIXy, and rock solid. For dependency management in Python 3.x you can stick with pip and venv, remotely configured on your server via SSH.
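
              The nginx side of that stack is small; a minimal sketch (hostname and socket path are placeholders, not our production config):

              server {
                  listen 80;
                  server_name example.com;

                  location / {
                      # hand requests to the uWSGI app over a unix socket
                      include uwsgi_params;
                      uwsgi_pass unix:/run/myapp/uwsgi.sock;
                  }
              }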

              My companies have been using this stack in production at the scale of hundreds of thousands of requests per second and billions of requests per month – spanning SaaS web apps and HTTP API services – for years now. It just works.

              1. 2

                I’m curious, now that systemd is available in almost all Linux distributions by default, why are you still using supervisord? To me it feels like it is redundant. I’m very interested.

                1. 1

                  I think systemd can probably handle the supervisord use cases. The main benefit of supervisord is that it runs as whatever $USER you want without esoteric configuration, and it’s super clear it’s not for configuring system services (since that’s systemd’s job). So when you run supervisorctl and list things on a given node, you know you’re listing your custom apps (like uwsgi or tornado services), not every system-wide service mixed in with them. Also, this distinction used to matter more when systemd was less standard across distros.
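
                  A minimal supervisord program section showing the $USER point (names and paths are placeholders):

                  [program:myapp]
                  command=/home/deploy/venvs/myapp/bin/uwsgi --ini /home/deploy/myapp/uwsgi.ini
                  directory=/home/deploy/myapp
                  user=deploy
                  autostart=true
                  autorestart=true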

                  1. 1

                    Understood! Thanks very much for taking the time to explain!

                2. 1

                  Hm thanks for the OpalStack recommendation, I will look into it. I like shared hosting / managed hosting but the Python support tends to be low.

                  I don’t doubt that combination is solid, but I think my point is more about having something in the core vs. outside.

                  PHP always had hosting support in the core. And also database support. I recall a talk from PHP creator Rasmus saying how in the early days he spent a ton of time inside Apache, and committed to Apache. He also made some kind of data limiting support to SQL databases to make them stable. So he really did create “LAMP”, whereas Python had a much different history (which is obviously good and amazing in its own way, and why it’s my preferred language).

                  Similar to package management being outside the core and evolving lots of 3rd party solutions, web hosting was always outside the core in Python. Experts knew how to do it, but the experience for hobbyists was rough. (Also I 100% agree about not developing on Windows. I was using Python on Windows to make web apps from ~2003-2010 and that was a mistake …)

                  It obviously can be made to work, I mean YouTube was developed in Python in 2006, etc. I just wanted to run a Python web app without learning about mod_python and such :) Similarly I wish I didn’t know so much about PYTHONPATH!

                  1. 1

                    I agree with all that. This is actually part of the reason I started playing with and working on the piku open source project earlier this year. It gives Python web apps (and any other Python-like web app programming environments) a simple git-push-based deploy workflow that is as easy as PHP/Apache used to be, but also a bit fancier, too. Built atop ssh and a Linux node bootstrapped with nginx, uWSGI, anacrond, and acme.sh. See my documentation on this here:

                    https://github.com/amontalenti/webappfast-piku#build-a-web-app-fast-with-piku

                    1. 1

                      Very cool, I hadn’t seen piku! I like that it’s even simpler than dokku. (I mentioned dokku on my blog as an example of something that started from a shell script!)

                      I agree containers are too complex and slow. Though I think that’s not fundamental, and is mostly Docker … In the past few days, I’ve been experimenting with bubblewrap to run containers without Docker, and different tools for building containers without Docker. (podman is better, but it seems like it’s only starting to get packaged on Debian/Ubuntu, and I ran into packaging bugs.)

                      I used containers many years ago pre-Docker, but avoided them since then. But now I’m seeing where things are at after the ecosystem has settled down a bit.

                      I’m a little scared of new Python packaging tools. I’ve never used pyenv or pipx; I use virtualenv when I need it, but often I just manually control PYTHONPATH with shell scripts :-/ Although my main language is Python, I also want something polyglot, so I can reuse components in other languages.

                      That said I think piku and Flask could be a very nice setup for many apps and I may give it a spin!

                      1. 1

                        It’s still a very new and small project, but that’s part of what I love about it. This talk on YouTube gives a really nice overview from one of its committers.

                  2. 1

                    In addition to @jstoja’s question about systemd vs supervisord, I’d be very curious to hear what’s behind your preference for nginx and uWSGI as opposed to caddy and, say, gunicorn. I kind of want caddy to be the right answer because, IME, it makes certificates much harder to screw up than nginx does.

                    Have you chosen nginx over caddy because of some gotcha I’m going to soon learn about very unhappily?

                    1. 2

                      Simple answer: age/stability. nginx and uWSGI have been running fine for a decade+ and keep getting incrementally better. We handle HTTPS with acme.sh or certbot, which integrate fine with nginx.
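
                      Either one is a single command against an existing nginx config (domain is a placeholder):

                      # certbot's nginx plugin edits the server block and sets up renewal
                      certbot --nginx -d example.com

                      # or, with acme.sh in nginx mode
                      acme.sh --issue --nginx -d example.com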

                      1. 1

                        That’s a super-good point. I’m going to need to finish the legwork to see whether I’m willing to bet on caddy/gunicorn being as reliable as nginx/uWSGI. I really love how terse the Caddy config is for the happy path. Here’s all it is for a service that manages its own certs using LetsEncrypt, serves up static files with compression, and reverse proxies two backend things. The “hard to get wrong” aspect of this is appealing. Unless, of course, that’s hiding something that’s going to wake me at 3AM :)
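
                        Roughly, a Caddyfile of that shape (domain, paths, and ports here are placeholders rather than the real values) looks like:

                        example.com {
                            # TLS via Let's Encrypt is automatic for a public hostname
                            encode gzip
                            root * /var/www/site
                            file_server
                            reverse_proxy /api/* localhost:8001
                            reverse_proxy /hooks/* localhost:8002
                        }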

                3. 3

                  Why is Python’s packaging story so much worse than Ruby’s? Is it just that dependencies aren’t specified declaratively in Python, but in code (i.e. setup.py), so you need to run code to determine them?

                  1. 9

                    I dunno; if it were me I’d treat Ruby exactly the same as Python. (Source: worked at Heroku for several years and having the heroku CLI written in Ruby was a big headache once the company expanded to hosting more than just Rails apps.)

                    1. 3

                      I agree. I give perl the same handling, too. While python might be able to claim a couple of hellish innovations in this area, it’s far from alone here. It might simply be more attractive to people looking to bang out a nice command line interface quickly.

                    2. 6

                      I think a lot of it is mutable global state like PYTHONPATH, which feeds into sys.path. The OS, the package managers, and the package authors often fight over that, which leads to unexpected consequences.
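
                      You can watch the fight directly (the path is just an example):

                      # PYTHONPATH entries land near the front of sys.path, ahead of site-packages,
                      # so whoever sets it last effectively controls import resolution
                      PYTHONPATH=/opt/vendored-libs python3 -c 'import sys; print(sys.path)'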

                      It’s basically a lack of coordination… it kinda has to be solved in the core, or everybody else is left patching up their local problems, without thinking about the big picture.

                      Some other reasons off the top of my head:

                      • Python’s import mechanism is very dynamic, and also inefficient. So the language design kind of works against the grain of easy distribution, although it’s a tradeoff.
                      • There’s a tendency to pile more code and “solutions” on top rather than redoing things from first principles. That is understandable because Python has a lot of users. But there is definitely a big mess with distutils + setuptools + pip + virtualenv, plus a few other things.
                      • Package managers are supposed to solve versioning issues, and then you have the tricky issue of the version of the package manager itself. So in some sense you have to get a few things right in the design from the beginning!
                      1. 5

                        Ruby’s packaging story is pretty bad, too.

                        1. 3

                          In what way?

                          1. 4

                            I don’t know, it’s been a long time since I’ve written any Ruby. All I know is that we’re migrating the Alloy website from Jekyll to Hugo because nobody could get Jekyll working locally, and a lot of those issues were dependency related.

                        2. 4

                          Gemfile and gemspec are both just ruby DSLs and can contain arbitrary code, so that’s not much different.

                          One thing is that pypi routinely distributes binary blobs called “wheels”, which can be built in arbitrarily complex ways, whereas rubygems always builds from source.

                          1. 5

                            Not true. Ruby has always been able to package and distribute precompiled native extensions; it’s just that it wasn’t the norm for a lot of popular gems, including nokogiri. Which, by the way, ships precompiled binaries now, taking a couple of seconds where it used to take 15m, and now there’s an actual toolchain for targeting multi-arch packaging, and the community is catching up.

                            1. 2

                              Hmm, that’s very unfortunate. I haven’t run into any problems with gems yet, but if this grows in popularity the situation could easily get as bad as pypi.

                            2. 1

                              Thanks for the explanation, so what is the fundamental unfixable issue behind Python’s packaging woes?

                              1. 1

                                I could be wrong but AFAICT it doesn’t seem to be the case that the Ruby crowd has solved deployment and packaging once and for all.

                            1. 2

                              I just run pkg install some-python-package-here using my OS’s package manager. ;-P

                              It’s usually pretty straightforward to add Python projects to our ports/package repos.

                              1. 3

                                Speaking from experience, that works great up until it doesn’t. I have “fond” memories of an ex-coworker who developed purely on Mac (while the rest of the company at the time was a Linux shop), aggressively using docker and virtualenv to handle dependencies. It always worked great on his computer! Sigh. Lovely guy, but his code still wastes my time to this day.

                                1. 1

                                  I guess I’m too spoiled by BSD where everything’s interconnected and unified. The ports tree (and the package repo that is built off of it) is a beauty to work with.

                                  1. 4

                                    I’m as happy to be smug as the next BSD user but it isn’t justified in this case. Installing Python programs from packages works fine, but:

                                    • They don’t work well in combination with things not in packages, so if you need to use pip to install some things you may end up with conflicts.
                                    • The versions in the package repo may not be the ones that the thing you want to install from outside packages needs, and may conflict with the ones it does need.
                                    • The Python thing may depend on a package that relies on Linux-specific behaviour. The most common of these is that signals sent to the process are delivered to the first thread in the process.

                                    In my experience, there’s a good chance that a Python program will run on the computer of the author. There’s a moderately large chance that it will run on the same OS and version as the author. Beyond that, who knows.

                                    1. 3

                                      I mean, we used Ubuntu, which is pretty interconnected and unified. (At the time; they’re working on destroying that with snap.) It just often didn’t have quiiiiiite what we, or at least some of us, wanted and so people reached for pip.

                                      1. 1

                                        Yeah. With the ports tree and the base OS, we have full control over every single aspect of the system. With most Linux distros, you’re at the whim of the distro. With BSD, I have full reign. :-)

                                        1. 3

                                          But it could still be the case that application X requires Python 3.1 when application Y requires Python 3.9, right? Or X requires version 1.3 of library Z which is not backwards compatible with Z 1.0, required by Y?

                                          1. 3

                                            The Debian/Ubuntu packaging system handles multiple versions without any hassle. That’s one thing I like about it.

                                            1. 1

                                              Does it? Would love to read more about this if you have any pointers!

                                              1. 2

                                                I guess the main usability thing to read about is the alternatives system.
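
                                                Debian doesn’t register python as an alternative out of the box, but the mechanism looks like this (paths and versions are illustrative):

                                                sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.9 10
                                                sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.11 20
                                                sudo update-alternatives --config python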

                                            2. 2

                                              The ports tree handles multiple versions of Python fine. In fact, on my laptop, here’s the output of: pkg info | grep python:

                                              py37-asn1crypto-1.4.0          ASN.1 library with a focus on performance and a pythonic API
                                              py37-py-1.9.0                  Library with cross-python path, ini-parsing, io, code, log facilities
                                              py37-python-docs-theme-2018.2  Sphinx theme for the CPython docs and related projects
                                              py37-python-mimeparse-1.6.0    Basic functions for handling mime-types in Python
                                              py37-requests-toolbelt-0.9.1   Utility belt for advanced users of python-requests
                                              py38-dnspython-1.16.0          DNS toolkit for Python
                                              python27-2.7.18_1              Interpreted object-oriented programming language
                                              python35-3.5.10                Interpreted object-oriented programming language
                                              python36-3.6.15_1              Interpreted object-oriented programming language
                                              python37-3.7.12_1              Interpreted object-oriented programming language
                                              python38-3.8.12_1              Interpreted object-oriented programming language
                                              
                                  2. 1

                                    Fwiw, I’ve had good luck using Pyinstaller to create standalone binaries. Even been able to build them for Mac in Circleci.
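
                                    The basic invocation is just (script and tool names are placeholders):

                                    pip install pyinstaller
                                    pyinstaller --onefile --name mytool cli.py
                                    # the self-contained executable ends up in dist/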

                                    1. 1

                                      It can feel a bit like overkill at times, but I’ve had good luck with https://www.pantsbuild.org/ to manage python projects.

                                    2. 19

                                      I’m not sure I agree on the ‘Don’t Design for Multiple Cloud Providers’ point. It sounds as if his experience is designing for multiple cloud providers but deploying in only one. This means that your second-provider implementation is never tested and you’re always optimising for the first one. AWS probably isn’t going away, but a particular service in AWS that you depend on might. If AWS’s market position gets more entrenched, they can easily put up prices on you. This is why a lot of companies deploy a mix of AWS and Azure: it prevents either company from putting up prices in a way that would seriously impact your costs. If Azure gets more expensive, shift the majority over to AWS. If AWS gets more expensive, shift the majority to Azure.

                                      Advice that boils down to ‘don’t have a second source for your critical infrastructure’ feels very bad.

                                      1. 19

                                        AWS knows this, and they’ve structured their pricing to make deploying to multiple clouds prohibitively expensive by making internal AWS bandwidth cheap/free, but AWS<->rest of the world super expensive. There’s no point hedging your bets against AWS. Either you’re all-in, or avoid them entirely.

                                        I’ve worked for a company that had a policy of developing everything for AWS+Rackspace running in parallel. When AWS had an outage we could boast that we remained up, but it wasn’t even a big win, since most of our customers had other dependencies on AWS and were down anyway.

                                        1. 7

                                          I agree with the author. I was going about a kubernetes implementation and just had a realization that I was so focused on not tying my knowledge to one provider that I was wasting a lot of time. I specialized in Unix systems and other things in my career, and honestly AWS is so large that just being an AWS specialist is a skill in itself. Thankfully, the large cloud providers basically all clone each other, and if I use terraform and such, eventual porting is likely not impossible, but proving it constantly just isn’t worth my time. There are options to cost-control AWS, but they already win on being the lowest cost of the top tier clouds when I last compared prices. If I ever need to shift, I’m sure I will have some months to do so and the cost of those months will likely be lower than the investment in always testing on two clouds. I do not like this reality, but I think it makes the most sense.

                                          1. 8

                                            I agree that “being an AWS specialist is a skill in itself”.

                                            But so is being an Oracle DBA, or a Solaris sysadmin, or a Microsoft ActiveDirectory admin… and I feel strongly that tying my employment skills/identity to a corporation that I don’t even work for has to be a mistake.

                                            It doesn’t stop lots of people from being paid to do it. It’s just wrong for me. My whole life in computing has been about fundamentals lasting while market leaders come and go; that may be the effect of luck, though: IP, routing, switching, DNS, NTP, SMTP, SNMP, UNIX-like operating systems: I think all of these things will still be recognizable and valuable in 2050.

                                            1. 4

                                              I started early enough that I was able to learn the fundamentals and experience things I think younger people are going to miss out on. It will affect their ability to troubleshoot at lower levels for sure. I am sad that I no longer get to work with routing and switching hardware outside of my home. I’ve always had the perspective that I should learn what I’m getting paid to use–and often just use that at home as well. I had Sun workstations at home when that was my life. I run my personal stuff on AWS to practice now, but still pay through the nose for a home switch with SNMP. AWS’ DNS and NTP are just going to be better than mine. I don’t use their SMTP due to cost, but I would love to never touch SMTP again. I run personal servers on DO and Alicloud also due to geo and pricing and honestly the terraform and other processes are not significantly different. If I’m being paid to make the best choices for someone, then I have to be open to everything AWS offers if they are on AWS. And, I’m still doing all of this because I enjoy that there’s always something new to learn. I would never make AWS my only skill.

                                              1. 0

                                                Just apply at tailscale?

                                          2. 6

                                            The entire list seemed right, except for that point.

                                            Yes, it’s hard to design for multiple clouds. Technically it’s very hard. However strategically, the chance of a single provider shutting you down or killing your favorite service or charging too much for you to survive… these are all more likely than the cloud provider going away.

                                            Technically hard. But doing so successfully can make all the difference in a bad spot.

                                            1. 6

                                              Usually this is only worth it if multi-cloud is actually part of your business proposition. If you’re not Hashicorp or Pulumi…probably not.

                                            2. 5

                                              Well, the whole article is AWS-specific (to the point I get the weird urge to run away), so I’m not surprised.

                                              Modern AWS-dependent customers are more a cult than anything else.

                                              1. 2

                                                It kind of looked like an AWS ad to me.

                                              2. 1

                                                I agree. While they might not go away, all big cloud providers have proven that they are not magically immune to large scale failure (and certainly not small scale), so as with everything in IT it makes sense to invest in a fallback strategy.

                                                It also can be good to keep things portable, because business decisions (internal and external) might require migrations.

                                                For newer projects it’s also easier to at least aim for being portable in that regard. Using Hashicorp tools, using minio or seaweedfs or anything S3 compatible, as well as Kubernetes (or nomad if you want to self manage), significantly reduces the amount of work required compared to only a few years ago.

                                                Yes, it’s not zero effort, but being able to do this at all isn’t a given when you have big stakeholders with a huge interest in locking you in. Given that, things have become relatively simple.

                                                I actually not so long ago had a client that ordered me to make an AWS setup work on GCP (partly technical, partly business reasons). They were not migrating, but required it to run on multiple clouds.

                                                How quick that went surprised both them and me, but it certainly also was a well made setup in first place, so the list of things that had to be changed was short and easy to figure out.

                                                I am sure it would have been a lot more complex only a few years before that, so maybe that recommendation is only true for older setups?

                                                Everything comes with effort, but to anyone thinking about that I’d recommend actually looking into it a bit to see if it’s really as much effort as you initially might think.

                                              3. 11

                                                Don’t write internal cli tools in python

                                                I think this is shortsighted, bad advice. Binaries are simpler to distribute, this is true. But I would never, ever, ever write infra scripts in Go or Rust. At least not the kind that I end up writing. Simple scripts that perform a few commands, a little bit of argument parsing, a few HTTP requests where concurrency doesn’t matter. Nothing crazy.

                                                In this case, a scripting language is so much more productive. I currently use Python for this, and it’s just not a problem with pipenv.

                                                1. 4

                                                  Can you define what you mean by “infra scripts”? If your scripts need to be run by anyone other than yourself, then it seems to be pretty well-established that the productivity granted by Python is outweighed by the efficacy of distribution granted by Go.

                                                  1. 6

                                                    Infra scripts meaning scripts that power a CI / CD pipeline, dev tooling for everyday operations like creating hotfixes / finding which environments commits live in, ops scripts like rolling back to previous versions of containers in ECS.

                                                    Probably important background is that I generally work at product companies with less than 100 engineers. These scripts aren’t open source tools that are distributed to random people. These are just internal tools that are automating parts of our workflow for building and shipping code.

                                                    For that use case, I would never in a million years use Go.

                                                    1. 3

                                                      Gotcha! If you can rely on your consumers having a minimum version of Python installed on their machines then this seems reasonable. Go certainly isn’t a great choice if you’re just doing bash++ 👍

                                                      1. 6

                                                        Yes in this case I can message my “consumers” because they are my coworkers. We also provide a script which keeps everyone’s dependencies in sync, but all that does is install pipenv in the case of Python. We also use pipenv-shebang so a script can be invoked with no knowledge of python dependencies.

                                                        Yes, distributing a binary is simpler. I would consider that if these were tools that were used by people that I didn’t work with, but our dev environment is already automated so we keep the team in sync that way.

                                                        Actually, the bash++ effect was the main reason to switching to Python. We got relatively far with just bash, but after a while it becomes unmanageable. You can’t compare curl and jq to something like using boto3 to interact with AWS.
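
                                                        A hypothetical example of the kind of thing that’s a one-liner with boto3 but painful with raw curl + jq (cluster and service names are made up):

                                                        import boto3

                                                        ecs = boto3.client("ecs")
                                                        # which task definition is this service actually running right now?
                                                        resp = ecs.describe_services(cluster="my-cluster", services=["my-service"])
                                                        print(resp["services"][0]["taskDefinition"])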

                                                      2. 1

                                                        Interesting, I would love to hear more about how you managed to get all that to work. In one of the previous companies we tried to have python in our CI stack, but it was always a hassle with dependencies, different python versions (as different people used different python locally), and generally pipenv being so slow to download and install dependencies. True, we could have done many things better, but we had elastic workers, so many had to be started from scratch and tools installed on the first run of any pipeline (with many different pipelines).

                                                        1. 2

                                                          We just use pipenv. Like I said, these are relatively simple scripts. Many times the only dependency is something like click, a CLI library. Right now when a new CI worker is started, it does have to install dependencies like you said, but that’s taking ~20 seconds, not minutes. One thing we’re considering is creating a Docker image with all of the dependencies installed and just using that. But it’s honestly not even noticeable at this point.

                                                  2. 9
                                                    [x] Don't migrate an application from the datacenter to the cloud   
                                                    [ ] Don't write your own secrets system   
                                                    [x] Don't run your own Kubernetes cluster   
                                                    [x] Don't Design for Multiple Cloud Providers
                                                    [x] Don't let alerts grow unbounded
                                                    [ ] Don't write internal cli tools in python
                                                    

                                                    The déjà vu feeling is intense, only missing 2.

                                                    1. 3

                                                      Advice to design for one cloud provider – isn’t this analogous to ‘Design for Windows only’ or ‘Design for Oracle only’?

                                                      I remember that it was certainly ‘faster to market’ to do that. But license costs, a broader talent pool, access to broader innovation, and reach to less-well-funded clients were not on the obvious list of advantages.

                                                      1. 2

                                                        I did the shift from datacenter to AWS a long time ago, but things were written with simple server type equipment in mind. So, the biggest shocks were simply that it was going to take much larger instances to replicate the hardware we had. It was a sticker shock. After that the work was identifying the biggest wins for adapting what we had to what AWS offered to cut costs. The big wins came early, and over time we were able to add more and push costs down to close to what they had been before. It is still probably more expensive, but it is also much more agile when I need more or less resources in a way that physical hardware would never offer. It was a mature application, and stopping to rewrite would have not made sense for the company with its resources. So, I disagree with the first point in at least some cases.

                                                        I think the case for not running your own kubernetes cluster doesn’t go far enough. I think EKS is for many users just an option given to them to give them the sense that they aren’t giving up control and knowledge. Most people just don’t need the extended configurability and the technical debt is significant. If you seriously consider things, I think ECS actually makes the most sense for the vast majority of situations. It feels like FM, but that is kind of the point. You’re already buying into AWS to some degree, so why not just give in and use the services they offer. It’s my understanding Amazon runs very large ECS services internally, and efficient use of your time makes the most business sense.

                                                        For alerting, you have to make sure your alerts make sense and that you diligently handle them. The whole point is to automate things so your time goes farther, so if something alerts it should be analyzed to see if it can be prevented in the future. If you start ignoring alerts they are either not necessary or you aren’t fixing them or they actually are rare cases. Also, you should pay attention to the problems you do have that aren’t covered by an alert and look at how that might be a missed metric across other parts of your project. Pager burnout is a serious problem that you cannot afford to have. It signals a failure in at least one place in your organization. This is often where operations has to push back on development to fix root causes. If the error message is indecipherable and you haven’t briefed ops on how things work enough to diagnose the issue, I am going to look at the git log for who is responsible and call them. AWS offers a horrible service (the name I forget), which is ostensibly for automating responses and writing playbooks for errors, but I think is probably most used to automate reboots and encourages bad habits for all but the largest systems.

                                                        Python, Ruby, Rust, and Go all suffer from the dependency problems. I can’t deploy NixOS into production, but one thing I love is being able to write a derivation for some random python thing and fence it in with its dependencies. I think you just have to use shell scripting when it is simple, and standardize on a language for actual tools and stick with it. Configuration management should really handle dependencies, and having everyone just use the same language means that everyone is comfortable with the same style and language. Every language essentially sucks in its own special way anyway. Compiled languages for admin/ops tools have always seemed like the wrong way to go due to the additional complexity of rolling them out in traditional linux environments. Also, your tool may end up being replaced by an industry standard one down the line and you should just migrate to that. I have often had my own tool before someone else’s took off and became better at what I was doing.

                                                        1. 7

                                                          Python, Ruby, Rust, and Go all suffer from the dependency problems

                                                          How do Rust and Go suffer from dependency problems? They both ship statically-linked binaries, right?

                                                          1. 3

                                                            Security issues, large numbers of dependencies for simple tasks if you aren’t careful.

                                                            1. 11

                                                              I don’t really understand. All languages that support importing third-party packages are subject to dependency concerns, but Rust and Go discourage dependencies to the maximum feasible extent… what language/ecosystem has a better stance in this dimension?

                                                              1. 1

                                                                I think OP is referring to the way both make it very easy and normal to pull in dependencies that go ALL the way down, making a simple project with a few dependencies rely upon hundreds of different codebases. This is something we recognize as a problem in nodejs, but have replicated again since it’s just so easy.

                                                                1. 2

                                                                  Ah, I see. Thanks for restating it.

                                                                  To some degree I think this is kind of a natural consequence of the productivity expectations placed on modern software engineers. Those self-contained projects of yesteryear had a lot more time budget to work with, and fewer table stakes features to check off the list.

                                                                  1. 1

                                                                    I think that’s correct, but we could still make heavily nested dependencies less popular, at least in theory.

                                                                    1. 1

                                                                      Sure! You can do that through culture and through tooling. I think Go stakes out a position that’s about as far in this direction as a language can feasibly manage in our zeitgeist. But maybe there’s even more that could be done!

                                                        2. 1

                                                          Out of curiosity, how would you change this article for the BSD community?

                                                          1. 1

                                                            Not saying I completely agree with everything, as with many things it depends a lot on specifics, but what made you think it’s Linux specific?

                                                          2. 1

                                                            I don’t quite understand the issue with secret management. I have never had to manage something like that before, but I’m sure I will at some point. What makes the postgREST solution hard to manage?

                                                            1. 2

                                                              I think the observation is not about the API backend tool (in this case postgREST), but more about the features, nuances, and challenges of meeting the needs of the clients of a secrets management service.

                                                              Perhaps a way to understand the various challenges in this space is to look at one of the open source secrets management solutions, for example: https://learn.hashicorp.com/collections/vault/secrets-management

                                                              You will see there various features/challenges that they are addressing. Very difficult to anticipate those things, unless you have lots of experience in this space.

                                                            2. 1

                                                              For the internal CLI tools one, I’ve started feeling like I don’t want internal CLI tools.

                                                              On one hand, I have tools to query the system. On the other hand, I want to write programs to slice and dice stuff about the system. For the former, I would much rather have a web interface that’s easy to change. Then the team can steadily reify the stuff in their heads into particular views and hyperlinks. For the latter, I’d rather have an actual programming language, maybe a hyperlink from the web app to a Jupyter notebook or something with a shared library of functions already set up.

                                                              The CLI is a super low friction way to do a quick once off…but when you’ve dealt with similar logistics for the tenth once off, it starts to feel like you should solve the underlying problem.