1. 7

    All of those solutions wait for apt itself. If you want to synchronise the tasks themselves, you can wait for the initial provisioning to finish using:

    systemd-run --property="After=apt-daily.service apt-daily-upgrade.service" --wait /bin/true
    
    1. 1

      Oh that’s neat! Didn’t know you could do that.

      In the specific case I was running into (didn’t include it in the post in the end as it ended up feeling like a weird tangent) I don’t think it would have worked. It turns out one of the AWS-provided packages we’re using runs apt-get commands at startup if it detects you don’t have certain things installed, which I was not thrilled to find out, but sometimes you get what you’re given 😅

      1. 2

        In that case you can likely add a “cloud-init.target” (or the service itself) to the “After” list.
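        A drop-in along those lines might look like this (a sketch only; the drop-in path and unit name are placeholders for whatever your task actually runs as):

        ```ini
        # /etc/systemd/system/my-task.service.d/wait-for-provisioning.conf
        # Hypothetical drop-in: don't start until cloud-init and the apt
        # maintenance services have finished.
        [Unit]
        After=cloud-init.target apt-daily.service apt-daily-upgrade.service
        ```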

        1. 2

          Okay I have some more reading to do. Thank you for the pointers!

    1. 2

      Something I’ve found really important when recognising and dealing with burnout for myself is getting to know what tends to burn me out.

      I used to think it was just about the number of hours you worked in a week, maybe with some pressure to deliver mixed in. Over time I realised that (at least for me) it’s as much about the content of those hours.

      Give me the occasional week where I spend a few too many hours working on something I’m excited about over a consistent 9-5 where I’m struggling to care.

      Obviously week after week of long hours eventually leads to burnout too, but that one feels much easier to spot and deal with than a general sense of malaise.

      1. 29

        I love PostgreSQL, and I’m really grateful for posts like this that balance my opinion and present a well-argued counter-argument. However, when I read posts like this I notice a pattern: all these downsides seem to be relevant to really huge, performance-sensitive projects with enormous loads. The kind of projects where you likely have over a hundred developers, and probably should move away from one master relational database to micro-services and message queues as the backbone of your architecture.

        What I’m saying is, this kind of criticism just highlights the fact that PostgreSQL is probably the best choice if you’re smaller than that, both in terms of load and team size.

        1. 19

          I almost hit the XID wraparound (I was 2-3 days from an outage; it was bad enough that AWS emailed me) when my company only had 4-5 devs and 100 customers. And I’ve hit connections-related performance issues at least four or five times through the last five years, at relatively modest scale (<100 application servers, in an overwhelmingly read-heavy workload). This affected us as recently as yesterday, as we are bringing a Timescale DB (which is a Postgres extension) into production and we are faced with the prospect of tiered pgBouncers or breaking the Timescale access into its own microservice.

          I love Postgres but these are real and they can hit earlier than you expect.
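          (For anyone who wants early warning on the wraparound side: a query sketch you could wire into monitoring. The ~2 billion figure is the hard XID limit; pick your own alert threshold well below it.)

          ```sql
          -- Age of the oldest unfrozen transaction ID per database.
          -- Wraparound danger grows as this approaches ~2 billion.
          SELECT datname, age(datfrozenxid) AS xid_age
          FROM pg_database
          ORDER BY xid_age DESC;
          ```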

          1. 4

            I love Postgres, and I was strongly considering moving to it…but for our use case, it simply requires too much care and feeding. We ship databases to customers in appliance form, meaning all the maintenance has to be invisible and automatic and Postgres simply isn’t there.

            1. 6

              Having worked for a company that did that and been responsible for Postgres tuning, I say it can be done. In nearly 15 years of shipping out a postgresql db as part of an appliance, I have not seen any of these problems.

              Edit: Except for long upgrade times. That one is a PITA.

              JADP.

              1. 4

                I’d love to hear about your experience if you have time.

                Also, I’m drawing a blank on “JADP”…

                1. 3

                  Just Another Data Point?

                  1. 1

                    Just A Data Point. A weird acronym I picked up from old-timey online fora like USENET, and The Well.

                    I probably can’t say too much more about my experience with postgres tuning as:

                    1. It was for a company, and might be considered proprietary information.
                    2. It was about 5 years ago and I really don’t recall all that well what I did.

                    Sorry. Just know that these are really rare problems if you’re dealing with the limited scale inherent in incorporating postgresql as part of an appliance. Most of them deal with replication, or ginormous tables. They’re web-scale problems, not appliance-scale problems.

                2. 2

                  what do you use instead?

                  1. 1

                    What are you planning on using instead?

                    1. 1

                      I do this as well. There’s definitely a discovery period but I’ve reached the point that I almost never have to check anything on the database side for roughly 200 customers, running varying versions from 9.6 to 11.9.

                    2. 4

                      Definitely echo that these problems (and others) can hit you way before you get to 100 devs. We were running into the pains mentioned in this article (which admittedly is a more general critique of SQL databases, and lands on MySQL over Postgres) at more like 10 developers.

                      It absolutely isn’t that hard to run into the pitfalls of SQL databases at relatively small scale, especially if you’re using them for OLTP workloads in services where uptime/response times matter.

                    3. 5

                      all these downsides seem to be relevant to really huge, performance-sensitive projects with enormous loads. The kind of projects where you likely have over a hundred developers

                      A number of these issues affect “normal” users of PostgreSQL as well:

                      • Replication is something you may want even in smaller setups.

                      • “Wasted” space from the “update is really delete + insert” paradigm can be a problem even in fairly small use cases (e.g. tens of millions of rows). It can make some simple operations rather painful.

                      • Lack of query hints is pretty annoying, especially for smaller users who don’t have a dedicated DBA with a Ph.D. in the PostgreSQL query planner. It’s also a massive help in development; want to try a new index? Now it’s drop index, create index, wait, analyse, wait some more, run some queries, discover that didn’t do what you expected, drop the index, create a different one, wait, etc. It’s very time-consuming and much of the time is spent waiting.
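                      One thing that takes some of the sting out of the index-experiment loop: Postgres has transactional DDL, so you can trial an index inside a transaction and throw it away. A sketch with made-up table and column names; note that CREATE INDEX in a transaction takes a write lock, so do this against a copy of production, not the live database:

                      ```sql
                      BEGIN;
                      -- hypothetical table and index, for illustration only
                      CREATE INDEX orders_customer_idx ON orders (customer_id);
                      EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
                      ROLLBACK;  -- index discarded; nothing to drop or clean up
                      ```

                      You still pay the index-build wait, but the drop/clean-up half of the cycle disappears.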

                      1. 5

                        Nice example: “complaining” that it is hard to tune it for a million concurrent connections.

                        Haven’t read it to the end yet, almost hoping to see an ending like “of course I’m happy to have a free DB that gets me in trouble for a million concurrent connections instead of breaking my bank at 1000 connections or when somebody touches advanced debugging like Oracle or

                        1. 5

                          FYI you should have read it through to the end, as the article does end on that note.

                      1. 7

                        One thing I don’t quite get a sense of: is this intended as a generic list or one specific to their product?

                        Some items seem like they’d apply to most software (e.g. easy to install). Advanced theming support seems pretty context-dependent.

                        1. 4

                          I agree. Also, most of the points in the list could be summarized as “Make it not suck” and as the intro says, even us programmers would like to provide them. The issue is complexity, budget, and/or time constraints. Not so much the will to implement them.

                        1. 3

                          Because [Podman] doesn’t need a daemon, and uses user namespacing to simulate root in the container, there’s no need to attach to a socket with root privileges, which was a long-standing concern with Docker.

                          Wait, Docker didn’t use user namespacing? I thought that was the whole point of Linux containers.

                          1. 7

                            There are two different things called user namespaces: CLONE_NEWUSER, which creates a namespace that doesn’t share user and group IDs with the parent namespace, and the kernel configuration option CONFIG_USER_NS, which allows unprivileged users to create new namespaces.

                            Docker and the tools from the article both use user namespaces as in CLONE_NEWUSER.

                            Docker by default runs as a privileged user and can create namespaces without CONFIG_USER_NS. I’m not sure whether you can run Docker as an unprivileged user given its other features, but technically it should be able to create namespaces without root if CONFIG_USER_NS is enabled.

                            The tools described in the article just create a namespace and then exec into the init process of the container. Because they are not daemons and don’t do much more than that, they can run unprivileged if CONFIG_USER_NS is enabled.

                            Edit: Another thing worth mentioning, in my opinion: UID and GID maps (which are required if you want to have more than one UID/GID in the container) can only be written by root, so tools like podman use two setuid binaries from shadow (newuidmap(1) and newgidmap(1)) to do that.
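                            A quick way to see the CLONE_NEWUSER side of this from an unprivileged shell, assuming CONFIG_USER_NS is enabled (uses unshare(1) from util-linux, whose --map-root-user flag writes the single-entry UID/GID maps for you):

                            ```shell
                            # Create a new user namespace, mapping the current user to root
                            # inside it, then ask who we are. No real privileges are gained.
                            unshare --user --map-root-user id -u
                            # prints: 0
                            ```

                            podman’s newuidmap/newgidmap dance is the multi-ID generalisation of the single-entry map this flag writes.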

                            1. 1

                              It can, but for a long time it was off by default. Not sure if that’s still true.

                            1. 23

                              Put me squarely in the don’t understand the webcam stickers camp. What’s on my screen is 99% more likely to be interesting than what’s in front of it. Like, why try to extort me with a video of me picking my nose when you can just remote drive my browser and empty my bank account. And then there’s the whole microphone thing. It’s hard to imagine a threat model where webcam stickers are relevant.

                              1. 48

                                I was squarely in the same camp… until WebEx started my video on a call when I didn’t want it to, and a nice view of me (and my wife!) in bed wearing pyjamas (I was dialling in from 6 timezones ahead to listen to a town hall meeting) was projected on the wall for everyone to enjoy.

                                I’m not worried about evil malware, I’m worried about WebEx ;-)

                                1. 19

                                  I had that happen with google hangouts while I was listening to a call on the toilet. That was a bad moment. With “continuous deployment” this is bound to happen unpredictably.

                                  1. 10

                                    Yup, IMO badly-written conference calling software is a much more realistic and everyday threat than teh evil hackerz. WebEx and Hangouts and other systems seem to be constantly changing their UI and behavior, yet always seem to really want to broadcast video. And then sometimes pop up some other modal dialog blocking the buttons to stop it. It’s worth it IMO to definitely never ever send out video unless I’ve explicitly okayed it first, no matter what some marketing manager thinks would help them increase their engagement by 1%.

                                    1. 6

                                      I was fortunate enough to be dressed when it happened to me. Conference software has the worst defaults.

                                    2. 42

                                      My threat model isn’t malicious attackers as much as incompetence. I use webcam covers in case (1) a program that I trust has some mindblowing lapse in competence and turns on my webcam unexpectedly or (2) I fat-finger a video call button without noticing.

                                      1. 4

                                        Ah, I hadn’t thought about that too much since I rarely use such software. Also, I think this thread is the first time I’ve seen someone mention that. It’s always the evil hackers that get blamed instead.

                                        1. 2

                                          Do you use a webcam cover on your smartphone too?

                                          1. 5

                                            I removed the front camera in my phone. It was useless for me, and I didn’t like the idea of never knowing if some app was using it.

                                          2. 1

                                            But what’s the big difference compared to disabling the webcam in your BIOS settings?

                                            1. 18

                                              With a piece of electrical tape over the camera I can “re-enable” it in seconds without rebooting for the times when I do actually need it. Disabling it in the BIOS is a good option if you know you’ll really never need it though.

                                              1. 1

                                                Fair enough, but for someone who never needs it, this doesn’t really change a lot…

                                              2. 14

                                                Stickers/covers are simple in every aspect of their operation.

                                                1. 4

                                                  Exactly! Most people’s understanding of stickers/covers allows them a fairly high degree of confidence that it’s working. You can hold the sticker up to the light to confirm that it’s opaque to visible light and you can see that it covers the lens. You can also run a camera application to see what it can see. By comparison, it is incredibly difficult to confirm that a BIOS setting does what it says it does.

                                            2. 14

                                              It’s hard to imagine a threat model where webcam stickers are relevant.

                                              Porn and whacking off to it. I believe one Black Mirror episode was centered on that. I think blackmail on such footage is a credible threat even if you’re not into kinky/illegal stuff. And even if not anything as sleazy as that, there’s something quite disturbing in a random person essentially being inside your house looking around with you having no clue about it.

                                              1. 4

                                                The real threat seems to be people worried about the threat, given all the “I caught you visiting a naughty site, you know which one, pay me bitcoins” spam I get.

                                                1. 4

                                                  You do understand that there’s a pretty big difference between the two situations, right? Someone leaking that you visited a naughty site isn’t really comparable to someone leaking pictures or video of you.

                                                  1. 0

                                                    The scam threat obviously includes “I hacked your webcam” blah blah. Sorry for not posting the entire spam here.

                                                    1. 1

                                                      Right, that makes sense. I’ve never actually read such a spam e-mail; if I get any, they just end up caught in the spam filter.

                                                      You would presumably take the threat more seriously if someone contacted you with some actual proof, such as showing an actual image of you naked taken from your webcam?

                                                      1. 1

                                                        I’ve had this email a few times, and they spoof the sender address to make it look like it came from your own email address. This at least gives the illusion of them having hacked you specifically.

                                                  2. 2

                                                    In a lot of the country, getting caught viewing porn can hurt your career or your ability to run for office. It’s hypocritical, given that lots of people in those same areas watch porn. It’s a reality, though. This is also true for lots of other habits or alternate lifestyles cameras might reveal.

                                                    1. 4

                                                      In some countries, any consumption of anything deemed immoral can have even more devastating consequences. I know a guy from a small Persian Gulf country — a son of a late imam too — who was scammed for a few thousand euros recently by a con-artist he found on Grindr.

                                                      Losing a few thousand euros is not the harsh consequence in this scenario.

                                                2. 10

                                                  I mostly agree, but I don’t think you should need to choose. I’d prefer HW switches for the microphone, webcam, and wireless, and allowing only whitelisted HID device instances to be active.

                                                  As I see it, Microsoft and Apple (even more so) have started to realize that there is user demand for more privacy. The next Windows update will notify you when there is an active microphone recording going on, for example. I think this is not a bad direction, but too little too late for my taste.

                                                  Also, I think it is a design flaw that in current Windows versions it is still so simple to globally register every keystroke, and that in Windows UWP and Android there are so many grouped capabilities, yet you still have to grant an app a capability either in advance or forever, rather than just for the moment it needs it…

                                                  I don’t have much experience with Apple products.

                                                  Edit: regarding webcams:

                                                  You need to take into account that the line between digital and physical life is getting thinner and blurrier. I often leave my machine running when I leave home, as it is energy-efficient, and I might need to log in remotely, or a download is running in the background. A malicious actor could get information about my physical whereabouts, or about an opportunity for home invasion, should they deem it profitable.

                                                  1. 1

                                                    started to realize

                                                    This is hardly new. Apple’s 2003 external webcam model, the iSight, included a manual iris shutter/switch that rotated to both disable the device and physically obstruct the camera. Fashions change.

                                                  2. 4

                                                    There’s a mix of bad things people are doing right now and some things they could do with it that they’ll figure out eventually. I’m not writing about the latter since I prefer them to be delayed.

                                                    For now, I’m for being able to totally disable inputs, specific wireless, etc. for a simple reason: no access by default until it needs it (POLA). No power by default until it needs it, if available. I can try to guess every bad thing that can happen with risky peripherals, or I can just shut them down when not using them. Covering my webcam is an easy way to shut down its vision. My old laptop had a wireless switch, too. My old speakers didn’t act up when I had to turn something down quickly, since the knobs actually worked. Killed power with the last turn.

                                                    On a related note, I also buy old, dumb appliances without smart anything. They also last longer, are cheaper, and have no smart anything for people to hack. If there’s a risk from hackers, just eliminate it where it’s easy. Then, don’t think about it again.

                                                    1. 3

                                                      Funny, I had never even thought of tape over the webcam as a security measure.

                                                      For me it’s entirely there to make sure I’m not on camera when I join meetings unless I explicitly want to be.

                                                      1. 1

                                                        If you buy a new laptop there is no choice between with or without a webcam. I don’t need it and never use it, ergo I put a sticker on the camera; a simple and pragmatic solution.

                                                        1. 1

                                                          Well, someone can take over your bank account and take your photo.

                                                        1. 5

                                                          While this article looks at safety by analysing outcomes in a medical context, I think a lot of the thinking in there could be ported over to running the kind of software systems that many of us here are responsible for.

                                                          The core idea resonated really strongly with me. We stand to learn a lot from systems that are quietly successful, rather than focusing mostly on how we fixed the ones that loudly failed.

                                                          It also spoke to an idea that I agree with strongly: approaches that think everything can be solved simply by adding another process or rule for people to follow doom us to the same sub-par outcomes we know today. Or as phrased more eloquently in the article:

                                                          you cannot inspect safety or quality into a process: the people who do the process create safety

                                                          1. 2

                                                            It also suggests that policies don’t create our successes, which is probably not what most people want to hear.

                                                          1. 37

                                                            What about dependencies? If you use python or ruby you’re going to have to install them on the server.

                                                            How much of the appeal of containerization can be boiled directly down to Python/Ruby being catastrophically bad at handling deploying an application and all its dependencies together?

                                                            1. 6

                                                              I feel like this is an underrated point: compiling something down to a static binary and just plopping it on a server seems pretty straightforward. The arguments about upgrades and security and whatnot fail for source-based packages anyway (looking at you, npm).

                                                              1. 10

                                                                It doesn’t really need to be a static binary; if you have a self-contained tarball the extra step of tar xzf really isn’t so bad. It just needs to not be the mess of bundler/virtualenv/whatever.
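                                                                 A minimal sketch of that workflow, with a stand-in shell script playing the role of the built app:

                                                                 ```shell
                                                                 # Build machine: produce a self-contained app directory, then bundle it.
                                                                 mkdir -p app
                                                                 printf '#!/bin/sh\necho hello from the app\n' > app/run
                                                                 chmod +x app/run
                                                                 tar czf app.tgz app/

                                                                 # "Server": unpack and run. No bundler/virtualenv step in sight.
                                                                 mkdir -p deploy
                                                                 tar xzf app.tgz -C deploy
                                                                 deploy/app/run
                                                                 # prints: hello from the app
                                                                 ```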

                                                                1. 1

                                                                  mess of bundler/virtualenv/whatever

                                                                  virtualenv though is all about producing a self-contained directory that you can make a tarball of??

                                                                  1. 4

                                                                    Kind of. It has to be untarred to a directory with precisely the same name or it won’t work. And hilariously enough, the --relocatable flag just plain doesn’t work.

                                                                    1. 2

                                                                      The thing that trips me up is that it requires a shell to work. I end up fighting with systemd to “activate” the VirtualEnv because I can’t make source bin/activate work inside a bash -c invocation, or I can’t figure out if it’s in the right working directory, or something seemingly mundane like that.

                                                                      And god forbid I should ever forget to activate it and Pip spews stuff all over my system. Then I have no idea what I can clean up and what’s depended on by something else/managed by dpkg/etc.

                                                                      1. 4

                                                                         No, you don’t need to activate the environment; this is a misconception I also had before. Instead, you can simply call venv/bin/python script.py or venv/bin/pip install foo, which is what I’m doing now.
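                                                                         In other words (the venv directory name is arbitrary):

                                                                         ```shell
                                                                         # Create a virtualenv and use it without ever "activating" it.
                                                                         python3 -m venv venv
                                                                         # Each entry point knows its own environment; no `source bin/activate`,
                                                                         # no shell tricks required, so it works fine from systemd or cron.
                                                                         venv/bin/python -c 'import sys; print(sys.prefix)'
                                                                         # Installing works the same way: venv/bin/pip install <package>
                                                                         ```

                                                                         This also sidesteps the systemd fight mentioned above: ExecStart can point straight at venv/bin/python.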

                                                                      2. 1

                                                                        This is only half of the story because you still need a recent/compatible python interpreter on the target server.

                                                                    2. 8

                                                                      This is 90% of what I like about working with golang.

                                                                      1. 1

                                                                        Sorry, I’m a little lost on what you’re saying about source-based packages. Can you expand?

                                                                        1. 2

                                                                          The arguments I’ve seen against static linking are things like you’ll get security updates etc through shared dynamic libs, or that the size will be gigantic because you’re including all your dependencies in the binary, but with node_packages or bundler etc you’ll end up with the exact same thing anyway.

                                                                          Not digging on that mode, just that it has the same downsides of static linking, without the ease of deployment upsides.

                                                                          EDIT: full disclosure I’m a devops newb, and would much prefer software never left my development machine :D

                                                                          1. 3

                                                                            and would much prefer software never left my development machine

                                                                            Oh god that would be great.

                                                                      2. 2

                                                                        It was most of the reason we started using containers at work a couple of years back.

                                                                        1. 2

                                                                          Working with large C++ services (for example in image processing with OpenCV/FFmpeg/…) is also a pain in the ass because of dynamic library dependencies. Then you start to fight with package versions, and each time you want to upgrade anything you’re in a constant struggle.

                                                                          1. 1

                                                                            FFmpeg

                                                                            And if you’re unlucky and your distro is affected by the libav fiasco, good luck.

                                                                          2. 2

                                                                            Yeah, dependency locking wasn’t a (popular) thing in the Python world until pipenv, but honestly I’ve never had any problems with… any language package manager.

                                                                            I guess some of the appeal can be boiled down to depending on system-level libraries like imagemagick and whatnot.

                                                                            1. 3

                                                                              Dependency locking really isn’t a sufficient solution. Firstly, you almost certainly don’t want your production machines all going out and grabbing their dependencies from the internet. And second, as soon as you use e.g. a python module with a C extension you need to pull in all sorts of development tooling that can’t even be expressed in the pipfile or whatever it is.
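                                                                              On the first point, pip can at least split fetching from installing, so production hosts never reach out to the internet (a sketch; vendor/ is an arbitrary directory name, and this still doesn’t solve the build-tooling problem for C extensions):

                                                                              ```shell
                                                                              # Build machine (has network): download pinned dependencies as files.
                                                                              pip download -r requirements.txt -d vendor/

                                                                              # Production host (no network needed): install only from vendor/.
                                                                              pip install --no-index --find-links vendor/ -r requirements.txt
                                                                              ```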

                                                                            2. 1

                                                                              you can add node.js to that list

                                                                              1. 1

                                                                                A Node.js app, including node_modules, can be tarred up locally, transferred to a server, and untarred, and it will generally work fine no matter where you put it (assuming the Node version on the server is close enough to what you’re using locally). Node/npm does what VirtualEnv does, but by default. (Note if you have native modules you’ll need to npm rebuild but that’s pretty easy too… usually.)

                                                                                I will freely admit that npm has other problems, but I think this aspect is actually a strength. Personally I just npm install -g my deployments which is also pretty nice, everything is self-contained except for a symlink in /usr/bin. I can certainly understand not wanting to do that in a more formal production environment but for just my personal server it usually works great.

                                                                              2. 1

                                                                                 Absolutely, but it’s not just Ruby/Python. Custom RPM/DEB packages are ridiculously obtuse and difficult to build and distribute; fpm is the only tool that makes it possible. Dockerfiles and images are a breeze by comparison.

                                                                              1. 2

                                                                                Relatedly: the 8 year outstanding bug to make Net::HTTP handle character encodings at all.

                                                                                1. 3

                                                                                  One of the GoCardless SREs here.

                                                                                  Happy to discuss anything and answer questions, though I’m going to bed in the next hour!

                                                                                  1. 2
                                                                                    1. What’s one lesson you learned from this incident that would be useful to share with developers who are not SREs?
                                                                                    2. What’s one assumption you had challenged?
                                                                                    1. 5
                                                                                      1. Cold code is broken code. Code that only exists to handle failure is most susceptible to this. A more common example is an infrequently run (e.g. monthly) cron job. In many cases I’d prefer it to run daily, even if it has to no-op at the last second, so that more of the code is exercised more often. Better still, in some cases it could do its work incrementally! Either way is better than having the job fail on the day it really has to run.
                                                                                      2. Our ability to manually take actions that have been handled by automation for a long time. Turns out that’s not so good, and prolonged the incident even after we’d decided to bring Postgres up by hand.
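                                                                                      The “cold code” idea in point 1 — run the job daily so the code path stays warm, even if it usually no-ops — can be sketched like this (the date check and messages are illustrative, not from the incident):

                                                                                      ```shell
                                                                                      # Cron invokes this every day; the expensive work only
                                                                                      # happens on the 1st, but the surrounding code is
                                                                                      # exercised daily, so breakage surfaces early.
                                                                                      should_run() {
                                                                                          [ "$1" = "01" ]   # real work only on the 1st of the month
                                                                                      }

                                                                                      if should_run "$(date +%d)"; then
                                                                                          echo "doing the monthly work"
                                                                                      else
                                                                                          echo "no-op: code path exercised, nothing to do"
                                                                                      fi
                                                                                      ```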
                                                                                  1. 2

                                                                                    The article’s claim seems bold. Could you not apply this quote to Prolog?

                                                                                    Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints.

                                                                                    1. 4

                                                                                      Yeah, one of the more common criticisms of this article floating around is that it seems unaware that large parts of the idea aren’t new. Which is a bit odd, since Karpathy is a smart and well-read guy, so maybe it just leaves out the “related work” section for punchiness and rhetorical effect. But since the whole claim is that this is a totally new way of looking at software, it makes for a weird read.

                                                                                      Prolog itself doesn’t do exactly that; with standard logic programming, you encode the logic directly by writing clauses, rather than giving input/output examples. But inductive logic programming is a version that does; you give it input/output examples and it induces the program’s clauses. There’s also genetic programming as a somewhat better-known set of techniques. As well as program synthesis, a more formal-methods take on it.

                                                                                      The real new part is the implicit claim that, essentially, “it works now”. GP is notoriously difficult to get anything useful out of, and clearly Karpathy thinks NN-based program induction won’t suffer the same fate. But that to a large extent remains to be seen…

                                                                                      1. 1

                                                                                        Gonna blame tiredness for that one. The bit in parens in my quote definitely doesn’t fit Prolog! The part I’m contesting is the idea that specifying programs in terms of constraints and relying on computers to explore a program space is new.

                                                                                    1. 1

                                                                                      Projects have their own goals, and I don’t see why those should be dictated by distros.

                                                                                      I’m very much in favour of projects setting out their approach to support in a way that works for them. Ultimately, if $distro wants to maintain an ancient version of your work indefinitely, then good luck to them.

                                                                                      One project I’m involved in has a take on this which boils down to “we work on all versions of Ruby and Rails still in security support by upstream”. It felt like a reasonable trade-off to make, considering the finite amount of time we have to work on it.

                                                                                      1. 4

                                                                                        The hard part is gonna be balancing my time between the two!

                                                                                        1. 1

                                                                                          I think with the newer Macs you can use Touch ID to protect keychain entries. Combine the two and you’re getting closer to the security level of the separate hardware key!

                                                                                          1. 5

                                                                                            The behaviour is really surprising when you’ve not run into it before (and hard to reason about even when you have).

                                                                                            The title is super clickbaity though. I can’t think of a single mainstream relational database that defaults to serialisable transactions.

                                                                                            1. 1

                                                                                              It may slip under the radar for many, but I have so much <3 for this commit mentioned in the article.

                                                                                              I have a pretty strong preference for handling failover at Layer 7 rather than Layer 2/3 (i.e. with virtual IPs). This change makes that way easier!

                                                                                              1. 6

                                                                                                I’m starting to find it odd when a service with 2FA doesn’t offer TOTP as the main option.

                                                                                                It’s widely supported. You don’t need a bunch of different physical tokens/separate apps to authenticate. It’s more secure than SMS.

                                                                                                1. 3

                                                                                                  Most embarrassing is the fact that PayPal still only offers SMS. Their 2FA messages are often delayed or dropped, too.

                                                                                                1. 1

                                                                                                  I think deadlines passed with every I/O (including lock acquisition) are the only way out of this.

                                                                                                  https://golang.org/pkg/context/ is the only time I’ve seen it supported at a language level.
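                                                                                    A minimal sketch of that pattern in Go (`slowOp` is a stand-in for any blocking I/O call or lock acquisition):

                                                                                    ```go
                                                                                    package main

                                                                                    import (
                                                                                    	"context"
                                                                                    	"errors"
                                                                                    	"fmt"
                                                                                    	"time"
                                                                                    )

                                                                                    // slowOp simulates a blocking call that honours the caller's
                                                                                    // deadline instead of waiting indefinitely.
                                                                                    func slowOp(ctx context.Context) error {
                                                                                    	select {
                                                                                    	case <-time.After(500 * time.Millisecond): // the simulated work
                                                                                    		return nil
                                                                                    	case <-ctx.Done(): // deadline/cancellation propagated by the caller
                                                                                    		return ctx.Err()
                                                                                    	}
                                                                                    }

                                                                                    func main() {
                                                                                    	// The deadline travels with the context through every call.
                                                                                    	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
                                                                                    	defer cancel()

                                                                                    	err := slowOp(ctx)
                                                                                    	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // prints "true"
                                                                                    }
                                                                                    ```

                                                                                    The crucial part is that the callee doesn’t pick its own timeout: the caller’s deadline flows down through every operation, which is what makes the approach compose.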

                                                                                                  1. 2

                                                                                                    Not really clear to me what the author means. Should everyone just use Spanner? There isn’t anything else out there like Spanner (although CockroachDB is trying).

                                                                                                    1. 2

                                                                                                      I don’t think there’s a production-ready equivalent that you can run yourself (closed or open source).

                                                                                                      FoundationDB had a bunch of the guarantees, minus the SQL interface. Then Apple bought it and shut it down right away (side note: how terrifying is the idea of your database software no longer being available?).

                                                                                                      CockroachDB doesn’t seem quite there yet. I really want it to be.

                                                                                                    1. 1

                                                                                                      Reminder that nothing is tradeoff-free.

                                                                                                      Reminder that you’ll have to structure your data a certain way not to run into a throughput wall (true of many databases).

                                                                                                      Reminder to read the Spanner paper to find these things out.

                                                                                                      That said, Spanner seems dope.