Threads for zimbatm

  1. 2

    This technique is also something we use with Nix. I’ve noticed that setting up integration tests like that tends to pay off quickly, since they’re easy to run locally and finish fast.

    1. 2

      100MB is still pretty large to hold a single binary. There might be another 8-10x potential size gain in there.

      1. 2

        Good point, but I didn’t mention that this container has a bunch of frontend assets (probably around 8 MB) and a GeoIP DB embedded in it (about 70 MB). I think those are taking the bulk of the space (outside the Rust binary)…

        1. 1

          Now added a section in the article. Thanks a lot for the inspiration :)

      1. 1

        I’m curious why Terragrunt wasn’t good enough. It also has templating capabilities and has been around for a long time now.

        1. 2

          Mostly because of the code injection capabilities.

          We inject >1200 AWS providers (accounts * regions) by default. Stacks also declares variables for you and injects the state backend for all stacks’ layers… It gives us a level of flexibility no other tool in the space does.

          Also, we already had tons of Terraform deployments without Terragrunt, so moving to it would have been a huge effort we didn’t want to make. Instead we wrote Stacks, which is backwards compatible (from Terraform’s POV, nothing changed).

          I mention Terragrunt at 8:00.

        1. 5

          It’s not the only place they do that, and it’s getting worse over time. The whole desktop is starting to feel like a glorified Edge/Bing page.

          For a while, it was possible to change the microsoft-edge: scheme handler, but it is now hard-coded to Edge, so the override has to go deeper: https://github.com/rcmaehl/MSEdgeRedirect

          1. 4

            I’m curious what the cause for the performance increase is - is it just that the on-prem hardware is that much better than the cloud hardware?

            1. 8

              Like caius said, the hardware is probably better. But the various virtualization layers also introduce overhead and jitter.

              Here is a good read on the subject: https://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtualization-2017.html

              1. 2

                Probably the hardware is better specced, but there’ll also be less running on it. AWS has multiple customers on the same hardware, so they have to ensure nothing leaks cross-tenant. And there’s the noisy-neighbour problem: when you’re the only customer on the box, you can tune it much more easily, knowing there isn’t someone sat there rinsing CPUs next to you. Not sure what they’re doing for storage, but that is likely local rather than network-attached too.

                Turns out having dedicated hardware running one tenant is superior in performance. Reminds me of the (many) times we (re)discovered that during the days of VMs vs bare servers.

                1. 2

                  This + fewer abstraction layers with on-prem hardware. The closer to the metal you are, the more performance you get - always.

                2. 2

                  IIRC Ruby benefits from faster single-core speed, so moving to on-prem is going to give you some benefit. Jeff Atwood’s Why Ruby? is old, but covers a lot of points. I haven’t kept up with how Discourse are doing their hosting, but Jeff has mentioned core performance over the years on Twitter.

                  I see other comments about Ruby having a VM, but that’s often only a problem when you have limited awareness of your platform and of how to manage performance on it. In Bing’s move to .NET 7 with Principal Engineer Ben Watson you can hear a commentary on how awareness of generations in the .NET GC can help you optimize, along with the implications when the GC itself is modified. You can make similar comments about Python GIL conversations that never address the nature of the performance problem.

                  1. 2

                    I’m not sure if they still sell them, but for a while Intel sold chips with 128 MiB of eDRAM as last-level cache. We got some for FPGA builds (single-threaded place and route, CPU bound with fairly random memory access patterns) and they were a lot faster than anything else Intel sold for this workload (lower core counts and lower clock speeds, but when you have a single-threaded workload that spends a big chunk of its time waiting on cache misses, this doesn’t matter so much). I don’t think any cloud providers offer them, but they’d probably be a huge win for a lot of these workloads.

                    1. 1

                      AMD’s 3D V-Cache chips have large amounts of proper SRAM L3 cache. My Ryzen 7 5800X3D has 96MB, and the new Ryzen 9 7950X3D has 128MB. Most of the benchmarking on them has been for gaming. I’d be curious to see web backend benchmarks though.

                1. 11

                  Thanks for posting this. This is much better feedback than a “the filesystem is read-only” type of error. I went ahead and implemented it for nixpkgs as well. https://github.com/NixOS/nixpkgs/pull/229166

                  1. 5

                    I’ve recently evaluated sops-nix and, although I appreciate that managing keys is hard and ssh keys are pretty much the only thing anyone ever manages successfully, and that furthermore I am the only person in the world who likes gpg, I really don’t think reusing the ssh host key as the root of all trust is the right call. It violates principles around key reuse, which at best is sketchy and at worst may lead to vulnerabilities.

                    1. 2

                      Sounds good, but what alternative do you propose?

                      sops-nix is a pragmatic upgrade over storing plain text secrets in the git repo. It’s pretty easy to use and doesn’t require more infrastructure. There is a bit more work to enrol new host keys.
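
                      For reference, enrolment usually boils down to turning the host’s ssh key into an age recipient. A rough sketch (the hostname and secrets file are made up; it assumes the ssh-to-age helper packaged in nixpkgs):

                      # print the host key as an age recipient
                      nix-shell -p ssh-to-age --run 'ssh-keyscan -t ed25519 myhost | ssh-to-age'
                      # add the recipient to .sops.yaml, then re-encrypt the affected secrets
                      sops updatekeys secrets/myhost.yaml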

                      For highly sensitive projects you might want to use some sort of KMS, deploy Vault or something like that, but then it either only works in one cloud, or requires more infrastructure.

                      1. 1

                        I have been studying the problem and reached the conclusion that any management of build-time secrets that meets my needs inherently requires impure evaluation. This is pretty unfortunate because of how flakes turn on pure evaluation by default… I’ve been weighing whether to propose a small extension to the semantics, but people let me know about some existing features that haven’t yet been leveraged for secret management, so in my Copious Free Time*, I’ve been figuring out whether there’s a good approach that doesn’t require additional changes to the language or CLI tools.

                        [*] Copious Free Time is a fictional character from oral traditions, and is not subject to copyright. Therefore I can make no guarantees about it actually existing.

                        (edit: syntax)

                    1. 3

                      The voting analogy is interesting. The article says that telemetry is a form of voting; by providing the data, users can influence the development of the product to fit their needs best. This is treating users as passive consumers. In contrast, the older open-source ecosystem is based on voting through patches; we used to all be developers, and the tools we would build would be for us. Depending on the component’s depth in the system, there is probably a gradient between both positions.

                      One aspect the article isn’t speaking to is that, in the larger picture, it’s impossible not to leak information when using computers. If you have ever run a packet sniffer, you will have seen that your computer is constantly talking to services left and right. Each has a particular reason to exist: the clock wants to stay in sync and talks to the NTP server; the printer driver intends to alert you if the toner needs to be replaced; the system checks whether security updates need to be applied. The result is that your computer is a firehose of information leakage. Adding more telemetry just makes the problem worse.

                      1. 3

                        Different Bash implementations have subtle differences that make it hard to eliminate inconsistencies and edge cases—and it’s hard to discover those in the first place because Bash is all but untestable.

                        Am I being gaslit here, or where are the other bash impls besides https://www.gnu.org/software/bash/ ?

                        1. 6

                          I would guess that part is talking about disparate platform and version number combinations of Bash, unless I am also uninformed on some other indie Bash impl

                          1. 8

                            I agree that it’s speaking a bit loosely about platform/version differences, plus utility/env differences.

                            For example, here’s a dumb edge-case we hit in the official installer around a bug in the bash that ships with macOS: https://github.com/NixOS/nix/pull/5951

                            Another recent example: the installer was using rsync for ~idempotently copying the store seed into the nix store. Debian, iirc, lacked rsync, so someone changed it to a cp command. But the flags didn’t support an idempotent copy, so a lot of people started getting hard errors during partial reinstalls that would’ve otherwise worked.

                            We’ve also run into trouble recently because the platforms we were supporting all used GNU diffutils. I took advantage of some of its flags for formatting less-programmer-centric diffs for some state-curing actions, and then macOS Ventura promptly dropped GNU diffutils for its own homegrown version without these flags.

                          2. 1

                            Just different versions. macOS ships with Bash 3.2, which is 10+ years old and has subtle bugs around empty arrays and other areas.
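
                            For example, this three-liner (my sketch, not from the thread) behaves differently across versions:

                            set -u
                            arr=()
                            echo "args: ${arr[@]}"  # Bash <= 4.3 (incl. macOS's /bin/bash): aborts with "unbound variable"; Bash 4.4+ prints "args: "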

                            1. 1

                              My Mac install nags me to move over to zsh. I just use bash as a launcher, so I don’t really care.

                          1. 5

                            FWIW, I believe Nix solves this by hashing the contents of the archive, and not the archive itself. That depends on having some format to turn the contents (multiple files and directories) into a single stream you can hash, and for Nix that’s the NAR format, which is simple enough.

                            It boils down to repacking the archive in a deterministic format (no timestamps, consistent ordering of files, etc.) and hashing that instead. But if the hash is all you care about, you can just stream the NAR into your hasher, and not actually write anything to disk or keep it in memory. That’s important for dealing with large archives, which I assume is also something Bazel cares about.
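
                            To see it in action: nix-store --dump writes the NAR serialization to stdout, so you can hash a tree without materializing the archive (the path here is just an example):

                            nix-store --dump ./some-tree | sha256sum
                            # newer Nix CLIs hash the NAR serialization in one step (SRI output):
                            nix hash path ./some-tree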

                            1. 5

                              Unzipping it first risks getting bitten if there’s some security bug in the unzipper.

                              1. 1

                                True. It would be best if Nix stored the content size alongside the hash.

                                1. 1

                                  Which would also get broken if the archiver changes.

                                  1. 1

                                    I mean the unpacked content size. If that changes then you have other problems :)

                                    1. 1

                                      There are exploits in unarchiving more dangerous than filling all the space.

                              2. 3

                                It makes sense for Nix, but this is a weird problem to have with Git repos specifically, since Git already has deterministic hashes of everything stored in it. Big https://xkcd.com/2021/ energy.

                                Doing the hashing git’s way (i.e. recursively) is also just nice, because it lets you reuse work.
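
                                For instance, git will already hand you a content hash that ignores commit metadata, plus per-entry hashes, so unchanged subtrees keep their hash:

                                git rev-parse 'HEAD^{tree}'  # hash of the tree contents, independent of author/date/message
                                git ls-tree HEAD             # per-file and per-subtree hashes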

                              1. 1

                                It seems to be not very widely implemented - none of the four extraConfig entries in my configuration.nix seem to have settings equivalents yet.

                                1. 3

                                  This RFC should be thought of first and foremost as a basis for new modules. By using this approach we can provide a solid foundation for them, with great flexibility for future changes.

                                  For existing modules, it is often not possible to use this settings style without breaking backwards compatibility.

                                  People may not be keen to convert their modules if there are too many users. It’s not worth the breakage.

                                  1. 1

                                    There hasn’t been an active push to migrate existing modules, so it’s mainly up to the module maintainer to adopt it. nixos-unstable has 1272 entries, vs 1227 in 22.11

                                  1. 3

                                    I wish log levels would disappear because nobody can agree on when to use “warning”, “debug”, “info” or “error”. It depends on the audience that is reading the logs. How can a library decide on the log level if it doesn’t have the full context?

                                    The reason they exist is because of the limitations of old logging systems. With structured logging, we can tag lines with more context. Is it a logic error or a user input error? Who is the audience for the message?
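
                                    As a sketch, a structured line could carry that context explicitly instead of a level (all field names invented):

                                    {"msg": "invalid email address", "kind": "user-input", "audience": "support", "component": "signup"}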

                                    1. 3

                                      In a previous workplace, we had our log levels defined as:

                                      • page on call
                                      • for the morning (aka log and let someone see it when they’re in the office)
                                      • informational
                                      • ddos logging service (aka debug)

                                      Nowadays, I wish people would stop implementing logging at all and rather use traces; you can get much richer information from them, and can define alerting based on more interesting or complex conditions.

                                      1. 1

                                        Yeah. It’s a mismatch: the person writing the level and the person consuming it can have totally different goals in mind. Dave Cheney argued the only log levels should be info and debug. I think debug needs a further filter so you can say “debug, but only the parts of the app related to the thing I’m debugging now (a package, a request, whatever)”. Structured logging gives you ways to do that (only show logs with key x=y), but it’s still another step on top, versus the myapp --log=warn that many apps support.
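
                                        With JSON-lines output that filter is a one-liner, e.g. with jq (myapp and the component field are hypothetical):

                                        myapp --log=debug | jq -c 'select(.component == "billing")'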

                                      1. 5

                                        Even better would be to have a step that “activates” the nix environment.

                                        - name: "Load environment"
                                          run: nix develop -c bash -c 'echo $PATH' > "$GITHUB_PATH"
                                        # Now you can use all the dependencies
                                        - name: Run prettier
                                          run: prettier --write .
                                        
                                        1. 7

                                          PATH isn’t the only environment variable nix develop sets, especially when compilers and things like pkg-config are involved.

                                          1. 3

                                            You can do something similar for Makefile dependencies. I wrote a little about it in https://t-ravis.com/post/nix/nix-make/ (using nix-shell) but basically:

                                            export PATH := $(shell nix-shell -p hello --run 'echo $$PATH')
                                            
                                            .PHONY: all
                                            all:
                                            	@echo nice hello
                                            	hello --version
                                            
                                            1. 1

                                              Oh that’s a nice trick

                                          1. 5

                                            We use nix very conservatively. We only use it for managing local developer environments, i.e. build toolchains and other CLI tools (ansible, terraform, etc). That has worked out amazingly for us.

                                            I’m in general a lot more skeptical about nix for production. You clearly don’t get the kind of support you would from, for example, Ubuntu’s packages. There’s no “LTS” as far as I know for nix, merely the stable NixOS release. That being said, nixpkgs tends to be way ahead of other package managers’ versions of software.

                                            We’ve started messing around with using nixpkgs’ docker tools for some web projects. That would be the first time we’d be using nix in our production environment.

                                            In general, it’s really easy to go overboard with nix and start using it really inappropriately. But if you use some discipline, it can be an amazing tool. It’s completely solved our python problems related to installing ansible. That’s invaluable.

                                            1. 6

                                              LTS is something that comes up regularly, and I sincerely don’t know if it should exist or not.

                                              On one hand, it seems to be something that corporations want to have. Looking deeper than “because that’s what other distros do”, the reasons seem to be a mixed bag.

                                              Upgrades are much less risky in NixOS. Most issues are generally caught at eval or build time. And if it fails at runtime, it’s easy to roll back. Something that was a milestone on another distro becomes a ticket.
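
                                              Concretely, undoing a bad upgrade is a single command on NixOS:

                                              sudo nixos-rebuild switch --rollback  # switch back to the previous system generation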

                                              The company has to pay the upgrade price more often, but it’s also a benefit to them not to be stuck behind old versions. In that regard, it might be possible that having NixOS LTS releases becomes a disservice to corporations.

                                              1. 4

                                                This is a super interesting topic to me. I can give you some more context on the “corporations want it” part. We’re a federally regulated business. I need to be able to say to regulators, “yes, when this CVE comes out, I’ll be able to upgrade this package in no time.” Often that implies that I need to be able to point at another company that violated their SLA if they didn’t upgrade the package (eg. Canonical for Ubuntu LTS). I’m very confident that in practice nixpkgs will often get the package upgrades faster than Canonical can push them out, at least on unstable, but it’s really that legal infrastructure that I need. There are companies like Tweag and such that provide support packages? But it still seems really shaky to me.

                                                I hope that provides some more insight into what’s going on. Honestly, we’re still exploring it, and maybe it’s a solved problem, but I just don’t know.

                                                Also, things like rollbacks would imply rolling back security updates. If everything gets changed with a rollback, then you’re taking away important changes. I often want to roll back just the application code, but not dependent packages. This is pretty straightforward to set up with nix, afaik, but it’s still non-trivial.

                                                1. 3

                                                  Can you point to the capability and reality of applying overrides and patches as evidence of rapid response capability?

                                                  1. 1

                                                    Good question. That certainly helps! But we’d be doing everything ourselves. Also, you can eventually accumulate a lot of overrides/overlays such that it’s quite hairy to mess with stuff.

                                                    1. 2

                                                      It’s fairly ergonomic to pull some specific packages from a different release channel, this site gives convenient copypasta: https://lazamar.co.uk/nix-versions/
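
                                                      For a one-off, you can also point nix-shell at an entirely different nixpkgs revision (branch name as an example; the site above gives exact commit pins):

                                                      nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-22.11.tar.gz -p hello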

                                                      One approach I’ve used in the past is maintain a separate generated TOML/JSON file with overrides, and pull that in from nix. The overrides file can be managed by some other script/process. In this particular case, I’d mainly want an expiration time for an override to get removed once it’s no longer valid.

                                                      Also you might like this recent post about all the various ways to override a package, with your own patches and such: https://bobvanderlinden.me/customizing-packages-in-nix/

                                                      And one more note that might bring comfort: You probably know this, but the only distinction between unstable and stable nixpkgs channels is that unstable is rolling releases but stable is discrete releases. Aside from that, I don’t think there’s any increased “stability” guarantee – they both pass the same automated testing suites. (Someone please correct me if I’m mistaken.)

                                                2. 3

                                                  If a company says “we support RHEL 7” (so effectively LTS), that’s what they support, and they can stay on those versions of dependencies for ages. It doesn’t matter if the upgrades are risky or not. It’s not a technical issue.

                                                  1. 2

                                                    On the subject of LTS: Upgrades can be less risky in Nix but I have been burned several times by now when upstream introduces a bug in a release which I proceed to hit the next day when installing a new system. I hate rolling-release systems as a consequence.

                                                    I work in robotics, so for my work the benefit of riding the bleeding edge of software is pretty low, the potential costs of failure are very high (expensive hardware destroyed), and the cost of upgrading is also high. Even if the actual upgrade doesn’t take much work, there’s lots of integration, simulation, and real-world testing that has to be repeated. We also end up having to use hardware that has fairly limited software support, not all of it open-source, so in that case we are really stuck with the particular OS release that a vendor provides. (Fuck you, NVidia.)

                                                    That said, we could potentially still benefit from Nix quite a lot, and I should play with it more someday. But we’d still end up essentially cutting our own LTS releases.

                                                1. 1

                                                  I’m the author of the post, if anyone has questions.

                                                  1. 5

                                                    I’m not sure that I understand the problem that you’re trying to solve. If I am the user of a F/OSS project, I either buy an SLA with the maintainer, employ them to consult, or accept that anything that I want is best-effort with no obligation. As a maintainer, I will prioritise things that I think are interesting or where someone is paying me to do the work. Adding a layer of indirection to that doesn’t seem to address any of the problems that I actually have.

                                                    The thing that I really want is a micropayment and escrow system, so if ten thousand people all want to pledge 10¢ for a particular issue to be fixed, the person who raises a PR that fixes the issue can get $1,000.

                                                    1. 3

                                                      I think turbosrc is meant for bigger projects.

                                                      For example, in nixpkgs we have a lot of contributors who help with code review. For an external member sending their first PR, it’s not always obvious which review they should weigh the most. They don’t know who is influential in the project. Some point system like that might help make it clearer.

                                                      1. 1

                                                        Actually, nixpkgs is one of the archetypes of the many use cases that we studied. Sorta shocked you picked up on that so quickly.

                                                        Smaller projects, however, will also find Turbosrc useful because, all things being equal, a potential contributor will prefer to get something rather than nothing. VotePower is more than just something. Imagine two identical forks with the same level of community, for argument’s sake. But only one offers VotePower - they’ll grow their community faster and deeper.

                                                        1. 2

                                                          It’s probably just a coincidence. I have spent quite a bit of time contributing to nixpkgs and trying to better identify the various pains I encountered or saw in the process.

                                                      2. 2

                                                        The thing that I really want is a micropayment and escrow system, so if ten thousand people all want to pledge 10¢ for a particular issue to be fixed, the person who raises a PR that fixes the issue can get $1,000.

                                                        There have been multiple attempts at this, like Bountysource. I wonder why they never really took off; perhaps because they weren’t integrated enough with forges like GitHub?

                                                        1. 4

                                                          I’ve not seen one that made it trivial for me to contribute a token amount. There are a very small number of bugs (mostly feature requests) that I’d be happy to throw a few hundred dollars at. There are a few that I’d throw a couple of dollars at. There are a lot that I’d happily throw a few cents at, but which probably also affect thousands of other people who might be willing to pitch in a similar tiny token amount and have it add up to enough to motivate someone to do the work. All of the ones that I’ve seen have been very high friction. I’d like something where I could just click something in a GitHub issue to pledge the amount, have a single credit card transaction each month to collect all of the payments, and then have the money verifiably available for someone who wants to start working on the issue.

                                                          1. 3

                                                            It’s because the decision-making process inside companies is incompatible with it.

                                                            A top-level executive relies on connections with people to make spending decisions. They don’t have the time to go on some platform or algorithm. Even if they delegate the funds lower down, they need somebody they can blame if it doesn’t work out as expected.

                                                            1. 1

                                                              It’s because the decision-making process inside companies is incompatible with it.

                                                              The process doesn’t seem very different from regular bug bounties though, and a lot of companies participate in that.

                                                              1. 2

                                                                Even bug bounties are hard to sell to companies. If it doesn’t directly impact their bottom line, they have difficulty convincing themselves that security matters. And they are probably right, given the small amount of blowback they get when that happens.

                                                                Since the main driver is money, you need to demonstrate that they will spend X and get back Y, where Y > X. The easiest case is when the problem you’re solving directly impacts a sale; further from that, it becomes fuzzy. Then, very low on the line, you have the discretionary and feel-good budgets, which exist but will always be pretty small.

                                                          2. 1

                                                            Actually, I think your idea about a micropayment and escrow system is cool. Why couldn’t VotePower holders escrow VotePower to fix PRs, too?

                                                            And what about getting more people motivated to work on your projects by offering VotePower? When choosing between two identical projects (e.g. a fork) with the same communities, for argument’s sake, I’d choose the one that gave me VotePower for contributing over the one without. Something is better than nothing. It’s an edge.

                                                            Turbosrc for bigger projects with a lot of contributors makes it possible to do all sorts of things not imagined yet, per the blog post. Smaller projects can get momentum by incentivizing contributions until they reach critical mass like that, and enjoy ‘social-driven automation’. Of course, some projects are extremely specialized or don’t benefit from large communities.

                                                            About SLAs. We can all agree, I’m sure, that the driver of FOSS is that it’s free to users. SLAs are exceptions for when someone is demanding some customization not useful to others, needs increased guarantees, or wants to be able to sue somebody if things go wrong. Grant you that. However, most companies will do SLAs for only very limited things. There is a huge gap, as most users are there because it’s reliable and free software. Most prefer to just have a feature added with a pull request than some customization on the side; that way you’re not managing conflicting versions as upstream advances. Turbosrc drives pull requests.

                                                            1. 2

                                                              Why couldn’t VotePower holders escrow VotePower to fix PRs, too?

                                                              You could, but then you’re building a parallel currency with all of the problems that entails. Worse, you’re actually building an ecosystem of parallel currencies, one per project. This leads to someone building derivatives markets (is 1 VotePower on the Linux kernel more valuable than one on Chrome? If I can assign them to third parties then I can trade one for the other, and now you have exchanges and so on). At this point, why not use actual currency? The system that I want would use a temporary token only for aggregation: when I offer five cents to a project, I don’t want to have to spend five cents to buy some of that project’s scrip, because the transaction fees would be too high. I want a central entity to aggregate all of these amounts, process a single credit card transaction for me at the end of the month, and make the money available in large chunks to the recipients. I want currency to be fungible.

                                                              Any time you have an idea that involves reinventing corporate scrip, it’s a good idea to talk to an economist.

                                                              1. 1

                                                                I know exactly what you mean. We totally understand. VotePower isn’t a crypto token, so that solves all that right there. It’s not a currency. If it’s not Web3, there are no exchanges or anything for any crypto or cash. They’ll just be super-useful points, like GitHub Stars, Stack Overflow or Lobste.rs points, but with way more power to them - VotePower on pull requests.

                                                                For a future Web3 fork of Turbosrc, we already understand the capabilities there and the economics better than anyone, because we had to look at it and no one else has. Not because we’re the smartest or anything. We’ll learn more, as will others, from a Web3 alpha. There is a clear path.

                                                                Turbosrc isn’t blockchain so I don’t want to confuse people who don’t care about Web3 (most of lobsters) by getting into blockchain stuff here.

                                                          3. 3

                                                            Has there been an open source project helped by Turbosrc? How did that work?

                                                            1. 2

                                                              This is the alpha launch, so we can’t wait to see the examples. At the moment, I can only speak about ourselves.

                                                              Reibase, the original creator, is dogfooding Turbosrc. Everyone involved is incentivized by owning VotePower on the projects we launch. So yes, it has helped us at Reibase. We have two full-time developers, including myself, and many others who helped in less than full-time ways to get where we are, without salary. That’s basically 99% of open source (free work), but few projects have full-time ‘free workers’ plus others helping for months to get off the ground. And we started to go through the VC funding process. They see value in VotePower. Sponsors, backers, or contributors will be motivated by getting VotePower on your projects if they’re good. Giving people anything is incalculably more motivating than nothing.

                                                          1. 15

                                                            The post would be more convincing if it included a list of CVEs as evidence. The number of open issues shows that the project is quite large; it doesn’t necessarily indicate security holes.

                                                            The line-count argument isn’t very strong either, since each component usually replaces other software pieces. Pieces that were usually written in the 90s and also lacked good coding practices.

                                                            That being said, I think the author is right that the systemd project might be tempted to take shortcuts and not define clear interfaces between all the components, creating unnecessary coupling in the process.

                                                            1. 10

                                                              I took a look at Debian’s bug trackers for source packages systemd and sysvinit.

                                                              The first question is: should all issues in initscripts be attributed to sysvinit? On the one hand, that’s a reasonable interpretation: if you have a problem with your init system, it doesn’t matter all that much to you whether it’s a core problem or in the standard script for that subsystem. Some bugs for systemd are basically of the same order: they only appear if you are using that subsystem. But the SV initscripts bugs are usually isolated: you don’t need to fix anything in init itself to solve the problem, just fix the script. Most of the equivalent systemd bugs look (to me, in a brief survey) to be problems that arise in one or more of the systemd executables. I suppose that’s the difference between code-as-config and declarative config.

                                                              sysvinit is also older than systemd, so there has been more time to find and report and fix bugs. Still, about 80% of the bugs reported against sysvinit are initscripts bugs, not in an executable. About half of the systemd bugs look like they are in a similar category.

                                                              Looking at CVEs: perhaps 43 of the 73 CVEs attributed to systemd (and it should be 72, one of them is about an unrelated package with the same name…) are issues in the systemd executables rather than attributable to specific services managed by systemd.

                                                              I am unable to find a CVE related to sysvinit except one in Red Hat from 1999; “init” turns up 343 entries but I couldn’t find any (in a brief visual scan, and several other keyword attempts) that involved sysvinit.

                                                            1. 2

                                                              From the writer’s perspective, sometimes it’s easier to write a Twitter thread than a blog post. With a blog post, there are many decisions to make: how to arrange the text, its length, the choice of tone and words. Once a Twitter thread has been started, there is no way back; each paragraph is locked in, and you must keep going. This might be one reason some people use it, especially if they are eternal procrastinators.

                                                              1. 3

                                                                Next, cross the edge and add browser rendering for Content-type: text/markdown :)

                                                                1. 3

                                                                  I did that already! Gemtext too. Not sure if it is still working, though, especially on Chrome (it definitely won’t be once MV2 is gone). https://github.com/easrng/txtpage

                                                                  1. 1

                                                                    I wish!

                                                                  1. 6

                                                                    I really wish systemd didn’t insist on being PID 1. Then it would be the perfect answer to “how do I run multiple processes in a single container?”

                                                                    1. 7

                                                                      Yes, you don’t know what you are missing until you’ve used an init system that supports arbitrary recursion, like the Genode init.

                                                                      1. 3

                                                                        If you’d like, could you say a little more about what value you’ve found in that sort of thing?

                                                                        1. 12

                                                                          In Genode the init component basically enforces security policies, so you can create a tree of sub-inits that are successively more locked down, and there isn’t any escape hatch to escalate privilege. File-system and networking are in userspace, so managed by init, and you can arbitrarily namespace networking and file-systems by isolating instances in different inits.

                                                                          1. 4

                                                                            This means the same process manager can be used on a per-project level. You could write your systemd units for development, which would be pretty close to those for the system.

                                                                        2. 3

                                                                          What semantics of systemd do you think are better suited inside containers than other (perhaps less opinionated) supervisor inits?

                                                                          1. 4

                                                                            Familiarity, and the ability to write daemons for both in-container and out-of-container use.

                                                                            1. 2

                                                                              And being able to use existing software which expects to be launched by systemd!

                                                                          2. 3

                                                                            It’s also worth noting that podman just has a ‘--systemd=true|false|always’ flag that allows this behaviour.
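
                                                                            A minimal example, assuming an image that ships an init, such as the ubi-init one described below:

                                                                            podman run --rm -d --systemd=always registry.access.redhat.com/ubi8/ubi-init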

                                                                            1. 1

                                                                              From the RHEL containers manual (emphasis mine):

                                                                              The UBI init images, named ubi-init, contain the systemd initialization system, making them useful for building images in which you want to run systemd services, such as a web server or file server. […]

                                                                              Because the ubi8-init image builds on top of the ubi8 image, their contents are mostly the same. However, there are a few critical differences:

                                                                              ubi8-init:

                                                                              • CMD is set to /sbin/init to start the systemd Init service by default
                                                                              • includes ps and process related commands (procps-ng package)
                                                                              • sets SIGRTMIN+3 as the StopSignal, as systemd in ubi8-init ignores normal signals to exit (SIGTERM and SIGKILL), but will terminate if it receives SIGRTMIN+3

                                                                              ubi8/ubi-init in the Red Hat Container Catalog. Red Hat’s UBI images are free for everyone. I am not affiliated with Red Hat.

                                                                              1. 1

                                                                                …I am affiliated with Red Hat and didn’t know this.

                                                                                Whoops. Thank you!

                                                                            1. 1

                                                                              Does anybody back up sqlite files with restic? I noticed that it’s not just as simple as pointing restic to the folder that contains the DB, as the content might get corrupted.

                                                                              1. 3

                                                                                it’s not just as simple as pointing restic to the folder that contains the DB

                                                                                Definitely don’t do that.

                                                                                https://www.sqlite.org/backup.html
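
                                                                                Roughly: let SQLite cut a consistent snapshot first, then point restic at that (paths and repo location are made up):

                                                                                sqlite3 /var/lib/app/data.db ".backup '/var/backups/data.db'"
                                                                                restic -r /srv/restic-repo backup /var/backups/data.db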

                                                                                1. 3

                                                                                  You should be backing up file system snapshots if you want to avoid backup smearing; the backup tool can’t coordinate with modifications in progress.

                                                                                  1. 1

                                                                                    That’s in general a very bad idea. There are situations when you can back up a running database: when the database makes sure that a finished write won’t leave it in an inconsistent state (most serious databases do), and the file system is able to take a snapshot at a consistent point in time, not in the middle of a write (ZFS can do that, for example).
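
                                                                                    On ZFS that looks roughly like this (dataset and repo names are made up):

                                                                                    zfs snapshot tank/db@nightly
                                                                                    restic -r /srv/restic-repo backup /tank/db/.zfs/snapshot/nightly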

                                                                                    And good software tends to have its backup procedures documented. I’d strongly recommend reading them for SQLite, but also for Postgres (it’s way too common to just go with an SQL dump, which has a great number of downsides).

                                                                                    Don’t blindly back up anything that might write. It could mean that your backup is worthless.