1. 3

    I will be interested to see if this is the only notice which customers receive.

    On the bright side, we just got the final incentive kick to complete our migrations to GitHub Actions.

    1. 6

      Fun fact: If you try to do the same trick on GitHub Actions, it will actually run to completion without complaining and give you a heart attack, but the secrets lookups will all turn out to have quietly returned the empty string. >_>

      1. 1

        I can imagine a world where failing to access a variable discloses the existence of that secret in the first place—returning the empty string seems like a decent enough compromise.

        1. 3

          They could just say “that secret either doesn’t exist or you don’t have access to it”, which is a pretty standard approach. It would be useful if it were raised as a flag in the UI on a forked PR’s run.

          1. 1

            To be able to try this, you need to be able to fork the repo or submit a PR. Which means that you can look inside .github/workflows/ inside the repo. Which means that you can see YAML directives such as:

                env:
                  FOO_TOKEN: ${{ secrets.FOO_TOKEN }}
            

            At which point, you know the secret (probably) exists.

            1. 1

              You can define organization-wide secrets, some of which may be used in private repositories.

              1. 1

                The basis for this OP is leaking of secrets from public repositories. So in the context where someone could look at the configuration and try to attack, the variable’s existence is already disclosed.

                It’s probably worth referencing https://blog.teddykatz.com/2021/03/17/github-actions-write-access.html (which I’m pretty sure was on Lobste.rs at the time) for gritty details on which secrets are available in which contexts for pull-requests:

                How does GitHub Actions handle pull requests?
                […]
                However, it’s important that the author of a pull request can’t access the repository’s secrets (e.g. by updating a workflow file to print out the secrets instead of running tests). To address this issue, GitHub provides two different ways to trigger Actions workflows from pull requests:

                • The pull_request event simulates a merge of the pull request, and triggers Actions workflows based on the configuration and code at the merge commit. This is intended for e.g. running tests, and verifying that the code would still work if the pull request was merged. However, since the code in the pull request is potentially malicious, workflows triggered by the pull_request event are run without access to the repository’s secrets.
                • The pull_request_target event triggers Actions workflows based on the configuration and code at the base branch of the pull request. Since the base branch is part of the base repository itself and not part of a fork, workflows triggered by pull_request_target are trusted and run with access to secrets. This is intended for e.g. adding comments and labels to new pull requests (which requires a GitHub API token).
      1. 8

        I agree, but we just do it by username: every app should have its own access controls, so you need a different user for each app anyway.

        i.e. we would just make the username: currency-conversion-app or stock-exchange-rate-importer

        for multiple processes for a given app, again each process generally wants its own ACLs, so usernames might be currency-conversion-app-web and currency-conversion-app-fetcher or something.

        No extra training required, just teaching devops how to name a connection. But it’s neat that PG and friends let you do that!

        The only upside to naming the connection would be if you put in the remote host and maybe the PID, in case you have more than one webserver, but you would basically get that information anyway based on the source IP. Is it easier to get that info from the source IP or from the connection name? shrugs For all other programs, you are pretty much forced to get it from the source IP. So lowest common denominator wins?
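
        For anyone who wants the connection label as well: in PostgreSQL it’s just the application_name connection parameter, which then shows up in pg_stat_activity. A minimal libpq sketch (the host/database strings are made-up placeholders, reusing the hypothetical app name above):

            /* Sketch: label a PostgreSQL connection via application_name so it
             * is visible in pg_stat_activity. Connection details are made up. */
            #include <stdio.h>
            #include <libpq-fe.h>

            int main(void) {
                PGconn *conn = PQconnectdb(
                    "host=db.example.com dbname=currency "
                    "user=currency-conversion-app-web "
                    "application_name=currency-conversion-app-web");
                if (PQstatus(conn) != CONNECTION_OK) {
                    fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                    PQfinish(conn);
                    return 1;
                }
                /* SELECT application_name FROM pg_stat_activity would now
                 * report currency-conversion-app-web for this session. */
                PQfinish(conn);
                return 0;
            }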

        1. 2

          Author of the article here.

          I agree with you: what you describe should be the standard, and it even has security benefits. Using your own username is also the workaround when a system doesn’t support connection naming at all. However, having worked in several companies and seen many more systems, I can tell you that this is often considered “overhead”. Why? Because here Dev and Ops often don’t play together: Dev is adjusting their DB calls, Ops wants to avoid adjusting the user’s permissions every release… You know the game. Sad, but true.

          Admittedly, connection naming is often not implemented in these environments either.

          But I am with you. I would even go one step further and advocate that every application document its DB commands, e.g. currency-conversion-app-web does only SELECT and INSERT. This would enable (dev-)ops to limit the permissions of each user to exactly these operations. I have rarely seen this, even in open-source software.

          1. 2

            In an ideal world, people would design their DBs to support multiple users with ACLs correctly used between them.

            I am happy that so many systems let us cope with not living in an ideal world, by providing an alternative label which can be client-supplied.

            1. 1

              Author here.

              I share your happiness. The design of the database is one side; the design of the application is another. I have seen many systems that have several different use cases for the database connection (like @zie describes with -web and -fetcher). However, I have not seen different database connections to the same database with different users; often they share the same database connection pool.

          1. 2

            20 years ago when 64 MB was a nice amount of RAM for a sysadmin’s desktop, using Fvwm as a window manager meant I could use the FvwmM4 module, and have desktop menus which could open terminals providing hostnames and which would open windows which ssh’d to those hosts, etc etc. DRY across my SSH and window manager settings.

            1. 4

              Nit: the QotD example is missing a systemctl start fortune.socket (tested with systemd 245, per current Ubuntu LTS).

              It is a bit annoying that inetd stuck to fd 0 for wait-mode stream services, requiring an FD song-and-dance to adjust to something more “normal” for each connection. DJB got this right, IIRC, though I never actually wrote to that API.
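
              For readers who haven’t met it: in inetd’s wait mode the listening socket arrives on fd 0, so the daemon accepts from fd 0 itself and then shuffles each accepted fd into place. A rough sketch of that dance (error handling and SIGCHLD reaping omitted):

                  /* Sketch of an inetd wait-mode stream service: fd 0 is the
                   * LISTENING socket, not a connection. */
                  #include <sys/socket.h>
                  #include <unistd.h>

                  int main(void) {
                      for (;;) {
                          int conn = accept(0, NULL, NULL);  /* accept on fd 0 */
                          if (conn < 0)
                              continue;
                          if (fork() == 0) {
                              /* give the child a "normal" stdin/stdout pair */
                              dup2(conn, 0);
                              dup2(conn, 1);
                              close(conn);
                              /* ... handle the request, or exec a handler ... */
                              _exit(0);
                          }
                          close(conn);
                      }
                  }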

              1. 3

                Thanks! You’re quite right. Fixed now.

              1. 1

                Some years ago, a hardware engineer at Apple showed off to me their pet (personal, home) project, which was a VAX on an FPGA, booting and running the Incompatible Time-Sharing System and viewing some parts of the source code. It was eye-opening at many levels.

                I still blink at learning about the richness of the instruction set.

                1. 2

                  Aren’t env variables the normal recommended way to inject values from hashicorp vault?

                  1. 1

                    How so? Vault doesn’t inject anything into the ENV for you; that’s on you.

                    It can read the vault token from VAULT_TOKEN, but by default it uses ~/.vault-token for its own token.

                    It does use VAULT_ADDR for where vault should connect to, but that’s not exactly a secret.

                    1. 1

                      Generally you are using Vault to store secrets for apps that are not vault-enabled. Thus env vars.

                      1. 1

                        Some people do that, but the ENV is not the only way to do it. Writing out to files, injecting via stdin, etc.

                    2. 1

                      It’s also the recommended way for 12-factor applications, if I understand correctly. However, you don’t write PASSWORD=hunter2 ./my_app.exe --yolo, but rather source .env; ./my_app.exe --yolo, where .env contains a bunch of export statements (which won’t ever appear in ps). It seems reasonable to me.

                      1. 2

                        Or

                        export PASSWORD=hunter2
                        ./my_app --yolo
                        

                        which behaves differently.

                        I just wonder what operating systems treat env vars as world readable.

                        1. 2

                          I think it works too, but now your secret is in your bash history. That can still be fine :)

                          1. 1

                            Historically: all Unix/POSIX systems did. The switch to “only same userid” is a 21st century change.

                            1. 1

                              It’s worth noting that exporting like this is something the post recommends against, since now any child process of my_app can read your secrets.

                              1. 1

                                Depends on if my_app forks off with the same env. It’s far from default in the languages I’ve used.

                                1. 1

                                  It looks like Python inherits by default using subprocess.run (the best practices API), so that’s a pretty popular language that inherits by default.
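
                                Since inheriting the environment is the default in common cases like that, one mitigation (a sketch in C, not something from the post): keep a private copy of the secret and scrub it from the environment before anything else gets spawned.

                                    /* Sketch: read a secret from the environment, keep a
                                     * private copy, and unset it before spawning children. */
                                    #include <stdio.h>
                                    #include <stdlib.h>
                                    #include <string.h>

                                    int main(void) {
                                        const char *v = getenv("PASSWORD");
                                        char *password = v ? strdup(v) : NULL;
                                        unsetenv("PASSWORD");  /* children spawned from here
                                                                  no longer inherit it */
                                        if (!password) {
                                            fprintf(stderr, "PASSWORD not set\n");
                                            return 1;
                                        }
                                        /* ... use password, then wipe and free it ... */
                                        memset(password, 0, strlen(password));
                                        free(password);
                                        return 0;
                                    }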

                        1. 4

                          The article read fine to me. I might mention overlay ports trees to manage meta-ports but that’s a preference thing.

                          As to Poudriere itself: I’d really love saner ways to completely disable X11 dependencies than having to use a Poudriere /usr/local/etc/poudriere.d/make.conf which has ${CURDIR...} guards to set FORBIDDEN (with exemptions for the blocks, because of things like irssi-themes being in x11-themes/).

                          Also, a way to get all the dependency paths leading to a given port, rather than just the first one, would be nice. I wrote a poudriere_status.py which can report to CLI or generate graphviz directives, but the dependency graph becomes a tree because only one inbound link is reported.

                          1. 3

                            Don’t use select() anymore in 2021. Use poll(), epoll, iouring, …, but for heaven’s sake don’t use select().

                            select() is significantly faster than poll() and if you need to accept() new connections, it’s faster than epoll() as well.

                            For many applications you are almost certainly much better off using multiple processes and SO_REUSEPORT.

                            io_uring is very exciting on the other hand, and I welcome the 1990s Windows NT IO model finally coming to Linux; I just wish the interface were nicer.
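
                            For the multiple-processes point above, a minimal SO_REUSEPORT sketch (the port number is an arbitrary placeholder; run several copies and the kernel spreads incoming connections across them):

                                /* Sketch: each worker process binds the same port with
                                 * SO_REUSEPORT; the kernel load-balances accept()s. */
                                #include <netinet/in.h>
                                #include <stdio.h>
                                #include <string.h>
                                #include <sys/socket.h>
                                #include <unistd.h>

                                int main(void) {
                                    int fd = socket(AF_INET, SOCK_STREAM, 0);
                                    int one = 1;
                                    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

                                    struct sockaddr_in addr;
                                    memset(&addr, 0, sizeof(addr));
                                    addr.sin_family = AF_INET;
                                    addr.sin_addr.s_addr = htonl(INADDR_ANY);
                                    addr.sin_port = htons(8080);  /* placeholder port */
                                    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                                        perror("bind");
                                        return 1;
                                    }
                                    listen(fd, 128);
                                    for (;;) {
                                        int conn = accept(fd, NULL, NULL);
                                        if (conn < 0)
                                            continue;
                                        /* ... handle conn ... */
                                        close(conn);
                                    }
                                }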

                            1. 4

                              select() is significantly faster than poll() and if you need to accept() new connections, it’s faster than epoll() as well.

                              I’ve not heard that before, do you know why? I learned about these at about the time kqueue was introduced in FreeBSD, so I never used select or poll in anger, but even in my undergrad UNIX course I was told ‘don’t use select’. Both were in POSIX 2001 and so poll was the lowest common denominator for all of the UNIX systems I’ve used and the thing I’ve used for fall-back code when kqueue wasn’t available.

                              1. 3

                                Would it be a good guess that you took your undergrad UNIX course in the… very late 90s, or more likely very early 00s :-D?

                                I suspect this is a curious case of system culture at work here. I was also told “don’t use select” but I didn’t learn network programming on Linux. On the other hand, I’ve seen at least a generation of fresh grads who only knew about select and poll because Linux’ epoll has a pretty troubled history that lots of universities with Linux-only labs didn’t want to expose undergrads to. At best, they knew there’s also epoll, which is faster, but has some problems, and they’d also maybe heard about kqueue and I/O completion ports, but they’d never used either.

                                There’s a whole generation of programmers that grew up “knowing” select is the only reliable choice. I don’t want to dispute the performance claims in this thread (I don’t think I’ve ever written software that had to handle more than 100 connections, let alone 10,000 :-) so I don’t know enough about the performance implications of either) – what I can say is that, for every person who’s heard “don’t use select” in their undergrad course, there’s at least one person who’s heard “don’t use kqueue/epoll/whatever unless you know what you’re doing” in their undergrad course.

                                1. 2

                                  I’ve not heard that before, do you know why?

                                  I presume you’re asking why is it faster? I know some of the reasons, but maybe not all of the reasons:

                                  1. You have a savings on syscall counts.

                                  2. Scanning a bit-array is faster than chasing a linked-list. A lot faster.

                                  Benchmarking this stuff is pretty tricky.

                                  But maybe you meant something else?
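
                                  For concreteness, the bit-array being scanned is select’s fd_set; a skeletal loop looks like this (a sketch: real code checks errno and handles EINTR properly):

                                      /* Sketch of a select() loop: the fd_set bitmaps are
                                       * rebuilt, copied to the kernel, and re-scanned on
                                       * every iteration. */
                                      #include <sys/select.h>

                                      void serve(int listen_fd, const int *conns, int nconns) {
                                          for (;;) {
                                              fd_set rfds;
                                              FD_ZERO(&rfds);
                                              FD_SET(listen_fd, &rfds);
                                              int maxfd = listen_fd;
                                              for (int i = 0; i < nconns; i++) {
                                                  FD_SET(conns[i], &rfds);
                                                  if (conns[i] > maxfd)
                                                      maxfd = conns[i];
                                              }
                                              if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                                                  continue;  /* e.g. EINTR */
                                              if (FD_ISSET(listen_fd, &rfds))
                                                  ;  /* accept() a new connection */
                                              for (int i = 0; i < nconns; i++)
                                                  if (FD_ISSET(conns[i], &rfds))
                                                      ;  /* read() from conns[i] */
                                          }
                                      }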

                                  1. 1

                                    I presume you’re asking why is it faster? I know some of the reasons, but maybe not all of the reasons:

                                    Yes, why select is faster.

                                    You have a savings on syscall counts.

                                    Not for select vs poll, these have the same number of calls. For select vs kqueue, you have fewer syscalls for kqueue but the select call has to recreate a load of state (which involves acquiring multiple kernel locks) on every call, whereas this state is persistent in the kernel with kqueue.

                                    Scanning a bit-array is faster than chasing a linked-list. A lot faster.

                                    None of these mechanisms involve a linked list. Select uses a bitmap, poll uses an array, kqueue involves an array for registering and then it’s up to the kernel to pick the optimal data structure for maintaining the state. With both poll and select, you need to look up the files in the file descriptor table (which involves lock acquisitions or RCU things, depending on the kernel), then lock the file structure, then query it. With kqueue, the pending events are registered in the kqueue object in the kernel when they appear and the kernel doesn’t need to acquire any locks to check for events on objects that don’t have pending events.

                                    Even between select and poll, it’s not clear that walking the data structure passed in from userspace is faster, unless the occupancy for select is high. Select will hit your branch predictor pretty hard if you’re scanning each bit and branching on it, so you’re likely to see some mispredictions, whereas the bottleneck from parsing the poll structure is more likely to be extra cache misses.

                                    1. 1

                                      For select vs kqueue, you have fewer syscalls for kqueue

                                      Incorrect. After accept() you need to add it to the kqueue with EV_SET and a kevent call to get notified. If you have a lot of new connections then this approaches double the number of syscalls, but if you don’t, you’re also likely to have few fds.

                                      None of these mechanisms involve a linked list.

                                      Incorrect. Look at what the kernel does around poll: https://elixir.bootlin.com/linux/v5.7/source/fs/select.c#L138
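
                                      Concretely, the per-connection registration described above looks like this on a BSD (a sketch; error handling omitted):

                                          /* Sketch: with kqueue, each accepted fd costs an extra
                                           * kevent() call to register interest in it. */
                                          #include <sys/types.h>
                                          #include <sys/event.h>
                                          #include <sys/time.h>
                                          #include <sys/socket.h>

                                          int register_conn(int kq, int listen_fd) {
                                              int conn = accept(listen_fd, NULL, NULL);
                                              if (conn < 0)
                                                  return -1;
                                              struct kevent ev;
                                              EV_SET(&ev, conn, EVFILT_READ, EV_ADD, 0, 0, NULL);
                                              /* the extra syscall per new connection: */
                                              return kevent(kq, &ev, 1, NULL, 0, NULL);
                                          }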

                                2. 4

                                  This claim is missing indicators of scale.

                                  For a very few FDs, I can well believe that select() is faster than the others. But once you start dealing with hundreds or thousands of FDs, the repeated copying across the user/kernel boundary of the complete list of FDs on every select() call drowns out everything else and your performance suffers.

                                  So there is a fixed overhead to using an epoll()/kqueue() setup for the extra system calls for state management, but after that they scale significantly better.

                                  1. 2

                                    This claim is missing indicators of scale.

                                    At what “scale” do you think what I said doesn’t hold true? Do you have benchmarks explaining exactly what you mean?

                                    For a very few FDs, I can well believe that select() is faster than the others. But once you start dealing with hundreds or thousands of FDs, the repeated copying across the user/kernel boundary of the complete list of FDs on every select() call drowns out everything else and your performance suffers

                                    select() doesn’t copy “the complete list of FDs”: 1024 file descriptors is 128 bytes, because each bit is given a position in the fd_set. poll(), on the other hand, chases 1024 pointers through a linked list.

                                    So there is a fixed overhead to using an epoll()/kqueue() setup for the extra system calls for state management,

                                    If you are dealing with a large number of accept() calls, the “fixed” overhead isn’t fixed at all.

                                    but after that they scale significantly better.

                                    There’s that word again. What do you mean by “scale”?

                                    The only benchmark I’m aware of that has epoll() beating select() is when you have mostly-idle connections, and it’s from 2004, and I’ve never been able to reproduce it:

                                    https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.538.1749

                                    1. 4

                                      What linked list? poll(2) takes a flat array of struct pollfds. struct pollfd doesn’t have any pointers in it. There isn’t a linked list here on the userland side of this interface. (I have no idea about the kernel.)

                                      It’s a little bit of a shame that it isn’t SoA layout so walking the array looking at revents fields isn’t as dense as a bitset. Arguably a waste of cache line fill rate. But it’s not a “throughput dropped to 1/memory latency” hell like a linked list puts you into.

                                      I suspect the limitation of fds having to be under 1024 (FD_SETSIZE) is a large reason why people who write or teach networking tutorials say you might not want to use it. That lesson then gets half-remembered for years.

                                      An example I’ve heard of that causing crashes in production: https://rachelbythebay.com/w/2011/06/02/fdsetsize/ Process has about 1020 fds open for different files that it wants to access. It accept()s a connection, which gets fd 1025. It then tries to select(), which goes poorly. (And it ideally should have segfaulted when it tried to FD_SET() just before the select() call.)

                                      If it had been written with poll(2) instead, then…

                                      struct pollfd pfd;
                                      pfd.fd=1025;
                                      pfd.events=POLLIN | POLLOUT;
                                      int result=poll(&pfd, 1, -1);
                                      

                                      … would probably have been fine.

                                      Aside, I’ve used kevent/kqueue once or twice and as far as I remember it’s pleasant enough. Kind of a shame Linux didn’t copy it, then we could have the same interface on every unix-like.

                                      1. 3
                                        1. 1

                                          I’m surprised that cargo has that many fds open at once. Thank you for the interesting tidbit

                                        2. 1

                                          What linked list?

                                          The one the kernel uses to implement poll() around https://elixir.bootlin.com/linux/v5.7/source/fs/select.c#L138

                                          If it had been written with poll(2) instead, then…

                                          Another option would simply be to allocate the fd_set* on the heap according to the number of file descriptors used instead of stack-allocating it.

                                          This is sadly a very common mistake.

                                          1. 1

                                            Ew. But eh, it’s probably fine: the first sixteen or so entries are allocated in a single inline chunk. This interface isn’t great for large numbers of mostly idle sockets anyway.

                                            1. 1

                                              No it is not, and to the best of my knowledge, if you have a large number of idle sockets, epoll/kqueue will beat select().

                                            2. 1

                                              Another option would simply be to allocate the fd_set* on the heap according to the number of file descriptors used instead of stack-allocating it.

                                              That’s not legal per the documentation. e.g. FD_ZERO doesn’t take a size parameter, just a pointer and nobody’s actually supplying you with a guarantee that the number of bytes you need to allocate is (nfds + CHAR_BIT - 1) / CHAR_BIT.

                                              1. 1

                                                That’s not legal per the documentation

                                                It’s perfectly legal: Ask any lawyer if you’ll get arrested for doing it, and they’ll wonder why you’re even bothering to ask. :)

                                                POSIX allows an implementation to define an upper limit, advertised via the constant FD_SETSIZE, on the range of file descriptors that can be specified in a file descriptor set, but it does not require an implementation to define an upper limit, and in fact no unixish system you’re likely to run into does. On BSDs (including OSX), you can even #define FD_SETSIZE before including <sys/select.h> to get fd_set objects of the desired size.

                                                I wouldn’t worry too much about hypothetically POSIX-conforming systems like Windows NT 4, because POSIX conformance doesn’t mean it will work, or that it will work fast, especially for stuff like this: you’re not even guaranteed to be able to get 1024 file descriptors in POSIX.

                                                FD_ZERO doesn’t take a size parameter,

                                                Correct. You should use memset instead of FD_ZERO if you are doing this.
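
                                                A sketch of that technique; note the size computation leans on the BSD-style fd_mask/NFDBITS layout, which is exactly the “not documented” part disputed below:

                                                    /* Sketch: heap-allocate a bitmap wide enough for maxfd
                                                     * descriptors and use it as an oversized fd_set. Assumes
                                                     * fd_set is a plain array of fd_mask words, true on the
                                                     * usual unixes but not promised by POSIX. */
                                                    #include <stdlib.h>
                                                    #include <string.h>
                                                    #include <sys/select.h>

                                                    fd_set *fd_set_alloc(int maxfd) {
                                                        size_t bytes = ((size_t)maxfd / NFDBITS + 1) * sizeof(fd_mask);
                                                        if (bytes < sizeof(fd_set))
                                                            bytes = sizeof(fd_set);
                                                        fd_set *set = malloc(bytes);
                                                        if (set)
                                                            memset(set, 0, bytes);  /* FD_ZERO only knows FD_SETSIZE */
                                                        return set;
                                                    }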

                                                1. 1

                                                  You still don’t actually have a macro or function or anything that is documented to work for getting you the right size for the fd_set allocations. This seems like a lot of unportability to put up with for something that isn’t even going to be particularly fast.

                                                  1. 1

                                                    This seems like a lot of unportability to put up with for something that isn’t even going to be particularly fast.

                                                    Yes. Un-portable to systems that don’t exist, and “particularly fast” is always relative. Benchmark, don’t speculate.

                                                    For my use case it adds about 10% on qps, which is basically a 10% cost savings.

                                                    You don’t have to like it, and heck, I don’t like it either, but some people have work to do, and this shit matters.

                                          2. 3

                                            I don’t have current numbers, only the classic C10K problem paper and any updates linked from there. (Sorry, I don’t mean to be dismissive, I’m just very busy today and dashed off a quick reply earlier without spending time on it.)

                                            The poll(2) system call doesn’t work well and it’s been decades since I last saw anyone recommend it (1990s). The epoll(2) and kqueue(2) calls, on the other hand, don’t have those problems. But managing all this is why we have libevent/libev2/whatever-new-hotness.

                                            I confess, I haven’t re-tested for myself in a very long time. I recall seeing select() scale poorly to even a couple of thousand connections and kqueue() fixing it.

                                            I think I was wrong to ascribe the cost of the complete list to the actual copying across the userland/kernel boundary. Now that I pause to try harder to remember, it’s the cost of the setting up of the kernel’s data structures that was dominant. The earlier design approaches tried to avoid keeping state inside the kernel across multiple system calls for select, feeling that state “on the other side” was wrong. Later work admitted that no, it’s better to keep state and reference a handle which you can use in add/subtract operations. Which feels much like some of the changes in NFS: people try hard to avoid remote-side state before discovering that the lack of state is causing more problems than it solves.

                                      1. 3

                                        Until reading this post, my shell startup raised the soft limit to the hard limit for descriptors, and a few other things. This is a well-written, well-reasoned post which led me to change my configs.

                                        1. 17

                                          Unfortunately, OpenRC maintenance has stagnated: the last release was over a year ago.

                                          I don’t really see this as a bad thing.

                                          1. 12

                                            Also, wouldn’t the obvious choice be to pick up maintenance of OpenRC rather than writing something brand new that will need to be maintained?

                                            1. 10

                                              There is nothing really desirable about OpenRC, and it simply does not support the required features like supervision. Sometimes it’s better to start fresh, or in this case with the already existing s6/s6-rc, which is built on a better design.

                                              1. 3

                                                There is nothing really desirable about OpenRC

                                                I’d say this is a matter of opinion, because there’s inherent value in simplicity and systemd isn’t simple.

                                                1. 5

                                                  But why compare the “simplicity” to systemd instead of something actually simple? Compared to OpenRC’s design choices, with its shell wrapping, a simple supervision design with a way to express dependencies outside of the shell script is a lot simpler. The daemontools-like supervision systems simply have no boilerplate in shell scripts, and they provide good features: tracking PIDs without pid files (and therefore reliably signaling the right processes), restarting services if they go down, and a nice, reliable way to collect the stdout/stderr logs of those services.

                                                  Edit: this is really what the post is about: taking the better design, making it more user-friendly, and implementing the missing parts.

                                              2. 3

                                                the 4th paragraph

                                                This work will also build on the work we’ve done with ifupdown-ng, as ifupdown-ng will be able to reflect its own state into the service manager allowing it to start services or stop them as the network state changes. OpenRC does not support reacting to arbitrary events, which is why this functionality is not yet available.

                                                also, the second to last graf

                                                Alpine has gotten a lot of mileage out of OpenRC, and we are open to contributing to its future maintenance while Alpine releases still include it as part of the base system, but our long-term goal is to adopt the s6-based solution.

                                                So they are continuing to maintain OpenRC while Alpine still requires it, but it doesn’t meet their needs, hence they are designing something new.

                                              3. 3

                                                I was thinking the same thing.

                                                I have no sources, but when was the last time OpenBSD or FreeBSD had a substantial change to their init systems?

                                                I don’t know enough to know why there’s a need to iterate so I won’t comment on the quality of the changes or existing system.

                                                1. 13

                                                  To my knowledge, there’s serious discussion in the FreeBSD community about replacing their init system (for example, see this talk from FreeBSD contributor and previous Core Team member Benno Rice: The Tragedy of systemd).

                                                  And then there’s the FreeBSD-based Darwin, whose launchd is much more similar to systemd than to either BSD init or SysVinit to my knowledge.

                                                  1. 4

                                                    this talk from FreeBSD Core Team member Benno Rice: The Tragedy of systemd).

                                                    This was well worth the watch/listen. Thanks for the link.

                                                  2. 8

                                                    I believe the last major change on FreeBSD was adding the rc-order stuff (from NetBSD?) that allowed expressing dependencies between services and sorting their launch order so that dependencies were fulfilled.

                                                    That said, writing a replacement for the FreeBSD service manager infrastructure is something I’d really, really like to do. Currently devd, inetd, and cron are completely separate things and so you have different (but similar) infrastructure for running a service:

                                                    • At system start / shutdown
                                                    • At a specific time
                                                    • In response to a kernel-generated event
                                                    • In response to a network connection

                                                    I really like the way that Launchd unifies these (though I hate the fact that it uses XML property lists, which are fine as a human-readable serialisation of a machine format, but are not very human-writeable). I’d love to have something that uses libucl to provide a nice composable configuration for all of these. I’d also like an init system that plays nicely with the sandboxing infrastructure on FreeBSD. In particular, I’d like to be able to manage services that run inside a jail, without needing to run a service manager inside the jail. I’d also like something that can set up services in Capsicum sandboxes with libpreopen-style behaviour.

                                                    1. 1

                                                      I believe the last major change on FreeBSD was adding the rc-order stuff (from NetBSD?) that allowed expressing dependencies between services and sorting their launch order so that dependencies were fulfilled.

                                                    Yep, The Design and Implementation of the NetBSD rc.d system, Luke Mewburn, 2000. One of the earlier designs of a post-sysvinit dependency-based init for Unix.

                                                      1. 1

                                                        I’ve been able to manage standalone services to run inside a jail, but it’s more than a little hacky. For fun a while back, I wrote a finger daemon in Go, so I could keep my PGP keys available without needing to run something written in C. This runs inside a bare-jail with a RO mount of the homedirs and not much else and lots of FS restrictions. So jail.conf ended up with this in the stanza:

                                                        finger {
                                                                # ip4.addr, ip6.addr go here; also mount and allow overrides
                                                                exec.start = "";
                                                                exec.stop = "";
                                                                persist;
                                                                exec.poststart = "service fingerd start";
                                                                exec.prestop = "service fingerd stop";
                                                        }
                                                        

                                                      and then the service file does daemon -c jexec -u ${runtime_user_nonjail} ${jail_name} ${jail_fingerd} ...; the tricky bit was messing with the internals of rc.subr to make sure that pidfile management worked correctly, with the process-finding logic handling that the jail is not “our” jail:

                                                        jail_name="finger"
                                                        jail_root="$(jls -j "${jail_name}" path)"
                                                        JID=$(jls -j ${jail_name} jid)
                                                        jailed_pidfile="/log/pids/fingerd.pid"
                                                        pidfile="${jail_root}${jailed_pidfile}"
                                                        

                                                        It works, but I suspect that stuff like $JID can change without notice to me as an implementation detail of rc.subr. Something properly supported would be nice.

                                                      2. 2

                                                        I think the core issue is that desktops have very different requirements than servers. Servers generally have fixed hardware, and thus a hard-coded boot order can be sufficient.

                                                    Modern desktops have to deal with many changes: USB disks being plugged in (mounting and unmounting), Wi-Fi going in and out, changing networks, multiple networks, Bluetooth audio, etc. It’s a very different problem.

                                                        I do think there should be some “server only” init systems, and I think there are a few meant for containers but I haven’t looked into them. If anyone has pointers I’d be interested. Desktop is a complex space but I don’t think that it needs to infect the design for servers (or maybe I’m wrong).

                                                        Alpine has a mix of requirements I imagine. I would only use it for servers, and its original use case was routers, but I’m guessing the core devs also use it as their desktops.

                                                    1. 1

                                                      That’s a lot of corrosion for a three-year-old battery. For some reason, I’m thinking it’s closer to 13 years and that there’s a Y2K style roll-over problem with using a single digit to represent the manufacturing year.

                                                      [And yes, I know it’s a 2010 Camry, but I’m cynic enough to think that the battery going in might not have been the freshest, particularly if bought second-hand.]

                                                      1. 1

                                                        Corrosion patterns are very different in different climates. I’ve seen worse close to the Chesapeake Bay on younger batteries.

                                                        1. 1

                                                          I live in Pittsburgh PA, as does the author of the article (my home is within the area shown on the map in the article). I don’t see these corrosion patterns.

                                                          1. 2

                                                            Hey neighbor! I’ll have to check out some of your writing. :)

                                                        2. 1

                                                          It is a genuine Toyota battery. This was my parents’ car before I inherited it, and my mom is one of those people who insisted on taking the car to the dealership. So it could still be a Toyota replacement from 2018. It’s a tough call. It would just blow my mind if this battery actually lasted 13 years past the manufacturing date… but I also can’t tell you why it would be so bad after 3 years of normal driving conditions in Pennsylvania.

                                                        1. 1

                                                            This covers the laptop scenarios nicely. For Tailscale between laptops and servers there’s also netplan, and then stuff like Kubernetes installations which look at the IPs in /etc/resolv.conf and, if they don’t like them, generate a new /tmp/resolv.conf pointing to external services and set that as the default for pod creation.

                                                            I love that systemd-resolved has a fairly sane conceptual approach to managing roaming and VPNs. I loathe that it repeatedly breaks DNSSEC, whether roaming or not, and that I end up having to manually edit a config file when I leave home or get back, so as to have the least amount of breakage. And generally, a lot of other things systemd-resolved does in DNS protocol land, rather than administration land, end up causing frustration.

                                                            Which is why on servers which don’t roam, I nuke it with prejudice and use Unbound. For laptops, there’s unbound-anchor, which tries to manage the admin side of roaming, but in practice on Ubuntu it tends to be an unreliable core-dumping mess which makes systemd-resolved look good.

                                                          1. 3

                                                              Perhaps this is a decent argument for using something like scons for building a project if you have a team of programmers and a CI build system: if you switch your rebuild requirement from timestamps to a content hash (even one not cryptographically secure, since you’re presumably not defending against team-mates finding content collisions in source files to mess with you … usually), then you can have your project state file live in a cache which can be remounted into build containers often, so that the mainline trunk of development stays as the baseline and branch builds just compile whatever’s different from current mainline. This also adds an incentive to rebase fairly often, to keep compile times low.

                                                            1. 3

                                                              if you switch your rebuild requirement from timestamps to a content hash (even one not cryptographically secure, since you’re presumably not defending against team-mates finding content collisions in source files to mess with you … usually)

                                                                Note that Blake2b is faster than MD5, and Blake3 is potentially even faster. In practice, there is no practical speed difference between a cryptographically secure hash and a mere CRC: the bottleneck is going to be reading from disk anyway (well, except maybe on an M.2 drive).
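
                                                                If you want to measure that yourself, here is a sketch of content-hashing a file with Blake2b via OpenSSL’s EVP interface (assumes OpenSSL 1.1.1 or newer, which provides EVP_blake2b512; link with -lcrypto; the function name hash_file is made up):

                                                                    /* Sketch: Blake2b-512 over a file through OpenSSL's EVP API. */
                                                                    #include <openssl/evp.h>
                                                                    #include <stdio.h>

                                                                    int hash_file(const char *path,
                                                                                  unsigned char out[EVP_MAX_MD_SIZE],
                                                                                  unsigned int *outlen) {
                                                                        FILE *f = fopen(path, "rb");
                                                                        if (!f)
                                                                            return -1;
                                                                        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
                                                                        EVP_DigestInit_ex(ctx, EVP_blake2b512(), NULL);
                                                                        unsigned char buf[1 << 16];
                                                                        size_t n;
                                                                        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                                                                            EVP_DigestUpdate(ctx, buf, n);  /* disk I/O dominates, not hashing */
                                                                        EVP_DigestFinal_ex(ctx, out, outlen);
                                                                        EVP_MD_CTX_free(ctx);
                                                                        fclose(f);
                                                                        return 0;
                                                                    }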

                                                              1. 2

                                                                Right. I don’t think I made any claim about speed of hashes, only speed of builds and how to identify artifacts. Any mainstream hash performance is going to be negligible here.

                                                                My point was that the fact that scons uses MD5 is not a blocker, the hash algorithm still works for build artifact caching … in the direction of “main trunk” -> “dev branch”, at least.

                                                                Looks like scons now supports switching the hash algorithm, with code merged in 2020. After the next round of LTS OS releases are out, you might be able to start relying upon that. :D

                                                            1. 2

                                                              On the point re managing file uploads and clicking on file-names in listings: the macOS terminal emulator iTerm has some interesting escape sequences which can be used with shell integration and apps to set state visible to other things and usable to help here, so perhaps supporting those escape sequences would help.

                                                              Eg, in zsh:

                                                              print -nP '\e]1337;RemoteHost=%n@%M\a\e]1337;CurrentDir=%~\a'
                                                              

                                                              If you support that and clearly communicate what environment variables are exported to support triggering this, then it should move into “feasible” to let drag&drop work in a few ways.

                                                              1. 1

                                                                From the release notes:

                                                                Removed the obsolete binutils 2.17 and gcc(1) 4.2.1 from the tree. All supported architectures now use the LLVM/clang toolchain.

                                                                Does this mean that FreeBSD can be used in a completely GNU-less way now?

                                                                1. 4

                                                                  Not completely, but mostly. If you want to debug a kernel crash you still need gdb.

                                                                  1. 3

                                                                    There’s a wiki page that tracks the [L]GPL removal from base. It looks as if the only two things in base that are [L]GPL’d are:

                                                                          • diff3, which (I think) is required by etcupdate (and possibly pkg?), which makes it difficult to upgrade the system if it’s not there. There’s been some work to replace this with the OpenBSD version, but it doesn’t look as if there’s been much progress for a long time.
                                                                          • dialog (LGPL, not GPL, I believe), which is used for a bunch of the system administration commands. You can build the base system without this if you’re building an appliance that doesn’t need these tools.

                                                                    I generally end up installing bash though, because I learned some bashisms 20 years ago and never got around to retraining my fingers in zsh. I also install vim, so have a big GPL’d thing that I spend a lot of my time in on pretty much every FreeBSD system.

                                                                    1. 2

                                                                      nit: vim isn’t GPL’d, it’s Charityware.

                                                                      1. 2

                                                                        Huh, I thought it was GPL + a suggestion to donate, but it looks as if it’s its own license. Thanks!

                                                                  1. 3

                                                                    I thought it was to prevent unwanted behavior in test when $var is not set and/or to test if $var is set.

                                                                    1. 4

                                                                      That was almost the context in which I first encountered the idiom: when $var expands to the empty string, whether because unset or because explicitly empty.

                                                                      Running [ "$foo" = "needle" ] would somehow lose the empty parameter to the left of the = even though it’s explicitly still there as an argv item, so [ "x$foo" = "xneedle" ] was needed.

                                                                      I think I tend to use the x form when writing single-square brackets tests in portable shell, but skip it when using the conditional expressions [[ ... ]] feature of bash/zsh.

                                                                      1. 4

                                                                        As far as I’m concerned this is the only correct answer. If you didn’t use “x$foo” you could get an error about one side of the comparison being empty.

                                                                        Thankfully test got smarter, but for those of us who’ve been around the block a few times old habits die hard.

                                                                    1. 1

                                                                      Quite surprised by this piece of common lore, which seems to have passed me by entirely at the time. I used cheap NE2000 clones preferentially and almost exclusively for building my small Linux networks through the mid-nineties, and I can’t really think of any problems. Most of my cursed networking from that era was struggling with Linux NFS implementations.

                                                                      1. 2

                                                                        Ditto to the former (but I didn’t build out Linux networks). When switching to PC from Amiga and building out my first box, I followed sage advice and went with an NE2000 because “everything supports it” and the alternatives realistically available in my price budget didn’t have Linux support, or had worse support than the NE2000. I never noticed any problems with it; the two other students I shared a house with that year were also compsci students and we had a household network for our machines.

                                                                        Linux NFS was so bad that discovering it actually worked under FreeBSD was a delight. (I mean, later at ISP postmaster scale, I got too familiar with quirks of FreeBSD/SunOS/NetApp and all the wonderful NFS bugs which could still come up, but nobody was seriously proposing we try to add Linux into the mix: we later added Linux to the mail setup for malware scanning with a commercial product, but since the scanner was closed source we kept it away from the filer network anyway).

                                                                        1. 1

                                                                          Ha ha, I had exactly the same FreeBSD epiphany. Wait, NFS works on this one? Mind…blown.

                                                                      1. 6

                                                                        I can imagine CSAM is how Bitcoin inevitably dies, or the relevance of government inevitably dies.

                                                                        1. 4

                                                                          This is grim, but wouldn’t the blockchain have to be able to store large data for that to play out? I think most blockchains just store pointers to data (hashes). So then law enforcement can take down the data that is pointed to, rather than the entire blockchain? You should be left with a bunch of dangling pointers.

                                                                          I don’t know the details, but my understanding is that Bitcoin only stores a sequence of transaction records, and anything that’s “encoded in the blockchain” has to be done with a bunch of hacks / custom encodings. Other blockchains may be different.

                                                                          In the case of a Botnet, I imagine the control data is pretty small. Actually it could be really tiny, i.e. “the current IP address of the live master”. The bots just need to be able to “call home”.

                                                                          edit: following my own thoughts, I guess what this really means is that Bitcoin allows you to have a “site that never goes down”, so yes I see your point :-/ The continuously- and distributedly- updated pointer is enough.

                                                                          1. 4

                                                                            Even if Bitcoin only had enough degrees of freedom to allow miners to mine for nonce values, then there would be enough room for a dedicated mining group to offer a premium block-signing service, where one bit of each nonce encodes some plaintext.

                                                                            And if that were taken away, then folks could revert to using the traditional technique of storing messages in the payment amount; there is not much difference between 99¢ and $1.01 on average, but it encodes a trit.

                                                                            1. 3

                                                                             Never even thought about the idea of storing data in transaction values. $1 right now is 1,721 satoshis… there are a lot of ways to use that to encode data.

                                                                          2. 3

                                                                            What is CSAM in this context?

                                                                            1. 14

                                                                              Child sexual abuse material. A more apt term than “child pornography”, which by association lends a more professional, more consensual-sounding framing to what is child abuse.

                                                                              1. 2

                                                                                I like DKG’s terminology in the PGP keystore abuse-resistance RFC drafts: “toxic data”.

                                                                                What data is toxic can vary from jurisdiction to jurisdiction, although there are some near-universal constants.

                                                                                https://tools.ietf.org/html/draft-dkg-openpgp-abuse-resistant-keystore-04

                                                                              2. 4

                                                                                I suspect “Child sexual abuse material”

                                                                              3. 2

                                                                                And taxes.