Threads for bfiedler

  1. 4

    Great post!

    Shameless plug of my collection “types are harder than you think”: https://3fx.ch/typing-is-hard.html (which needs an update for Go after generics were released :D)

    1. 6

      Playing my second chess game in the Swiss Team Championship; these are my first officially rated games ever. The first one was a success; I hope to keep it up!

      Edit: Here’s the first one, if you are interested.

      1. 22

        I think “No Third Party” in the case of magic links is wrong: the user’s e-mail provider is a trusted third party. Pwn the mail provider (or even something on the mail path), pwn the user.

        The same goes for the mail provider being down e.g. for maintenance.

        1. 5

          Oops, thanks for the correction. It should be third-party but decentralized. (Ignoring the case of running your own mail server or telephone network.)

          1. 3

            Even if it is your server, it might not be an encrypted email, or in your DC, or delivered to the right server first… easy enough to say you are ignoring those cases but they may not be that far from the norm.

            1. 1

              That is what I meant. Let me try to rephrase to be more clear:

              I will mark it as third-party, because the case of running your own tightly controlled mail server is rare, and it still depends on secure sending from the source server, so there are likely third parties involved.

            2. 2

              I don’t really see why you assume a login link is “leak proof” either - at its core a “login link” is just a one time password delivered out of band (via email).

              The server needs to (a) generate and send you the OTP and (b) have some way to verify that the OTP is correct, so any “leaking” issues that passwords have can apply to email links just the same. And due to the “if someone can fuck it up, they will” law of computer security, I wouldn’t want to assume that people implementing email links are treating them anywhere near as “securely” as we expect people to treat regular passwords.

              1. 1

                Yeah. I thought about this for a while, but I came to the conclusion that leaking these hopefully short-lived OTPs didn’t really have the same impact as leaking a long-lived password.

                Really everything is a spectrum and I had to pick yes or no. For a well implemented system it seemed more like a yes. Although I agree it is easy to get wrong.

                1. 2

                  Saying an OTP cannot leak means you’re assuming the best-case scenario for OTP handling, while simultaneously assuming the worst-case scenario for password storage.

                  If we take “might” to mean “has the theoretical possibility, depending on implementation and other factors”, then realistically they both can leak, and I think if you’re going to assume long-lived password storage might leak, you have to assume OTPs might leak too.

                  Yes, an OTP should be short-lived, but we don’t know that it’s implemented that way, and the belief that it’s short-lived (assuming it’s actually expunged after a short time) is almost certainly going to give people a false sense of security that a less secure implementation (e.g. storing them in clear text) is “safe”.

                  Yes an OTP shouldn’t be vulnerable to variants of replay attacks (i.e. a login link explicitly has to be a GET request, which means the OTP is potentially recorded in a lot of intermediate places), but we don’t know that it’s implemented that way.

                  Yes both should be hashed strongly in a manner that nullifies the dangers if the hashes themselves were to leak, but we don’t know that it’s implemented that way.

            3. 4

              The same goes for password recovery emails, FWIW.

              1. 18

                I think that is the best argument in favour of email login links. “We are going to allow auth resets by email anyways, may as well make that the only way in.”

                1. 5

                  Password recovery emails aren’t used anywhere nearly as often as a login email may be needed though, so downtime/maintenance issues are much less of a concern, as is non-instant delivery.

                  Additionally, there’s a security aspect here: if your mailbox is compromised and someone uses it to retrieve a login link, you won’t necessarily know, ever.

                  If your mailbox is compromised and someone uses it to reset your password, you’ll know the next time you try to login.

              1. 2

                Stabilized Not for !.

                Interesting that any trait with only methods and no functions can be implemented for the never type since they can’t be called: Primitive Type never.

                1. 3

                  Interesting that any trait with only methods and no functions can be implemented for the never type

                  In other contexts, that is called the bottom type, because it sits at the bottom of the type lattice, and is therefore a subtype of every other type. So I find this fairly unsurprising.

                  1. 1

                    Isn’t that the initial type? I thought the bottom type was () since we can always reach it

                    1. 4

                      No. ‘bottom’ refers to subtyping relationships, not compositional relationships. It’s true you can make anything from (), |, and *. But () isn’t a subtype of ()|().

                      E: this is general nomenclature. It’s possible rust has its own vocabulary I don’t know about.

                1. 1

                              And now your adversary’s optimal strategy takes this strategy into account…

                  1. 2

                    A nice result from game theory is that there exists a (potentially mixed) optimal strategy for both players. But that doesn’t tell us what the optimal strategy is. Since OP allows ships of length one the hardest part is hitting the ships of size one. A good strategy (at least to me) seems to be placing the large ships on the rim and then randomly distributing the size one ships. Your opponent then spends most of their shots searching for the small ones.

                  1. 13

                    I’d like to attach my personal experience of attempting to contribute to Debian here because it seems somewhat relevant to the author’s assertion that “Debian has long been experiencing a decline in the amount of people willing to participate in the project.”

                    I’ve tried to contribute on three different occasions; two were plain old bug reports, one was a patch. A common thread for each of these contributions was a demoralizing absence of feedback. One of the two bug reports has remained unanswered for over a year now, and my patch submission for about nine months.

                    Now, I sent the patch I wanted to contribute directly to the relevant maintainer mailing list. I wasn’t sure whether this was the correct way of submitting a patch for a downstream package, so I apologized preemptively in case this was not proper form and asked cordially to be pointed in the right direction if this wasn’t it.

                    No “thanks for your contribution,” no requests for additional information, no human response whatsoever.

                    What I did get was spam. Some of it was addressed to the alias I’d used for my bug report, and some of it came straight through the nginx package maintainer list I’d eventually unsubscribe from after giving up hope of receiving feedback and getting sick of receiving spam every couple of days.

                    Credit where it’s due, my other bug report was responded to and fixed, though also after a several months long period of radio silence.

                    I have no idea what to take away from this experience other than the impression that contributions are generally unwelcome, and that I should probably not bother trying to contribute in the future. Maybe this experience is not entirely uncommon, or maybe it’s just me. I really don’t know.

                    1. 11

                      I have no idea what to take away from this experience other than the impression that contributions are generally unwelcome, and that I should probably not bother trying to contribute in the future.

                      What I would take away from this (and from my similar experiences) is that Debian maintainers are overworked volunteers with too little time and we need more of them. Of course, getting to be one involves somehow getting time from existing ones, so it’s a vicious cycle.

                      1. 1

                        I was a Debian developer in the late 90s (I think from 1999-2003?) and the original application process was quite quick. I later went into hiatus for a period and tried to reactivate my access after about ten years away but the whole process was so onerous that I’ve just not taken it further. A pity, as I’d love to contribute again.

                        1. 1

                          Yeah the spam is definitely real. I get at least five messages a day (might need to adjust my filter as well, but oh well). I’m happy I used a semi-throwaway address for that list - I can always delete the address and be rid of that spam forever.

                          1. 1

                            I wanted to become a Debian Developer around 2003-2005 and back then you could expect the whole process to take about a year. Spoiler alert: I never even officially started the process.

                             From what I heard this has improved immensely, but I never followed up and instead have been contributing to other projects. Personally, GitHub has actually been a huge boon here: the friction is just so much less. I’ve also had problems contributing to other older projects before; at one time an important patch of mine hung for so long that someone else with commit rights fixed it before the turnaround time…

                             On the other hand I don’t think this can be solved, but being more open (and yes, accepting PRs on GitHub or another public forge) usually makes for a 10x better experience…

                          1. 5

                               If only there were a way to find out what the result of a program’s execution is…

                            1. 3

                              Why add two numbers together when you can multiply two billion-parameter matrices?

                            1. 37

                              This one isn’t really a “this week”, though there will be plenty to do every week for a while, but…

                              I’M GONNA BE A NEW FATHER!!

                              Finally hit the 12-week mark and told friends/family the news, and my wife and I are absolutely ecstatic still about it. Due date is in August. So much to buy, so much to prepare, so much to learn!

                              1. 8

                                First of all, congrats!

                                So much to buy,

                                 My advice, if you want it, is to hold back on buying too many things. Buy the minimum and adjust. In the first weeks/months children really do not need that much stuff. They need their parents and care. They do not need all of the “OMG you need all these things” items on baby lists. With online ordering you can always get something quickly if you see a need for it. We definitely bought things we never needed or could easily have gotten by without. Less is more!

                                1. 2

                                  Yes this. The stuff can take on an oppressive weight of its own.

                                  1. 1

                                    Fantastic advice thank you! Are there any things you can think of that are the “must buy” things right away? Don’t assume anything is “too obvious” to suggest please!

                                    1. 3

                                      Diapers+wipes. Your child will either need (from my local sizing guide) “n” or “1”. Don’t overbuy the “n” but have the “1” available.

                                      Infant clothes.

                                      Onesies.

                                      “Burp cloths” (though any fabric that can be washed regularly as well as thrown over your shoulder will do)

                                      Any soft place to put down a child on the floor. (any towel or rug or carpet or possibly even hardwood will do this job.) They can roll, accidentally, long before they can intentionally roll.

                                      That’s it. Oh, and if you will be driving back from the place of birth, you will need an infant car seat.

                                      1. 6

                                        One secret is that you very very very badly need onesies with zippers or snaps, not buttons. The people who put buttons on infant clothes are sadists.

                                        1. 3

                                          There needs to be an ISO standard for the number and placement of onesie snaps, because every single one puts them in different places and it’s impossible to keep straight.

                                        2. 4

                                          I would add a sleeping bag - for babies, not for camping (better 2 so you can wash/dry one) and a heat lamp for changing diapers/clothes.

You definitely do not need any sort of toys, plush animals, books, a fully furnished children’s room or any of that stuff. That will all come a lot later. Remember, for the first 3 months your child is basically doing its fourth trimester, just outside the body of its mother.

                                          1. 1

                                            Hospitals give you diapers and blankets, so there’s no point in buying those now. Definitely do stock up on baby blankets when you’re in the hospital though. You’re paying for them!

                                            1. 1

Not if you don’t give birth in a hospital. Home births, birthing centers, and hospitals are all valid choices that people make with eyes wide open to the risks and rewards of each approach.

                                              1. 1

                                                That depends on your country/city/hospital etc. We did not get any take-home blankets or diapers from the hospital, but that may have been due to the pandemic. Not sure.

                                                1. 1

                                                  We had a pandemic baby and took home a bunch of blankets again. They don’t give you the blankets; you just take them when you leave because, again, you’re totally paying for them.

                                                  1. 1

                                                    In your country. In my country I am not paying the hospital one cent out of my own pocket. It is all covered by public health insurance.

                                                    1. 1

                                                      It’s nice that you live in a civilized country, but you’re still paying for it, just on tax day. Anyway this is not worth arguing about. Either the hospital has a bunch of extra blankets or they don’t.

                                          2. 1

                                            Yeah, we were given so many hand-me-downs I don’t think we have actually bought any clothes for our kids ourselves still, three years later. We did have to break down and buy some shoes though, alas.

                                          3. 4

                                            Congrats! It’s off topic for this forum, but I think we are really living in the golden age of parenting. Really, congrats!

                                            1. 1

                                              Why a golden age of parenting?

                                              1. 7

                                                In the past we’ve seen two extremes of “parenting” (this pseudo science to describe being responsible for little people). One extreme is Control, where you make kids do things and they will do it. The other is “Whatever kids do, they do.” We now are re-learning the lessons of time immemorial that children are little people.

                                                These lessons have become mainstreamed in recent years, making this the golden age of parenting.

                                                Show them empathy. Be the responsible one, setting bounds. Prune negative behavior, cultivate positive behavior, but don’t pretend you can declare what positive or negative behavior a child will or will not exhibit. State things the child can accomplish (You did a good job putting that puzzle together) rather than the things a child can’t change (You finished the puzzle? You are so smart!) Let children help. They will want to unless you stop them (You brought me a dozen eggs? Thank you! as opposed to Stop! Put down the eggs!) Let children play. Danger of mild bodily harm during play is good for children over time, even if occasionally individual children will need stitches or casts.

                                                These “respectful” styles of parenting that also don’t allow children to run roughshod over all the adults in their lives are out there. Parenting is no longer about self sacrifice or discipline above all else, but about building real relationships with little people that survive while those little people become bigger. (and of course real relationships involve sacrifice)

                                                Not everybody subscribes to a single worldview. But this worldview is available, with support groups for those who want them. A golden age.

                                                1. 2

                                                  I presume because it was never easier to work from home and spend time with your children.

                                                  1. 3

                                                    if you are in tech…

                                                    1. 1

                                                      As the majority of people on here are, I think :)

                                                    2. 2

                                                      Not quite :) explanation

                                                      1. 2

                                                        ? You can’t watch a child and get any work done. (I guess if your job is doing social media. Paternity leave was a great time for Twitter for me.) People act like working from home makes child care easier, but all it means is you have a short commute so you can “get home” faster.

                                                        1. 2

                                                          People act like working from home makes child care easier, but all it means is you have a short commute so you can “get home” faster.

                                                          That is huge though. That means saving multiple hours each day for many people.

                                                          1. 2

                                                            Oh yes, it’s definitely a plus, but it seems like there’s some common misconception I run into that you can somehow both watch a baby and do work, and that is just not possible for a job with any mental demands whatsoever.

                                                            1. 1

                                                              Though you could hold a child and type, if you had a one handed keyboard.

                                                  2. 1

                                                    Congratulations!

                                                    1. 1

                                                      Congrats!!

                                                      1. 1

                                                        Congrats!!

                                                        1. 1

                                                          Congratulations to your family!

                                                          1. 1

                                                            Mazel tov!

                                                            1. 1

                                                              Congrats! If you’re not part of it, the /r/daddit community on Reddit is pretty nice, especially for those early “are they supposed to be this exhausting?” days :)

                                                              1. 2

                                                                I feel like you just sent me down the deepest rabbit hole of my life…😂

                                                                1. 2

                                                                  The answer to that question is, of course, always an emphatic yes.

                                                              1. 2

                                                                Correct me if I’m wrong, but this shouldn’t be too bad for system stability/resource consumption since Arch doesn’t automatically start and enable most services when installed, contrary to Debian for example.

                                                                1. 2

                                                                  One thing this solver doesn’t seem to try, but is useful in Wordle is trying a word without the previous matches in order to find more possible letters. But now I wonder what’s the threshold where it’s useful? For example if you have 2 letters known, it’s probably useful to next guess a word without any of them to maximise the chances of finding more letters. But if you have 4 letters in the right place, it’s likely better to try a specific word (if there are only 1-2 options).

                                                                  1. 2

With four letters known but five possible leftover words, it is better to guess a word that does not contain letters you already know. Assume you find a word w containing three of the candidate letters (which is not unreasonable). Then you guess w; in three of the five cases the feedback pins down the answer and you solve it on the next guess (2 guesses total), and in the other two cases two words remain, needing on average 1.5 more guesses (2.5 total), for an overall average of (3 * 2 + 2 * 2.5) / 5 = 2.2.
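Reading the model above as “w resolves three of the five candidates at a cost of two guesses each, and leaves two candidates (2.5 guesses on average) in the other two cases”, the arithmetic works out as:

```latex
E[\text{guesses}] = \frac{3 \cdot 2 + 2 \cdot 2.5}{5} = \frac{11}{5} = 2.2
```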

                                                                    Guessing the five words in sequence gives you 2.5 guesses on average.

                                                                    1. 1

                                                                      On “hard mode” you’re not allowed to do that, you have to include everything you’ve discovered up to that point in the next guess.

                                                                    1. 15

                                                                      You lost me at “the great work from homebrew”

Ignoring UNIX security best practices of the last, I dunno, 30 or 40 years, then actively preventing people from using the tool in any fashion that might be more secure, and refusing to acknowledge any such concerns is hardly “great work”.

                                                                      I could go on about their abysmal dependency resolution logic, but really if the security shit show wasn’t enough to convince you, the other failings won’t either.

                                                                      But also - suggesting Apple ship a first party container management tool because “other solutions use a VM”, suggests that either you think a lot of people want macOS containers (I’m pretty sure they don’t) or that you don’t understand what a container is/how it works.

The “WSL is great because now I don’t need a VM” bit is either ridiculously good sarcasm or, yet again, evidence that you don’t know how something works. (For those unaware, WSL2 is just a VM. Yes, it’s prettied up to make it more seamless, but it’s a VM.)

                                                                      1. 23

                                                                        I don’t know what’s SO wrong about Homebrew that every time it’s mentioned someone has to come and say that it sucks.

                                                                        For the use case of a personal computer, Homebrew is great. The packages are simple, it’s possible and easy to install packages locally (I install mine in ~/.Homebrew) and all my dependencies are always up to date. What would a « proper » package manager do better than Homebrew that I care about? Be specific please because I have no idea what you’re talking about in terms of security « shit show » or « abysmal » dependency resolution.

                                                                        1. 12
                                                                          • A proper package manager wouldn’t allow unauthenticated installs into a global (from a $PATH perspective) location.
                                                                          • A proper package manager wouldn’t actively prevent the user from removing the “WTF DILIGAF” permissions Homebrew sets and requiring authenticated installs.
                                                                          • A proper package manager that has some form of “install binaries from source” would support and actively encourage building as an untrusted user, and requiring authentication to install.
                                                                          • A proper package manager would resolve dynamic dependencies at install time not at build time.
                                                                          • A proper open source community wouldn’t close down any conversation that dares to criticise their shit.
                                                                          1. 11

                                                                            Literally none of those things have ever had any impact on me after what, like a decade of using Homebrew? I’m sorry if you’ve run into problems in the past, but it’s never a good idea to project your experience onto an entire community of people. That way lies frustration.

                                                                            1. 5

                                                                              Who knew that people would have different experiences using software.

                                                                              it’s never a good idea to project your experience onto an entire community of people

                                                                              You should take your own advice. The things I stated are objective facts. I didn’t comment on how they will affect you as an individual, I stated what the core underlying issue is.

                                                                              1. 6

                                                                                You summarized your opinion on “proper” package managers and presented it as an authoritative standpoint. I don’t see objectiveness anywhere.

                                                                            2. 3

                                                                              I don’t really understand the fuss about point 1. The vast majority of developer machines are single user systems. If an attacker manages to get into the user account it barely matters if they can or cannot install packages since they can already read your bank passwords, SSH keys and so on. Mandatory relevant xkcd.

                                                                              Surely, having the package manager require root to install packages would be useful in many scenarios but most users of Homebrew rightfully don’t care.

                                                                            3. 8

As an occasional Python developer, I dislike that Homebrew breaks old versions of Python, including old virtualenvs, when a new version comes out. I get that the system is designed to always get you the latest version of stuff and have it all work together, but in the case of Python, Node, Ruby, etc. it should really be designed so that it gets you the latest point releases but leaves the 3.X versions installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                                                                              1. 8

In my opinion, for languages that can break between minor releases you should use a version manager (Python seems to have pyenv). That’s what I do with Node: I use Homebrew to install nvm and I use nvm to manage my Node versions. For Go, in comparison, I just use the latest version from Homebrew because I know their goal is backwards compatibility.

                                                                                1. 5

                                                                                  Yeah, I eventually switched to Pyenv, but like, why? Homebrew is a package manager. Pyenv is a package manager… just for Python. Why can’t homebrew just do this for me instead of requiring me to use another tool?

                                                                                  1. 1

                                                                                    Or you could use asdf for managing python and node.

                                                                                  2. 7

                                                                                    FWIW I treat Homebrew’s Python as a dependency for other apps installed via Homebrew. I avoid using it for my own projects. I can’t speak on behalf of Homebrew officially, but that’s generally how Homebrew treats the compilers and runtimes. That is, you can use what Homebrew installs if you’re willing to accept that Homebrew is a rolling package manager that strives always to be up-to-date with the latest releases.

If you’re building software that needs to support a version of Python that is not Homebrew’s favored version, you’re best off using pyenv with brew install pyenv or a similar tool. Getting my teams at work off of brewed Python and onto pyenv-managed Python was a small amount of work that’s saved a good bit of troubleshooting time.

                                                                                    1. 2

                                                                                      This is how I have started treating Homebrew as well, but I wish it were different and suitable for use as pyenv replacement.

                                                                                      1. 2

                                                                                        asdf is another decent option too.

                                                                                      2. 5

                                                                                        I’m a Python developer, and I use virtual environments, and I use Homebrew, and I understand how this could theoretically happen… yet I’ve literally never experienced it.

                                                                                        it should really be designed that it gets you the latest point releases, but leaves the 3.X versions to be installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                                                                                        Yep, that’s what it does. Install python@3.7 and you’ve got Python 3.7.x forever.

                                                                                        1. 1

                                                                                          Maybe I’m just holding it wrong. :-/

                                                                                        2. 3

                                                                                          I found this article helpful that was floating around a few months ago: https://justinmayer.com/posts/homebrew-python-is-not-for-you/

                                                                                          I use macports btw where I have python 3.8, 3.9 and 3.10 installed side by side and it works reasonably well.

                                                                                          For node I gave up (only need it for small things) and I use nvm now.

                                                                                        3. 8

                                                                                          Homebrew is decent, but Nix for Darwin is usually available. There are in-depth comparisons between them, but in ten words or less: atomic upgrade and rollback; also, reproducibility by default.

                                                                                          1. 9

                                                                                            And Apple causes tons of grief for the Nix team every macOS release. It would be nice if they stopped doing that.

                                                                                            1. 2

I stopped using Nix on macOS after it started requiring a separate unencrypted volume just for Nix. Fortunately, NixOS works great in a VM.

                                                                                              1. 2

                                                                                                It seems to work on an encrypted volume now at least!

                                                                                          2. 4

I really, really hate how Homebrew never asks me for confirmation. If I run brew upgrade it just does it. I have zero control over it.

                                                                                            I come from zypper and dnf, which are both great examples of really good UX. I guess if all you know is homebrew or .dmg files, homebrew is amazing. Compared to other package managers, it might even be worse than winget….

                                                                                            1. 2

                                                                                              If I run brew upgrade it just does it

                                                                                              … yeah? Can we agree that this is a weird criticism or is it just me?

                                                                                            2. 2

                                                                                              Overall I like it a lot and I’m very grateful brew exists. It’s smooth sailing the vast majority of the time.

                                                                                              The only downside I get is: upgrades are not perfectly reliable. I’ve seen it break software on upgrades, with nasty dynamic linker errors.

                                                                                              Aside from that it works great. IME it works very reliably if I install all the applications I want in one go from a clean slate and then don’t poke brew again.

                                                                                            3. 4

                                                                                              you think a lot of people want macOS containers (I’m pretty sure they don’t)

                                                                                              I would LOVE macOS containers! Right now, in order to run a build on a macOS in CI I have to accept whatever the machine I’m given has installed (and the version of the OS) and just hope that’s good enough, or I have to script a bunch of install / configuration stuff (and I still can’t change the OS version) that has to run every single time.

                                                                                              Basically, I’d love to be able to use macOS containers in the exact same way I use Linux containers for CI.

                                                                                              1. 1

                                                                                                Yes!!

                                                                                                1. Headless macos would be wonderful
                                                                                                2. Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc
                                                                                                1. 1

                                                                                                  Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc

                                                                                                  These days, Docker (well, Moby) delegates to containerd for managing both isolation environments and image management.

Docker originally used a union filesystem abstraction and tried to emulate that everywhere. Containerd provides a snapshot abstraction and tries to emulate that everywhere. This works a lot better because you can trivially implement snapshots with union mounts (each snapshot is a separate directory that you union mount on top of another one) but the converse is hard. APFS has ZFS-like snapshot support, so adding an APFS snapshotter to containerd is ‘just work’: it doesn’t require anything else.
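As a rough illustration of the “each snapshot is a directory union-mounted on top of another” idea, here is a minimal Linux overlayfs sketch (made-up paths, needs root; not containerd’s actual snapshotter code, and obviously not the APFS path discussed above):

```c
#include <stdio.h>
#include <sys/mount.h>

/* Hypothetical paths: /snap/base is the read-only parent snapshot,
 * /snap/new collects the writes that make up the child snapshot, and
 * /snap/merged is the combined view handed to the container. */
int main(void) {
    const char *opts = "lowerdir=/snap/base,upperdir=/snap/new,workdir=/snap/work";
    if (mount("overlay", "/snap/merged", "overlay", 0, opts) != 0) {
        perror("mount");   /* needs root and pre-existing directories */
        return 1;
    }
    return 0;
}
```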

If the OS provides a filesystem with snapshotting and an isolation mechanism then it’s relatively easy to add a containerd snapshotter and shim to use them (at least, in comparison with writing a container management system from scratch).

                                                                                                  Even without a shared-kernel virtualisation system, you could probably use xhyve[1] to run macOS VMs for each container. As far as I recall, the macOS EULA allows you to run as many macOS VMs on Apple hardware as you want.

                                                                                                  [1] xhyve is a port of FreeBSD’s bhyve to run on top of the XNU hypervisor framework, which is used by the Mac version of Docker to run Linux VMs.

                                                                                              2. 2

                                                                                                Ignoring which particular bits of Unix security practices is problematic? There are functionally no Macs in use today that are multi-user systems.

                                                                                                1. 3

All of my Macs and my family’s Macs are multi-user.

                                                                                                  1. 2

The different services in the OS run as different users. It is in general a good thing to run services with the minimal required privileges: different OS-provided services run with different privileges, different Homebrew services run with different privileges, etc. So reducing the blast radius is a pro even if there is only one human user, as there are often more users active at once; it’s just that not all users are meatbags.

                                                                                                  2. 1

I’ve been a Homebrew user since my latest Mac (2018), but on my previous one (2011) I used MacPorts. Given you seem to have more of an understanding of what a package manager should do than I have, do you have any thoughts on MacPorts?

                                                                                                    1. 4

                                                                                                      I believe MacPorts does a better job of things, but I can’t speak to it specifically, as I haven’t used it in a very long time.

                                                                                                      1. 1

                                                                                                        Thanks for the response, it does seem like it’s lost its popularity and I’m not quite sure why. I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                                                                                        1. 3

                                                                                                          I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                                                                                          Pretty much this reason. Homebrew came out when macports was still source-only installs and had some other subtle gotchas. Since then, those have been cleared up but homebrew had already snowballed into “it’s what my friends are all using”

                                                                                                          I will always install MP on every Mac I use, but I’ve known I’ve been in the minority for quite awhile.

                                                                                                          1. 1

                                                                                                            Do you find the number of packages to be comparable to brew? I don’t have a good enough reason to switch but would potentially use it again when I get another mac in the future.

                                                                                                            1. 3

I’ve usually been able to find something unless it’s extremely new, obscure, or has bulky dependencies like gtk/qt or specific versions of llvm/gcc. The other nice thing is that if the build is relatively standard, uses ‘configure’, or fits into an existing PortGroup, it’s usually pretty quick to whip up a local Portfile (they’re Tcl-based, so it’s easy to copy a similar package’s config and modify it to fit).

                                                                                                              Disclaimer: I don’t work on web frontends so I usually don’t deal with node or JS/TS-specific tools.

                                                                                                              1. 3

On MacPorts vs Homebrew I usually blame popularity first, and irrational fear of the term ‘Ports’, as in “BSD Ports System”, second. On the second cause, a lot of people just don’t seem to know that what started off as a way to make ‘configure; make; make install’ more maintainable across multiple machines has turned into a binary package creation system. I don’t know anything about Homebrew so I can’t comment there.

                                                                                                    1. 3

                                                                                                      I think that CGO is implicitly enabled when using networking stuff in Go programs, but I might be misremembering something.

                                                                                                      1. 1

                                                                                                        You need it for non-sucking getaddrinfo() on Linux, it enables NSS modules.

                                                                                                      1. 7

                                                                                                        Hey that was my mistake! Funny to see it pop up again :)

                                                                                                        1. 6

                                                                                                          A critique of fork() from microsoft is disappointing since these systems abstractions aren’t universal truths and can only be judged meaningfully within the context of the cultures that created them. UNIX was designed by a culture of people who liked to create and compose lots of small programs and fork() works fine if that’s your preferred development methodology. Microsoft culture is to build big sophisticated monolithic programs like Word. Since Linux became so successful, all the people who come from those different cultures are now being asked to use it, and they’re not happy, because it was designed to accommodate a style that differs from their own.

                                                                                                          1. 22

                                                                                                            A critique of fork() from microsoft is disappointing since these systems abstractions aren’t universal truths and can only be judged meaningfully within the context of the cultures that created them.

                                                                                                            I don’t think an ad hominem is a very helpful start to a post, but I partially agree with the second part of this sentence. Systems abstractions are not universal truths but their value is universal within the context of the hardware on which they need to run. This changes over time and across market segments (abstractions for rack-scale computing don’t work well on embedded systems with significantly less than 1 MiB of RAM, for example).

                                                                                                            I work for Microsoft Research, but I held the opinion that fork is a terrible abstraction for at least 10, probably closer to 15, years before I joined Microsoft and it has absolutely nothing to do with Windows or the VMS / Mainframe culture. Prior to joining Microsoft I served two terms on the FreeBSD Core Team, so I hope that establishes my credentials as someone who understands the ‘UNIX culture’. I’ve also worked on a port of Linux to an environment with no MMU that aimed to run existing Linux software.

                                                                                                            These folks are not the only people to object to fork. Mothy has been complaining about it for years and there’s a huge section in the UNIX Haters’ Handbook (1994) about it.

                                                                                                            It’s easy to understand fork if you understand the hardware context in which it was created, which has nothing to do with the culture of the programmers. On PDP machines, you had one process executing in core memory and when you did a context switch you wrote it out to drum (or similar) memory and read another process in. On this hardware, fork is an obvious abstraction: you write out the process and then create a new copy of the kernel data structures. You now have a copy of the process that you’ve just written out sitting in memory for free, doing anything other than fork is more work.

                                                                                                            The fork abstraction started showing its age quite early. Once UNIX ran on systems with MMUs, you could keep two or more processes in core (a term that remains, even though core memory is a historical curiosity) at a time and context switching became just a matter of updating the segment descriptor table (or pointing to a different one). At this point, fork required copying the entire process and was expensive. With paged MMUs, it became a bit cheaper as you could ‘just’ mark every page as copy-on-write, but that complicated the VM subsystem. Two other advances made it even worse:

• Threads were added to UNIX. Only the calling thread is copied by fork, so you may end up with locks held by other threads that you can’t unlock (especially if they’re error-checking pthread mutexes and abort if you try to unlock them from a thread that doesn’t own them), so if you want to actually use the forked child rather than just call execve then you need every library that creates threads to be aware of this and release locks then recreate its threads in the child, which is basically impossible to get right (see the sketch after this list).
• SMP came along and now updating page tables required cross-core synchronisation. This is particularly bad on x86 (except AMD Milan) or RISC-V, where you need an IPI to do TLB invalidates, so the cost of fork scales badly with both the size of the process and the number of cores. On a system with 128 cores and 1 TiB of RAM, fork does a phenomenal amount of work to create a virtual memory environment that often lives for less than 1 ms and which doesn’t touch more than a couple of pages.
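A minimal sketch of the threads-plus-fork hazard from the first bullet, using POSIX pthread_atfork around a single hypothetical library-owned lock. The point is that every threaded library in the process would need handlers like these for fork to stay safe, which is why it is so hard to get right:

```c
/* Sketch only: one mutex protected across fork() with pthread_atfork.
 * Compile with: cc -pthread atfork.c */
#include <pthread.h>
#include <sys/wait.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Without handlers like these, a fork() that happens while another thread
 * holds `lock` copies the mutex into the child in its locked state, where
 * no thread exists to ever unlock it. */
static void prepare(void) { pthread_mutex_lock(&lock); }   /* before fork, in parent */
static void parent(void)  { pthread_mutex_unlock(&lock); } /* after fork, in parent */
static void child(void)   { pthread_mutex_unlock(&lock); } /* after fork, in child */

int main(void) {
    pthread_atfork(prepare, parent, child);

    pid_t pid = fork();
    if (pid == 0) {
        /* Only safe because the handlers guaranteed `lock` was not held
         * at the instant of the fork. */
        pthread_mutex_lock(&lock);
        pthread_mutex_unlock(&lock);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```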

                                                                                                            Awareness of the pain of fork is not new. The vfork system call was introduced somewhere around 2.9BSD or 3BSD, putting it in or before 1980. The FreeBSD 1.0 (1992) man page says that vfork will be removed in a future release. In FreeBSD 10 (2014), we quietly removed that comment because vfork remains the best way of creating a child process on *NIX, in spite of being incredibly hard to use correctly (it can, at least, be used to create APIs that can be used easily).

                                                                                                            There are basically two ways of creating a process that don’t suffer from the problems of fork.

                                                                                                            The first (which is the direction VMS / NT went) is conceptually cleaner. You have a kernel API that creates a new empty process and returns a capability to it (which Windows calls a handle). You can use this capability to do things like map files or anonymous memory objects into the process, write initial state into that memory (e.g. stack contents), inject threads, and so on. The big disadvantage of this is that it requires remote versions of all of your APIs. On *NIX, mmap is how you modify your process’s address space but it doesn’t let you modify another process’s address space. Similarly, pthread_create lets you create a thread but it doesn’t let you create a thread in another process.

                                                                                                            In most *NIX systems, there’s a separation of concerns around threading where the kernel doesn’t know much about the thread other than that it is a schedulable entity and it has a register file that needs to be context switched when it is [de]scheduled. The threading library builds the pthread abstractions in userspace and the same kernel can run multiple different threading libraries. That kind of design makes it a bit more difficult to implement this model because even if there were a remote form of mmap and newthr / clone + CLONE_THREAD, the userspace threading library would have to be restructured to use these to inject remote threads and do things like copy TLS segments from the binary. This is simpler with Windows where there is precisely one threading library but that simplicity comes at the cost of flexibility.

                                                                                                            Designing these APIs to avoid confused deputy vulnerabilities is also nontrivial. If I create a new process owned by a different user, or with reduced privileges in some other way, then I need to ensure that any OS handles that I’ve created for the child were either with the reduced rights or were intentionally created with elevated privileges.

                                                                                                            The other approach is embodied by vfork. You let a process run with a different set of kernel state other than the memory mapping and then launch the new process once it’s modified the state that it wants to. This means that you don’t need remote versions of open, close, socket, and so on, because you’re setting up the kernel state using the local versions of the calls. The BSD incarnation of this, via vfork is a bit limited in that the only way to end a vfork context is via execve, which creates a completely new memory map, so you can’t use this to set up shared memory regions between the parent and child. You also can’t create threads in the child. These things could be built with cooperation from the run-time linker, but they aren’t in any *NIX systems that I’m aware of.

                                                                                                            The advantage of vfork is that there’s a clear separation between things done with the parent’s rights and things done with the child’s rights. You can do things like call setuid or cap_enter in the vfork context and then anything done after this is done with the child’s permissions context. Copying the file descriptor table can still be quite costly, especially given that most child processes immediately call closefrom to close most of them after putting the small number that they want to explicitly inherit in the right places in the file descriptor table.
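A rough sketch of that pattern (hypothetical helper name; strictly speaking POSIX only blesses execve/_exit after vfork, but this is the BSD-style usage described above, where everything after setuid runs with the child’s rights):

```c
/* Hypothetical helper sketching the vfork pattern described above. */
#include <sys/types.h>
#include <unistd.h>

static pid_t spawn_unprivileged(const char *path, char *const argv[],
                                char *const envp[], uid_t unprivileged_uid) {
    pid_t pid = vfork();
    if (pid == 0) {
        /* Still borrowing the parent's address space, but the kernel state
         * changed from here on belongs to the child, not the parent. */
        if (setuid(unprivileged_uid) != 0)
            _exit(126);              /* drop privileges before exec */
        execve(path, argv, envp);    /* replaces the shared address space */
        _exit(127);                  /* only reached if execve failed */
    }
    return pid;  /* parent resumes once the child has exec'd or exited */
}
```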

The big downside of vfork is that it can affect process state. For example, if you accidentally call malloc in the vfork context and then don’t free the memory, it’s leaked in the parent. This isn’t so bad with C++ and RAII, where you can allocate space on the stack for the arguments to execve and then do everything else in a nested block so that destructors all run before the execve call, but it’s painful to use in C, which is the language that UNIX was co-designed with. It’s probably fine in a garbage-collected language, as long as you don’t accidentally acquire any locks in the vfork context.

                                                                                                            Both of these approaches work well with any kind of process-isolation technology and even without an MMU. In the CHERI project, we’ve extended FreeBSD with a coexecve system call that creates a new OS process within the same address space as the parent, isolated using CHERI capabilities. This works well with vfork (we added a sysctl that turns any execve in a vfork context into a coexecve and most things seem to just work) but would be impossible to support with fork.

                                                                                                            TL;DR: The fact that fork is a bad design is fairly uncontroversial among systems researchers and has been well-known to UNIX kernel developers since before I was born.

                                                                                                            1. 1

                                                                                                              Two other ideas rattling around in my head, either of which could be implemented in user space:

                                                                                                              1. There’s a spawn() function which takes a pointer to some bytecode that will change kernel state. And while we’re at it, a pointer to a list of which file descriptors to share with the parent. Increment refcounts only for the listed descriptors.

                                                                                                              2. Have a spawn() syscall and also keep execve(). You spawn a wrapper process that will do all the kernel state manipulation and then execve() the final target.

                                                                                                              1. 3

                                                                                                                There’s a spawn() function which takes a pointer to some bytecode that will change kernel state

                                                                                                                This is more or less how posix_spawn is implemented on platforms that have a native implementation. The XNU implementation has some non-standard extensions that let you do things like change security context. It’s not really clear to me that this would be more ergonomic than allowing arbitrary code to run in a vfork context. FreeBSD doesn’t bother implementing posix_spawn in the kernel because it’s easy to implement in userspace: each of the spawn actions corresponds to a system call, so you just do them all in a row.
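
                                                                                                                For illustration, this is roughly what that looks like through the standard posix_spawn file-actions interface; the log path and command are placeholders and error handling is omitted:

                                                                                                                  #include <fcntl.h>
                                                                                                                  #include <spawn.h>
                                                                                                                  #include <sys/wait.h>
                                                                                                                  #include <unistd.h>

                                                                                                                  extern char **environ;

                                                                                                                  int main(void) {
                                                                                                                      posix_spawn_file_actions_t fa;
                                                                                                                      posix_spawn_file_actions_init(&fa);
                                                                                                                      /* Each action is one ordinary syscall, replayed in order in the
                                                                                                                       * child before the exec: open(), then dup2(). */
                                                                                                                      posix_spawn_file_actions_addopen(&fa, STDOUT_FILENO, "/tmp/out.log",
                                                                                                                                                       O_WRONLY | O_CREAT | O_TRUNC, 0644);
                                                                                                                      posix_spawn_file_actions_adddup2(&fa, STDOUT_FILENO, STDERR_FILENO);

                                                                                                                      char *argv[] = { "echo", "hello", NULL };
                                                                                                                      pid_t pid;
                                                                                                                      int err = posix_spawn(&pid, "/bin/echo", &fa, NULL, argv, environ);
                                                                                                                      posix_spawn_file_actions_destroy(&fa);
                                                                                                                      if (err == 0)
                                                                                                                          waitpid(pid, NULL, 0);
                                                                                                                      return err;
                                                                                                                  }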

                                                                                                                And while we’re at it, a pointer to a list of which file descriptors to share with the parent. Increment refcounts only for the listed descriptors.

                                                                                                                Something like this is top of my list for unprivileged process creation: being able to provide an array of pairs of file descriptors and where they should go in the child process’s FD table. This would avoid the little sequence I have to do today: dup all of the file descriptors until none of them are in the range I want to set, dup2 them into the right place, and then closefrom all of the others.
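
                                                                                                                That dance looks roughly like the sketch below (an illustrative helper, not a real API; error handling omitted, and closefrom is available on the BSDs, Solaris, and recent glibc):

                                                                                                                  #include <unistd.h>

                                                                                                                  /* Place fds[i] at slot i for i in [0, n) and close everything else:
                                                                                                                   * the dup / dup2 / closefrom sequence described above. */
                                                                                                                  static void place_fds(int *fds, int n) {
                                                                                                                      /* 1. dup every source fd until it sits above the target range,
                                                                                                                       *    so a later dup2 cannot clobber a not-yet-moved source. */
                                                                                                                      for (int i = 0; i < n; i++)
                                                                                                                          while (fds[i] < n)
                                                                                                                              fds[i] = dup(fds[i]);
                                                                                                                      /* 2. dup2 each one into its final slot. */
                                                                                                                      for (int i = 0; i < n; i++)
                                                                                                                          dup2(fds[i], i);
                                                                                                                      /* 3. close every descriptor above the populated range. */
                                                                                                                      closefrom(n);
                                                                                                                  }

                                                                                                                You would call something like place_fds((int[]){sockfd, logfd, errfd}, 3) in the child just before execve; the proposed array-of-pairs API would replace all of this with one argument to the process-creation call.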

                                                                                                                Have a spawn() syscall and also keep execve(). You spawn a wrapper process that will do all the kernel state manipulation and then execve() the final target.

                                                                                                                I think the thing missing from this is an madvise (or similar) flag to preserve some bits of the memory map on exec. I would love to be able to create a shared mapping by mapping anonymous memory in the parent, doing something like mshare(base, length) and having that range preserved across execve.
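
                                                                                                                To make that idea concrete, here is how the hypothetical mshare call from the paragraph above might be used; mshare does not exist today, and the child binary path is a placeholder:

                                                                                                                  #include <sys/mman.h>
                                                                                                                  #include <unistd.h>

                                                                                                                  int main(void) {
                                                                                                                      /* Parent creates an anonymous shared mapping... */
                                                                                                                      size_t len = 1 << 20;
                                                                                                                      char *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                                                                                                                        MAP_SHARED | MAP_ANON, -1, 0);
                                                                                                                      base[0] = 42;

                                                                                                                      /* ...and would then ask for it to be preserved across exec:
                                                                                                                       * mshare(base, len);   <-- hypothetical, does not exist today */

                                                                                                                      pid_t pid = vfork();
                                                                                                                      if (pid == 0) {
                                                                                                                          char *argv[] = { "child", NULL };
                                                                                                                          char *envp[] = { NULL };
                                                                                                                          execve("/usr/local/bin/child", argv, envp);  /* placeholder */
                                                                                                                          _exit(127);
                                                                                                                      }
                                                                                                                      /* With the proposed semantics the child would still see the same
                                                                                                                       * pages mapped at `base` after execve, giving parent and child a
                                                                                                                       * shared region without needing a file or a named shm object. */
                                                                                                                      return 0;
                                                                                                                  }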

                                                                                                                That said, I’m not a huge fan of execve as a kernel API. Requiring the kernel to understand ELF and map segments of a binary feels like a poor separation of concerns. I’d love to move all of this into userspace. This would require extending procctl to set the other bits of the ABI (signal frame layout, system call table to use) and an equivalent of closefrom for virtual memory: something to unmap everything that wasn’t explicitly set as preserved in the child context. You might want to be able to set some mappings as preserved (or, ideally, created) only within the child context, so that you didn’t need to map things and then unmap them after vfork returned and so that you could map some things in addresses that are already in use. With this, you could then set up the newly-created child’s stack, load its binary (and the run-time linker if it’s dynamically linked), and then drop mappings that the parent had and run from there.

                                                                                                                1. 1

                                                                                                                  That said, I’m not a huge fan of execve as a kernel API

                                                                                                                  Agree with you on this, but I think you then have to lose a bunch of weird Unix features like suid and sgid executables and setcap, for which the kernel taking part is effectively mandatory?

                                                                                                                  I’m open to arguments that suid/sgid and friends are not things we want to keep in future anyway.

                                                                                                                  Other than that it would be cute to do all the loading of the new program into memory in user space instead.

                                                                                                              2. 1

                                                                                                                The first (which is the direction VMS / NT went) is conceptually cleaner. You have a kernel API that creates a new empty process and returns a capability to it (which Windows calls a handle). You can use this capability to do things like map files or anonymous memory objects into the process, write initial state into that memory (e.g. stack contents), inject threads, and so on. The big disadvantage of this is that it requires remote versions of all of your APIs

                                                                                                                This is what I’ve built for Linux in https://github.com/catern/rsyscall

                                                                                                                1. 3

                                                                                                                  I don’t see the kernel module. Are you just executing the syscall via an RPC in the child? This doesn’t address the problem, because you want to be able to do this in a completely empty process (for bootstrapping the first bit of code and initial stack) and you need to be able to do it with a different set of privileges. The main context in which the NT-style interfaces are useful today is sandboxing, where a more-privileged process is acting on behalf of the process that wants to do the system call. You can kind-of do this with *NIX APIs (in fact, I have) by using the ability to send file descriptors over UNIX domain sockets, but it’s not portable. In post-5.13 kernels, Linux has a mechanism in seccomp-bpf that lets you intercept a syscall that returns a file descriptor from another process, handle it, and provide the returned file descriptor, but it’s incredibly difficult to do securely (avoiding time-of-check-to-time-of-use errors requires a copy, which requires multiple domain transitions and ends up being slower with kernel help than my pure-userspace code), and it’s only useful if you want to run unmodified programs under seccomp-bpf.
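
                                                                                                                  The UNIX-domain-socket mechanism referred to here is SCM_RIGHTS. A minimal sketch of the sending half, assuming an already-connected AF_UNIX socket; the receiving side (recvmsg with the same cmsg dance) and error handling are omitted:

                                                                                                                    #include <string.h>
                                                                                                                    #include <sys/socket.h>
                                                                                                                    #include <sys/uio.h>

                                                                                                                    /* Send descriptor fd over an already-connected AF_UNIX socket. */
                                                                                                                    static int send_fd(int sock, int fd) {
                                                                                                                        char byte = 0;
                                                                                                                        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
                                                                                                                        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
                                                                                                                        memset(&u, 0, sizeof(u));

                                                                                                                        struct msghdr msg = { 0 };
                                                                                                                        msg.msg_iov = &iov;
                                                                                                                        msg.msg_iovlen = 1;
                                                                                                                        msg.msg_control = u.buf;
                                                                                                                        msg.msg_controllen = sizeof(u.buf);

                                                                                                                        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
                                                                                                                        cmsg->cmsg_level = SOL_SOCKET;
                                                                                                                        cmsg->cmsg_type = SCM_RIGHTS;    /* the payload is a descriptor */
                                                                                                                        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
                                                                                                                        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

                                                                                                                        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
                                                                                                                    }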

                                                                                                                  Even with all of this hoop-jumping you can’t, for example, create a read-write mapping of a 4 KiB window into a file that the unprivileged process shouldn’t have access to. You have to pass it the file descriptor with read-write permissions and then do mmap in the unprivileged process. At this point it can map any location and size within the file. In contrast, Win32’s MapViewOfFile3 lets you do exactly this, as long as you have a handle to the unprivileged process that you can pass to the second argument. Similarly, VirtualAlloc2 allows you to map anonymous memory into a process’ address space so that you can create the initial mapping for the first thread’s stack.

                                                                                                                  1. 3

                                                                                                                    Are you just executing the syscall via an RPC in the child?

                                                                                                                    Yes. That’s what a kernel module would do too, effectively (send a syscall to a process and wait for that process to execute it, possibly yielding to it). That’s the only viable way to add this to a Unix kernel - the kernel just isn’t set up for the alternative (modifying a process structure without running kernel code in that process’s context).

                                                                                                                    This doesn’t address the problem, because you want to be able to do this in a completely empty process (for bootstrapping the first bit of code and initial stack)

                                                                                                                    Not necessarily. What you can do is create a process which starts out sharing everything with its parent, and then gradually unshare things as you set them up. That’s equivalent; note that even on NT the new process is not “completely empty”: it inherits many things from the parent process, like security contexts.

                                                                                                                    For initial address space set up, there’s exec. I agree it would be neat if there was a Unix API that allowed you to switch between and create fresh address spaces other than with exec; then you wouldn’t need exec, you could do it in userspace. But exec is the API that Unix has for switching and creating fresh address spaces - something more flexible isn’t absolutely necessary.

                                                                                                                    Even with all of this hoop-jumping you can’t, for example, create a read-write mapping of a 4 KiB window into a file that the unprivileged process shouldn’t have access to. You have to pass it the file descriptor with read-write permissions and then do mmap in the unprivileged process.

                                                                                                                    Sure, of course. The Unix API is based around files. If you want to give a process access to a piece of memory, you have to give it access to that file. There’s no ability to give it access to only part of a file - that’s a different feature, and if we want that, it should be an orthogonal feature. Does the Win32 API let the process then send that mapping on to child processes or other processes? If yes, then it’s just a feature that Unix lacks, since Unix doesn’t have only-part-of-file capability file descriptors. If no, then it’s not a good feature - delegation is fundamental for allowing abstraction, and this is undelegatable.

                                                                                                              3. 12

                                                                                                                This isn’t a critique of fork() from Microsoft – it’s a critique of fork() by four operating systems researchers, one of whom is affiliated with Microsoft. Would your comment be different if the submitter had used a different URL for the paper, like this one (ETH is a university in Switzerland, where the fourth author works): https://people.inf.ethz.ch/troscoe/pubs/hotos_fork.pdf

                                                                                                                The paper discusses how fork came to be, providing insight into what the small community at the time thought was a good design.

                                                                                                                1. 6

                                                                                                                  It’s also a style that differs wildly from a lot of real world Linux/modern *nix usage (including most cases that are meaningfully sensitive to process spawning overhead). The authors do acknowledge (in section 3) that this was a reasonable design for its original use case in its original context. But that was a context with different hardware, different available libraries/other os abstractions, different demands from user level programs, etc.

                                                                                                                  But this also isn’t just a lazy internet rant; it’s a pretty thorough exploration of the topic, including alternatives (they point out that rfork()/clone() solves some problems, but not others). I find it disappointing to see a pretty thoughtful paper dismissed with “bah, Microsoft.”

                                                                                                                  Which, fwiw, only even applies to one of the authors. Jonathan Appavoo was my advisor as a grad student and I later worked for Orran Krieger for a couple years; I can attest that they are not “Microsoft people.”

                                                                                                                  1. 0

                                                                                                                    You’re missing the point. fork() is a privilege. It can’t be used if you work in an environment where devs are drowning in technical debt that was shoveled downstream for decades, because your privilege was taken away. Don’t let them tell you that’s modern or that it’s good for us.

                                                                                                                    1. 9

                                                                                                                      fork() is a privilege.

                                                                                                                      If the paper is correct in arguing that fork() isn’t an inspired design, but merely a clever hack that was good for its time, then it follows that it’s not a privilege to be enshrined and defended against encroaching complexity, but merely a tool to be used when it made sense and dropped now that it doesn’t.

                                                                                                                      I think the last time I used fork() and the exec family was in a moderately complex C program in the early 2000s. Granted, this program was multithreaded, so it probably doesn’t live up to the platonic Unix ideal. But hey, we were trying to write a working program in a reasonable time (as a side project), not show true devotion to the Unix Way. And I’m pretty sure we had a few bugs related to process spawning. So I don’t miss fork() in the least; I’m happy to use a higher-level process spawning API that avoids all the footguns of fork() and exec and works smoothly on Windows.

                                                                                                                      See also: Free Your Technical Aesthetic from the 1970s

                                                                                                                  2. 4

                                                                                                                    I just want to note that this isn’t only a Microsoft “extinguish” thing. OS research in recent years is very Unix-centric (even primarily Linux-centric), as if we have forgotten that OSes other than Linux exist. Mothy addressed this at this year’s OSDI as well. I’m not saying that Microsoft didn’t have any sneaky intentions with this paper, but there is a valid critique here.

                                                                                                                  1. 12

                                                                                                                    Yes, cat | grep is a “useless use of cat”, but it allows me to quickly change the grep regex, which is much more annoying if I have to navigate past the filename first.

                                                                                                                    1. 8

                                                                                                                      I agree, and I think it’s kind of stupid to try to avoid cat outside of scripts, but that said, there is a way to solve this specific problem:

                                                                                                                      <file grep regex
                                                                                                                      
                                                                                                                      1. 1

                                                                                                                        That’s cool. I think I’ve read about this shell syntax before in man pages but never saw the use for it. Now I do!

                                                                                                                        1. 1

                                                                                                                          I’ve always liked that syntax, especially if you have a redirection at the end of a follow-on pipeline (<foo | bar >baz ) but it can break in bash with some shell constructs in the pipeline (some uses of while)

                                                                                                                          1. 2

                                                                                                                            I think you mean

                                                                                                                            <foo bar | baz >qux
                                                                                                                            

                                                                                                                            I too find this pattern useful, specifically for troff pipelines.

                                                                                                                        2. 4

                                                                                                                          “Useless use of cat” is usually bad advice, anyhow.

                                                                                                                          As far as I can tell, it only matters for performance-sensitive scripts. If someone’s just hacking around on the command line, cat ... | ... is extremely useful since it allows quick switching out of inputs. Let’s say I’ve got some_expensive_script.sh | ... – better to capture the output to a file, run cat output.txt | ... and build up the pipeline until it works, then swap back in the expensive script call when I’m ready.

                                                                                                                          (Also, I very often end up accidentally writing < output.txt | grep ... because I forget to delete the pipe, and so grep gets nothing! This leads to a great deal of frustrated debugging until I notice it. Best to just keep it a pipeline while developing rather than adhering to some random internet person’s notions of terminal purity.)

                                                                                                                        1. 4

                                                                                                                          lib/pq: An early Postgres frontrunner in the Go ecosystem. It was good for its time and place, but has fallen behind, and is no longer actively maintained.

                                                                                                                          Latest release was 6 days ago: https://github.com/lib/pq/releases/tag/v1.10.3, so it seems that it is still maintained.

                                                                                                                          1. 6

                                                                                                                            The README explicitly says:

                                                                                                                            This package is currently in maintenance mode. For users that require new features or reliable resolution of reported bugs, we recommend using pgx which is under active development.

                                                                                                                            1. 8

                                                                                                                              “In maintenance mode” does not mean that it is not actively maintained.

                                                                                                                              1. 5

                                                                                                                                I would argue that the statement “Maintainers usually do not resolve reported issues” does mean it’s not actively maintained.

                                                                                                                                That being said, this is probably getting into the semantics of what “actively maintained” means. For me, it means there’s active development and that reported issues will be resolved, neither of which seem to be the case for lib/pq at the moment.

                                                                                                                          1. 16

                                                                                                                            I am a bit reassured that I am not the only one thinking that the numerous claims of “blazingly fast” Rust lib/apps can be a bit annoying. ^^

                                                                                                                            1. 27

                                                                                                                              93°C on all cores is certainly blazing.

                                                                                                                              1. 2

                                                                                                                They might run fast, but compiling them is painfully slow. My 2017 (I think) dual-core XPS takes ages to compile anything with GUI elements or anything even moderately complicated. On my desktop machine (5900X) it’s bearable.

                                                                                                                              1. 7

                                                                                                                                I think one lesson is “Don’t trust any performance comparison that doesn’t accept pull requests.”

                                                                                                                                1. 9

                                                                                                                  “Don’t trust a statistic which you haven’t manipulated yourself” is a (paraphrased) quote attributed to Churchill (decidedly not German, as I thought before).

                                                                                                                                  Doing benchmarks correctly is so hard that I would default to “suspicious until proven otherwise” when looking at literally any result.

                                                                                                                                1. 2

                                                                                                                  The first idea that popped into my head is that this is only applicable if the thing in question is constructible. Non-existence is not constructible by its nature: I can prove to you that no decision procedure for the halting problem exists, but I cannot show this constructively.

                                                                                                                                  1. 1

                                                                                                                    I would really like to learn more about this. Aren’t Coq proofs constructive by nature?

                                                                                                                                    Here is a proof that the halting problem for turing machines is undecidable: https://github.com/uds-psl/coq-library-undecidability/blob/30d773c57f79e1c5868fd369cd87c9e194902dee/theories/TM/TM_undec.v#L8-L12

                                                                                                                                    There are other proofs in that repo that show other problems as being undecidable.

                                                                                                                                    1. 2

                                                                                                                                      Huh, TIL. Indeed, it seems that there exist proofs of the undecidability of the halting problem that are constructive.

                                                                                                                                      ¬ (A ∧ ¬ A) is apparently provable in intuitionistic logic, which suffices for diagonalization arguments:

                                                                                                                                      lemma "¬ (A ∧ ¬ A)"
                                                                                                                                      proof (rule notI)
                                                                                                                                        assume 1: "A ∧ ¬ A"
                                                                                                                                        from 1 have "A" by (rule conjE)
                                                                                                                                        from 1 have "¬ A" by (rule conjE)
                                                                                                                                        from `¬ A` `A` show False by (rule notE)
                                                                                                                                      qed
                                                                                                                                      

                                                                                                                                      The general point still stands though, as there are other examples, such as non-constructive existence proofs.


                                                                                                                      In general Coq proofs are not necessarily constructive, since one can assume the law of the excluded middle as an axiom in Coq (and this is done in e.g. Coq.Logic.Classical). I can’t say anything about the proofs you linked though, as my Coq-fu is very limited.

                                                                                                                                      1. 1

                                                                                                                                        I think, the way the definitions expand in that repo, that undecidable (HaltTM 1) is decidable (HaltTM 1) -> decidable (HaltTM 1), which is trivially true. That is, it’s taken as an axiom that Turing machines are undecidable. (I think? I might be misreading)

                                                                                                                                      2. 2

                                                                                                                        While Coq proofs are constructive, proving “~ exists t, t determines halting” does not construct any Turing machines; what it constructs is something that takes a hypothetical existence proof and builds a proof of False from it.

                                                                                                                        I.e. in constructive math, ~ P is a function P -> False. This function can never be invoked, as you can never build a P object.
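
                                                                                                                        The same point in Lean 4 syntax, purely as an illustration (not tied to the Coq development linked above):

                                                                                                                          -- Constructively, a proof of ¬ A is literally a function A → False:
                                                                                                                          example (A : Prop) (ha : A) (hna : ¬ A) : False := hna ha

                                                                                                                          -- A proof of ¬ ∃ x, P x consumes a hypothetical witness;
                                                                                                                          -- it never has to exhibit one.
                                                                                                                          example (P : Nat → Prop) (h : ∀ x, ¬ P x) : ¬ ∃ x, P x :=
                                                                                                                            fun ⟨x, hx⟩ => h x hx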

                                                                                                                                      3. 1

                                                                                                                                        As this post was tagged practices (not math) and the article addresses the difficulty of convincing people, I suspect you’ve misunderstood the author’s use of the phrase “constructive proof.”

                                                                                                                                        Mathematical (constructive proof) vs rhetorical (constructive criticism).

                                                                                                                                        1. 1

                                                                                                                          I don’t feel like the author talks about constructive criticism at all; how did you come to that conclusion?

                                                                                                                                          1. 1

                                                                                                                                            As this post was tagged practices (not math) and the article addresses the difficulty of convincing people

                                                                                                                                            ☝🏾

                                                                                                                                            1. 2

                                                                                                                                              Just for future reference, “constructive proof” is a term from math and I meant my usage of the term to be an analogy to math. Math also involves convincing people. But maybe I should have tagged it computer science or math, sorry.

                                                                                                                                      1. 5

                                                                                                                                        Some thoughts on horizontal overflow:

                                                                                                                                        If you ever have trouble tracking down what element is causing overflow, add this rule in the browser inspector:

                                                                                                                                        * { outline: thin solid red }
                                                                                                                                        

                                                                                                                                        This makes the bounding boxes of every single element visible. Very helpful.

                                                                                                                        The author mentioned using word-break: break-word. I have an open question on that rule: is there any reason not to set it on the body on most sites? I’ve gone on quests to remove scrolling from user-entered content, and I end up setting word-break in a ton of different places. I’m not sure of the trade-offs of just setting it globally.

                                                                                                                                        1. 2

                                                                                                                                          Some googling reveals that break-word is not CJK-aware, but I’m no expert myself. I only publish English content, so this is fine for me :)