Threads for saturn

  1. 10

    This is very hard to do because of liability. Any hiring decision is going to involve some subjective component but a candidate who is rejected may be able to take you to court if there is bias in the process and this bias is the reason that they are rejected. The more information that you give them, the easier it is for them to substantiate a claim of bias especially if you put it in writing.

    In my ideal world, we’d be able to define a set of objective criteria for the people that we want to hire and design an objective assessment that let us rank our candidates based against this metric with a reasonably high level of confidence. The industry has a very long way to go before we’re there: even the first step of informally defining what we want is hard (what three characteristics are most important for a good software engineer?). As a result, almost every hiring decision is going to be open to accusations of bias.

    Given how much companies spend on hiring, I wish they’d spend more on identifying the measurable skills and characteristics that correlate with good performance. This is hard to do, though, because the current set of employees are there as the result of a biased selection process, and so it’s hard to find the things with a causal relationship to performance. A few years back, Amazon tried to filter submitted CVs with an AI model trained on their existing staff and found that it was rejecting qualified women, because there weren’t many women on their engineering staff.

    1. 12

      I tried giving feedback about ten years ago. Almost every ex-candidate took it as an invitation to demonstrate that I was wrong. The worse the candidate had performed, the more insistent they became.

      On the other hand: I workshopped a want ad for a sysadmin with a bunch of current and former sysadmins who did not work for me. That was an incredibly helpful process.

      1. 7

        A few candidates do take rejection rather badly, and with a few I felt they were baiting our patient HR person so that she would lose her temper and let something slip, and then they would have something over us. Fortunately, she vented to us (the engineering team doing the interviews) about their rudeness instead.

        “So hard not to engage,” was how she put it, and she was right, because the emails were full of button pushing.

        Thing is, it made us even more convinced that the rejection was correct, because we would be working with that person. Lack of technical knowledge you can train, but a bullying character is harder to accommodate and more harmful to the team.

        That said, feedback is extremely useful, not only to newer candidates, and it is a signal both ways.

        I’m no tech genius, but I have experience working with version control systems, continuous integration, debuggers, etc. - things you just get to know when you’ve been coding for a decade or two. I once got feedback from a third-party recruiter about an interview (this was back when the job market was hot, and recruiters were pampering candidates a bit by calling back after rejections too). The feedback was that I was terrible at everything related to software.

        If I were more insecure/inexperienced, I would have panicked and questioned my life choices, but having been around the block a bit, I realized it meant either that a) it was a terrible fit or b) the person doing the interview was not someone I’d want to work with (same as a, actually).

        1. 3

          “So hard not to engage,” was how she put it

          And on the other side: I’ve only received feedback after one rejection, and it was useful, but it also made me realize an interviewer had misinterpreted one situation. It was so hard to not try to correct the record!

        2. 1

          Yeah, people who can receive the feedback are the ones you probably didn’t reject.

        3. 1

          what three characteristics are most important for a good software engineer?

          Patience, simplicity and compassion… it’s in the dao.

        1. 2

          Picture quality isn’t my biggest complaint about video conferencing. I bet there’s big money to be made with a video transform filter that makes it look like you’re looking into the cam as if the cam were in the middle of your screen.

          1. 6

            Apple has this in FaceTime (a feature called Eye Contact).

            1. 1

              Microsoft also has this on their Surface devices.

            2. 2

              There was a successful crowdfunding project for the Center Cam in 2021 to solve this problem with hardware. A software filter would be cool too, but I think it’d also be pretty creepy, because it won’t look quite right.

              1. 1

                It’s funny, I hear people talk about this, but… I can’t even tell the difference, unless someone has a wildly different camera setup like those weird laptops with the camera on the main body.

              1. 5

                Did you consider peaches, which are also different from apples?

                1. 17

                  I hear fuzzing is even easier with peaches than with apples.

                  1. 7

                    No, a teammate said that they read that peaches have huge seeds, which sounds like a worse apple.

                    1. 14

                      Depending on your problem, a large peach monoseed might be much more manageable than apple style microseeds. YMMV

                  1. 20

                    I imagine this means that if you had e.g. Disqus embedded on a bunch of sites, you’d need to log into Disqus in each one. Is that correct?

                    (I think I’d be fine with that. Just curious what the user-visible effects are.)

                    1. 15

                      Yes. And Like-buttons will also break.

                      1. 6

                        Thankfully those seem to have gone out of fashion somewhat?

                        It’s kind of ironic that the centralization / silo-ification of the web (“people just stay on facebook all the time and don’t care about interacting with facebook from embedded widgets on random articles”) is making this amazing privacy improvement palatable for mainstream users.

                      2. 10

                        i admittedly have a very limited understanding of browser technologies, but everything described in the section on what they’re doing was how i imagined cookies already working in my head. i’m kind of … used to being horrified by browsers, by now, but yeah, learning how things used to work was an eye-opening lesson in how awful most browsers are. holy shit.

                        1. 10

                          In a better world, the way “things used to work” is how you’d want them to work. Shareable cookies do add value, they’re just very easy to abuse. I also don’t think this technically limits the tracking, though it may require it to make more network requests; it’s hard to stop two cooperating websites from communicating in order to track you, and adtech tracking is hosted by cooperating websites.

                          1. 1

                            I don’t understand why it couldn’t be a permission. « xxx.com wants to access some of your data from yyy.com [Review][Allow][Block] »

                            1. 1

                              Well, it’s more that xxx.com wants to access your data from xxx.com, but one xxx.com is direct and one is embedded in yyy.com’s page. The point I’m making is that this is impossible to block if yyy.com and xxx.com are working together, which in the context of ads they always are. As one possible “total cookie protection” break, yyy.com could set a cookie with a unique tracking ID specific to yyy.com, redirect to xxx.com with the unique tracking ID as a URL parameter, and have xxx.com redirect back to it. Your xxx.com and yyy.com identities are now correlated, and neither site had to do anything browsers could reasonably block.
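That redirect trick can be sketched in a few lines, with each site’s first-party cookie jar simulated as a plain dict (everything here is illustrative; no real network traffic is involved):

```python
import secrets
from urllib.parse import urlencode, parse_qs, urlparse

# Each site's first-party cookie jar; the browser never shares
# cookies between the two sites.
cookies = {"yyy.com": {}, "xxx.com": {}}

def visit_yyy():
    # yyy.com sets (or reuses) a first-party tracking ID, then
    # redirects the browser to xxx.com with the ID in the URL.
    tid = cookies["yyy.com"].setdefault("tid", secrets.token_hex(8))
    return "https://xxx.com/sync?" + urlencode({"partner_id": tid})

def visit_xxx(redirect_url):
    # xxx.com reads the partner's ID out of the URL and stores the
    # pairing in its own first-party jar, then redirects back.
    partner_id = parse_qs(urlparse(redirect_url).query)["partner_id"][0]
    cookies["xxx.com"]["partner_id"] = partner_id
    return "https://yyy.com/"

visit_xxx(visit_yyy())
# The two identities are now correlated without any cross-site cookie:
assert cookies["xxx.com"]["partner_id"] == cookies["yyy.com"]["tid"]
```

No cookie ever crossed a site boundary; the correlation rode along in an ordinary URL parameter, which is why partitioned cookies alone can’t stop cooperating sites.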

                        2. 9

                          As a developer working for a company that makes an embedded video player that’s used across the internet: this semi-breaks some user preferences, like remembering volume and preferred caption language — now they have to be set per embedding domain instead of applying globally when they’ve been set once.

                          And it thoroughly breaks our debug flags: during a tech support conversation, we can have users enable or disable certain features to track down where a bug is coming from. The UI for that is a page on our domain (the domain the embed is served from). Now users can set those flags, but they won’t actually do anything, because they won’t be readable on the domain where they’re really needed.

                          We could possibly move the UI for that inside of the embed to make it work again, but A) it would look and feel bad, and B) it probably won’t happen for a browser with a 3% share.

                          The Storage Access API offers very little help in this context: we can’t have the player pop up a permission request dialog for every user on every player load just to check whether they even have a debug flag set, so there would have to be some kind of hidden “make debugging work right” UI element that would trigger the request.

                          1. 6

                            Disclaimer: I trust you know this far better than I do, I’m just curious.

                            I can see how this Firefox feature breaks that functionality, and it sounds like unfortunate collateral damage.

                            For volume control, is that better handled by either the browser or the OS anyway?

                            For their preferred caption language, can the browser’s language be inferred from headers?

                              If a user wishes to override their browser’s language, it sounds plausible that this should be at the domain level anyway. Perhaps I want native captions on one site, and foreign captions on a site I’m learning a language from?

                            And it thoroughly breaks our debug flags: during a tech support conversation, we can have users enable or disable certain features to track down where a bug is coming from. The UI for that is a page on our domain (the domain the embed is served from). Now users can set those flags, but they won’t actually do anything, because they won’t be readable on the domain where they’re really needed.

                            How does Safari handle this?

                            1. 2

                              For volume control, is that better handled by either the browser or the OS anyway?

                              Arguable. Browsers don’t do anything helpful that I know of, and the OS sees the browser as one application.

                              For their preferred caption language, can the browser’s language be inferred from headers?

                              We default to the browser language (which generally defaults to the OS language) but there are reasons why some users tend to select something different for captions. It’s not the end of the world, it’s just annoying.

                              How does Safari handle this?

                              I’m unsure, sorry. I don’t see a ticket about it, and I don’t have any Safari-capable devices on hand.

                            2. 1

                              Interesting, thank you. The caption and volume preferences thing sounds annoying. But on the other hand, it won’t be any worse for you than it is for your competitors which is… something, at least.

                              You may want to take a look at how YouTube and Brightcove (off the top of my head) handle the debug part of this – right-clicking on a video provides all sorts of debug and troubleshooting information.

                              1. 2

                                We have that too, but it’s a different feature. We didn’t put the controls in there because we can give them a nicer presentation if they’re not stuck inside of an iframe :)

                          1. 3

                            If the CNCF’s landscape is becoming increasingly and predominantly declarative, shouldn’t YAML validation (maybe json too) become a part of its remit to support the landscape across the board?

                            1. 8

                              I think the real solution is to move away from YAML altogether. It’s not actually human-writable. Every time I need to produce some YAML, I have to begin by copy-pasting a block because its rules are so inscrutable. A better solution is something like what Caddy does: use JSON as the source-of-truth language and then write adapters that translate some DSL into JSON. That gives you a better DX on both sides: as a producer, you get an actually human-writable syntax, and as a consumer, you can consume JSON in any language with just the standard library, with no weird language quirks and security gotchas.
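The adapter idea can be illustrated with a toy: a made-up one-directive-per-line DSL translated to JSON (nothing like Caddy’s real adapters, which are far more involved — the directive names below are invented):

```python
import json

def adapt(dsl: str) -> str:
    """Translate a made-up 'directive value' DSL into JSON.

    Repeated directives collect into a list; '#' starts a comment.
    """
    config = {}
    for raw in dsl.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        if key in config:
            prev = config[key]
            config[key] = prev + [value] if isinstance(prev, list) else [prev, value]
        else:
            config[key] = value
    return json.dumps(config)

print(adapt("""\
# hypothetical server config
listen :8080
root /srv/www
header X-Frame-Options
header X-XSS-Protection
"""))
# → {"listen": ":8080", "root": "/srv/www", "header": ["X-Frame-Options", "X-XSS-Protection"]}
```

Humans write the DSL; machines only ever consume the JSON, so every downstream tool gets a boring, unambiguous format.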

                              1. 8

                                Maybe we should all go back to XML? Hard to get wrong. :-)

                                1. 2

                                  XML has its own problems. CDATA makes it hard to parse, on one end of the spectrum, and on the other there are a lot of ways to say very similar things: presence/absence of a node, having something as an attribute or as a child, everyone doing booleans differently, etc. And then there is the standards stuff, like documents being sometimes UTF-8, sometimes UTF-16, and implementations sometimes not allowing the latter (XMPP, etc.).

                                  1. 1

                                    React JSX is proof that XML is a good language for writing a DOM, with some helpers. It’s just a terrible language for data serialization.

                                  2. 4

                                    I think the real solution is to move away from YAML altogether. It’s not actually human-writable.

                                    Guess I’m not human.

                                    (It did take me a couple weeks before I got comfortable with the indentation model.)

                                    1. 3

                                      Maybe I just need to sit down and actually read the spec for indenting instead of guessing how it works based on prior examples. But for a configuration language, why should you need to read something to figure it out? Why not have something that’s obvious?

                                    2. 3

                                      What bits of YAML do you find inscrutable? I hand-write syntactically correct YAML all the time. The YAML syntax that maps directly to JSON syntax (lists, objects, primitive types) doesn’t seem too inscrutable to me; it only gets weird if I use things like anchors.

                                      1. 3

                                        If you have a nested list of objects and a pipe, how many spaces does the string need? How many spaces are in the result?

                                        1. 3

                                          You mean like this?

                                          a:
                                            b:
                                              c: |
                                                some
                                                 text
                                                here
                                              d: 1234
                                          

                                          Off the cuff, I believe the block needs to be indented at least one space more than the line with the pipe (c: |) and it will strip leading whitespace according to the indentation of the first line. So the JSON string equivalent of c would be "some\n text\nhere\n".

                                          I admit I have to glance at the docs for some of the other text block modes, but pipe is the one I use 99% of the time when I use text blocks at all, and I don’t remember its behavior ever surprising me.
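For what it’s worth, that stripping rule can be approximated in a few lines of Python (a deliberate simplification of the YAML spec’s block-scalar rules — it ignores chomping indicators, explicit indentation indicators, and tabs):

```python
def literal_block(lines):
    # YAML '|' literal block, simplified: the indentation of the first
    # line sets the reference level; anything indented deeper than
    # that is preserved as content.
    indent = len(lines[0]) - len(lines[0].lstrip(" "))
    return "".join(line[indent:] + "\n" for line in lines)

# The example above: c's block is indented 6 spaces, "text" one more.
print(repr(literal_block(["      some", "       text", "      here"])))
# → 'some\n text\nhere\n'
```

Which matches the reading above: the first line fixes the margin, and the extra space before `text` survives into the string.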

                                      2. 2

                                        At work we use the AWS CDK (Java in our case) [0] to do something along those lines. This is infinitely better than hand-editing YAML files. Even the terraform folks have a CDK now [1]. I think this is the right way forward.

                                        [0] https://aws.amazon.com/cdk/ [1] https://www.terraform.io/cdktf

                                        1. 1

                                          Working with CDKTF after working with AWS CDK is fairly annoying (you begin to understand how broken types in terraform are after you encounter your third integer typed as a string), but it’s still so much better than having to write HCL files. Having a programming language under you is just better than trying to cobble the config together by hand.

                                        2. 2

                                          And there’s TOML on one side and HCL/UCL (which translate to JSON) on the other, with the latter working just fine for DevOps people.

                                          YAML has good use cases. It’s great for static blogs and also seems to be nice to add some metadata to packages, etc.

                                          Judging by how often it comes up, it doesn’t seem to work so well for a lot of configuration scenarios, unless the configuration is super simple and straightforward, in which case space-separated key-value formats, like those in most UNIX rc files and most other languages, work just as well.

                                          I think YAML was basically the first thing people found when they looked for JSON with comments.

                                          1. 1

                                            It would be nice to have a minimalist YAML: something with comments, nested data structures, and literal string blocks.

                                            But with a simpler set of primitives (e.g. no implied dates or string/numeric-encoded variants).

                                        1. 9

                                          The article praises the decision to expose buffers to the end-user, and I agree that it’s powerful to be able to use the same commands in basically every context… but it leads to a lot of confusion and awkwardness that I think needs to be addressed as well. My case in point: Several times a week, I need to rename a file I’m editing. Here’s what it looks like:

                                          1. M-x rename-file
                                          2. Choose the file I want to rename
                                          3. Type in the new name and confirm
                                          4. M-x rename-buffer
                                          5. Enter the new name

                                          In a different text editor, it looks like this:

                                          1. (some keybinding or sequence)
                                          2. Enter the new name

                                          It could look like that in Emacs as well, but if you go looking around you find that it’s not in the core command set, and you have to dig up some script someone has already put together and then shove that into your init.el. Only then can you have workflow #2.

                                          Emacs knows which buffers are file-backed, and could very well offer a command like rename-file-buffer. I don’t know why it doesn’t, in the Year of Our Lord 2022. Maybe some bikeshedding over what counts as a file-backed buffer, or naming the function, or some internals thing I don’t know about. But it probably has something to do with “everything’s a buffer” and not trying too hard to smooth over that.

                                          1. 7

                                            While I agree with you about frustrations on awkward interfaces surrounding buffers, I’m not sure that I follow your example. For your example, I’d normally do

                                            1. C-x C-w (or M-x write-file)
                                            2. Enter the new name

                                            It seems like it follows your desired path, accomplishes your objectives, and only uses built-in commands and default keybindings? Is there something that I’m missing?

                                            1. 3

                                              This was my first thought. I gave saturn the benefit of the doubt here because C-x C-w copies the file. It doesn’t rename it. But both dired-rename-file and write-file accomplish what you want: changing the name of both the file and the buffer.

                                              1. 5

                                                The abundance of options is not necessarily a good thing. It hampers discoverability. I realize that saying things like that arguably makes me a bad emacs user, but we do exist.

                                                1. 2

                                                  True, but this is a case where the functionality is obvious, easy to code, and absent from the core product. I figure the reason this feature is absent is that core emacs already has two better ways to get the workflow done. I don’t remember when I discovered the write-file method; I’d bet it was early on in my use of emacs, though, so we’re talking early ’90s. I came to dired mode pretty late but learned very quickly how powerful it was.

                                                2. 2

                                                  write-file is good to know about! I still have to then use delete-file, but it is shorter.

                                                  1. 2

                                                    I agree. I used write-file for years before I discovered dired mode. I have to admit that in my case, the extra file hanging around is usually not a problem for me, but I use emacs as an IDE/code editor. Emacs is not a lifestyle for me.

                                                    1. 1

                                                      I always keep dired buffers of my working directories and clean up from there. Best dired key command might be ~ (flag all backup files).

                                                3. 4

                                                  That’s absolutely true and it’s interesting that they haven’t done this already.

                                                  How much do you want to bet that there aren’t a million and one “rename-buffer-and-file” functions floating around in an equal number of .emacs files? :)

                                                  For me, while I really, truly do appreciate the power and wisdom of having my editor also be a capable programming environment unto itself, I think exactly this kind of lack of polish is going to continue to erode emacs adoption over the long haul.

                                                  1. 7

                                                    Emacs not only knows when a buffer is attached to a file, it also does the right thing when it performs operations on the file from dired mode. Say I have a file I want to rename. I open the directory it’s in with dired by pressing C-x C-f RET from the buffer visiting the file. I press R, then fill in the new filename at the prompt. After the rename is finished, I press F and I’m taken back to the buffer visiting the file. Emacs renames the file and automatically renames the buffer, as you intended. Also note that the buffer contents never change; your position is exactly the same as before.

                                                  2. 4

                                                    That got me thinking. I use dired to rename files (with the R keybinding) and that does update the buffer names.

                                                    R is bound to dired-do-rename, which calls dired-rename-file, which calls set-visited-file-name on the buffers that are visiting the file.

                                                    1. 1

                                                      Ah! It sounds like dired is the thing I should have been using. I always wrote it off as a “power tool” for when you need to do heavier rearranging of files and directories – multi-renames, whatever – but maybe that’s what all the experienced users are actually doing for renames?

                                                      1. 1

                                                        dired is how I browse the filesystem to see what’s there.

                                                    2. 2

                                                      This doesn’t address the larger point, but it does address the pain in your workflow. You can achieve the same in one logical step by using dired’s writable mode (Wdired) to do the rename in the dired buffer.

                                                      1. C-x C-j (opens dired in the directory of the current buffer)
                                                      2. C-x C-q (makes the dired buffer editable via Wdired)
                                                      3. Edit the file name in question inside the dired buffer
                                                      4. C-c C-c (commits the changes)

                                                      As to why rename-file doesn’t also rename the buffers that are visiting that file, I’m guessing it’s because it’s written in C, and the code is already hairy enough without complicating it further with additional responsibilities.

                                                      Especially as there are edge cases. Buffers don’t have to have the same name as the file they are visiting, for example when you use clone-indirect-buffer-other-window, which some people use heavily in conjunction with narrowing. Should we rename all buffers visiting the file only where there is an exact match between the buffer and file name? What about when the file name is part of the buffer name, e.g. foo.txt<3> or foo.txt/fun-name<2>? I think it is a reasonable choice to have rename-file do only one thing and let users implement a more dwim version themselves.
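Those edge cases become concrete if you sketch the matching question; here is a rough Python stand-in for “which buffer names look like they visit this file” (the `<N>` and `/name<N>` patterns are the ones from the comment above; real Emacs tracks buffer-file-name directly rather than matching names, which is exactly why name-matching is the wrong dwim approach):

```python
import re

def buffers_visiting(filename, buffer_names):
    # Accept the bare file name plus Emacs-style uniquified or
    # indirect-buffer variants: "foo.txt<3>", "foo.txt/fun-name<2>".
    pat = re.compile(re.escape(filename) + r"(<\d+>|/[^<]+(<\d+>)?)?")
    return [name for name in buffer_names if pat.fullmatch(name)]

print(buffers_visiting("foo.txt",
                       ["foo.txt", "foo.txt<3>", "foo.txt/fun-name<2>", "bar.txt"]))
# → ['foo.txt', 'foo.txt<3>', 'foo.txt/fun-name<2>']
```

Any heuristic like this will have false positives and negatives, which supports the point: better to keep rename-file simple and let the caller decide which buffers to touch.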

                                                      1. 2

                                                        I wrote a function to do that (“some script someone has already put together”). Once my work signs Emacs’s employer disclaimer of rights, I’m going to try to get this into Emacs proper.

                                                        1. 1

                                                          This doesn’t address your actual point, but adding just in case it’s useful to someone. Pretty sure I stole this from Steve Yegge years ago

                                                          (defun rename-file-and-buffer (new-name)
                                                            "Rename both file and buffer to NEW-NAME simultaneously."
                                                            (interactive "sNew name: ")
                                                            (let ((name (buffer-name))
                                                                  (filename (buffer-file-name)))
                                                              (if (not filename)
                                                                  (message "Buffer '%s' is not visiting a file." name)
                                                                (if (get-buffer new-name)
                                                                    (message "A buffer named '%s' already exists." new-name)
                                                                  (progn
                                                                    ;; Rename on disk using the visited file's path,
                                                                    ;; not the buffer name (they differ for
                                                                    ;; uniquified buffers like "foo.txt<2>").
                                                                    (rename-file filename new-name 1)
                                                                    (rename-buffer new-name)
                                                                    (set-visited-file-name new-name)
                                                                    (set-buffer-modified-p nil))))))
                                                          
                                                          1. 1

                                                            Reading this made me realize that I can add a little function to my .emacs to do this (my strategy has tended to be to do it in a vterm session and then re-open the file, since I only need to do this once in a blue moon).

                                                            I do think there should be “a thing” (though questions like how it interacts with editing remote files would have to be answered). I do wonder how open GNU Emacs is to simple QoL improvements like that.

                                                          1. 7

                                                            So, given this is signed almost exclusively by FOSSphone devs, I’d assume this is in response to some WIP-shipping frustration in the FOSSphone community. Is there a particular event here?

                                                            1. 2

                                                              There’s gotta be, right? They’ve very carefully avoided naming the events and parties involved. I imagine this is for one of the following reasons:

                                                              1. Preserve goodwill with the involved parties
                                                              2. Avoid an internet shitstorm raining down on someone, unproductively
                                                              3. Focus attention on the general picture, rather than the specifics

                                                              But I think it’s a misstep to speak about all of this abstractly and not give examples, even fictional ones.

                                                              1. 2

                                                                I don’t know. The op was quizzed about that on twitter and gave a non-committal hand-wavy answer, sadly. https://twitter.com/calebccff/status/1533095367784505345

                                                                1. 6

                                                                  Well, the hand-wavy answer, and lack of name calling in the letter, is 100% on purpose.

                                                                  If there continue to be instances of distro(s) shipping WIP patches and users complaining to upstream about it, those will be called out more specifically in the future.

                                                                2. 2

                                                                  Perhaps it’s distros adopting experimental patches to e.g. GTK that FOSSphone devs are working on for better mobile input support?

                                                                  1. 1

                                                                    Hehe, I don’t know whether it’s related, but a recent apt-get upgrade on my Mobian-running PinePhone broke Chatty - the SMS / MMS client.

                                                                    1. 1

                                                                      It is not. IIRC that was a library issue; it was not caused by this (sorry, I can’t find the exact issue and fix now).

                                                                  1. 1

                                                                    I seem to recall a nice little Linux panel applet that would prevent screen lock for as long as the mouse was positioned over it. It would be nice to find that again to use when I’m cooking.

                                                                    Or this. I guess this works too!

                                                                    1. 2

                                                                      And similarly, this is why you can’t just transfer a repo to someone in GitHub – they have to agree to accept it as well.

                                                                      1. 3

                                                                        Similarly, on Reddit you can only invite someone to be a moderator of a subreddit; it’s up to them whether to accept the invitation. So you can’t just forcibly associate someone with a subreddit they don’t want to be part of.

                                                                      1. 2

                                                                        I’d say it’s also important to try to avoid breaking changes in the first place. Deprecate all you want, print warnings, all of that… but if you make a breaking change in a library, you’re always risking dependency hell for your dependents (and their dependents).

                                                                        Rich Hickey had a bold suggestion: If you want to change what do_foo does, just add a do_foo2 next to it and deprecate the old one. Not always applicable, but an approach I always keep in mind as an option.
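A minimal sketch of that pattern in Python (the behaviors of `do_foo` and `do_foo2` here are invented purely for illustration): the old function keeps working but emits a deprecation warning, and the revised behavior lives under a new name.

```python
import warnings

def do_foo(x):
    """Original behavior, kept intact for existing callers."""
    warnings.warn(
        "do_foo is deprecated; use do_foo2 instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return x * 2  # hypothetical old behavior

def do_foo2(x):
    """New, preferred behavior under a new name."""
    return x * 2 + 1  # hypothetical revised behavior
```

Existing consumers are never broken; they just see a warning nudging them toward the new name, and the old function can be removed at leisure (or never).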

                                                                        1. 1

                                                                          if you make a breaking change in a library, you’re always risking dependency hell for your dependents (and their dependents).

                                                                          As long as you bump the major version with each breaking change, there is no consequent hell for any consumer. Anyone depending on major version N remains entirely unaffected. When consumers upgrade from major version N to major version N+1, they must, of course, verify that their code continues to work as expected, which always requires explicit attention.

                                                                          If you want to change what do_foo does, just add a do_foo2 next to it and deprecate the old one. Not always applicable, but an approach I always keep in mind as an option.

                                                                          This approach optimizes for minimizing breaking changes, and therefore minimizing major version bumps. But it also results in APIs which are strictly worse than the alternative. If foolib v1 has 100 consumers, and barlib v1 has 100k consumers, is the cost of a breaking change equivalent for them both? If you are a new consumer, how much better is a foolib with only a single, “correct” do_foo function, versus one with a deprecated do_foo and a “correct” do_foo2?

                                                                          The cost or benefit of a breaking change is a function of a lot of variables. It’s not strictly negative.

                                                                          1. 1

                                                                            As long as you bump the major version with each breaking change, there is no consequent hell for any consumer.

                                                                            This is incorrect for many large codebases with complicated dependency trees. Take the situation where you have two dependencies, A and B, each of which depends on C. A starts depending on a newer major version of C due to a newer feature, but B still requires the old major version due to breaking changes. Now you’re stuck on this version of A, and getting farther out of date. Or worse, you’re trying to set up your dependencies in the first place and can’t find a compatible combination.

                                                                            This is a simplified situation. It gets worse in larger, more complicated dependency trees.

                                                                            (And what’s even worse is when you have pinned some dependency somewhere because of breakage, and then other things get pinned as a result, and then three years later you have to start unwinding the trail of pins and going through multiple version upgrades just to get everything back to latest.)

                                                                            So yeah, there are reasons to not make breaking changes in the first place, regardless of versioning schemes.

                                                                            (Maybe if you use Nix or something you can get out of this, since A and B can each have their own C…)
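A toy illustration of that diamond (package names and version requirements are all made up): with only one copy of C allowed in the tree, being forced onto the newer A leaves no compatible set at all.

```python
from itertools import product

# Hypothetical registry: each release records the major version of C it needs.
registry = {
    "A": {"1.0": 1, "2.0": 2},  # A 2.0 moved to C v2 for a new feature
    "B": {"1.0": 1},            # B still only works with C v1
}

def solutions(allowed_a):
    """All (A, B, C-major) combos where A and B agree on C's major version."""
    out = []
    for a, b in product(allowed_a, registry["B"]):
        if registry["A"][a] == registry["B"][b]:
            out.append((a, b, registry["A"][a]))
    return out

print(solutions(["1.0", "2.0"]))  # [('1.0', '1.0', 1)] -- staying on old A works
print(solutions(["2.0"]))         # [] -- forced onto A 2.0, no compatible set
```

Real resolvers search a vastly larger space, but the failure mode is exactly this: two well-behaved, correctly-versioned dependencies with no jointly satisfiable requirement on a shared transitive dependency.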

                                                                            1. 1

                                                                              I understand this scenario intimately 😉 but this is (a) an uncommon hell, and (b) of the consumer’s making. IMO it’s something that should influence the major-version-bump cost calculus, absolutely, but not dominate it.

                                                                              1. 1

                                                                                I’m not sure how you can say it’s of the consumer’s making. If I need to depend on A and B, it’s not my fault that they at some point each depend on incompatible versions of C.

                                                                                Anyway, there’s a middle ground: Announce the deprecation well in advance, emit warnings, allow a good amount of calendar time (and a few versions) to pass, and finally make the breaking change. This reduces the chances of problems.

                                                                                1. 1

                                                                                  I’m not sure how you can say it’s of the consumer’s making. If I need to depend on A and B, it’s not my fault that they at some point each depend on incompatible versions of C.

                                                                                  That’s true. But, in practice, when this happens, there are usually a lot of mitigating factors. There’s almost always some (prior) versions of A and B which have compatible requirements against C, which you can fall back to. If you’re prevented from using those versions for whatever reason — security vulnerabilities, etc. — then it’s usually just a short period of time before the maintainers of the dependencies update them to a compatible state. And even in the situation where this dep graph is truly unsolvable, you always have the fall-back option of forking and patching.

                                                                                  My point isn’t to diminish the pain introduced by the points you raise; it’s real! But I think these conditions are basically always pathological, usually quite rare, generally ephemeral, and can often be solved in a variety of ways.

                                                                                  I think it’s important that software authors can write and release software that delivers value for other people, without necessarily binding them to support each released version into perpetuity. Software is a living thing, no API is perfect, and no package can be expected to be perfect. The cost of a breaking change isn’t determined solely by the impact on consumers, it’s a complex equation that involves consumers, producers, scope, impact, domain, and a bunch of other variables.

                                                                                  1. 1

                                                                                    True, at work we’ve had to fork and patch a few times. It usually does work as a final escape valve.

                                                                        1. 9

                                                                          Either SemVer means something, or it doesn’t. I choose to believe that it does.

                                                                          I choose to believe it doesn’t.

                                                                          Or, well, more precisely, I believe that semver at most communicates developer intent about the changes that went into a release. Bumping a minor version number is equivalent to the claim that “we think this is backwards compatible”. Actually proving that a change is backwards compatible in practice can get rather difficult and I don’t believe it’s ever done.

                                                                          I had an idea a few years ago for a Go tool that would estimate a lower bound on the semver version bump between two versions of the same codebase, by looking at the exported types and functions and comparing signatures. I then wanted to run it on the claimed public releases of some open source projects and file issues arguing that their versions should be much higher than the official ones. In the end I decided that the effort outweighed the pleasure to be had from the shitposting.
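The core check of such a tool could be sketched like this (in Python for brevity rather than Go, with made-up signature strings standing in for real type information): removed or changed exports force at least a major bump, purely additive exports force at least a minor one.

```python
def min_bump(old_api, new_api):
    """
    Estimate a lower bound on the required semver bump from the exported
    surface alone. old_api/new_api map exported names to signature strings.
    """
    removed_or_changed = any(
        name not in new_api or new_api[name] != sig
        for name, sig in old_api.items()
    )
    added = any(name not in old_api for name in new_api)
    if removed_or_changed:
        return "major"  # something consumers relied on is gone or different
    if added:
        return "minor"  # new surface, old surface intact
    return "patch"      # exported surface unchanged

old = {"do_foo": "(x int) int"}
new = {"do_foo": "(x int, y int) int", "do_bar": "() error"}
print(min_bump(old, new))  # prints "major" -- do_foo's signature changed
```

It’s only a lower bound because behavioral breakage with an unchanged signature is invisible to this kind of analysis, which is exactly the point about semver expressing intent rather than proof.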

                                                                          1. 8

                                                                            The canonical example here is all the people who got extremely upset when a popular Python cryptography package switched from implementing its critical stuff in a C extension, to implementing it in a Rust extension. Many angry commenters felt this was a violation of SemVer (which, to be clear, the project had never claimed to follow in the first place) and should have required a major version bump despite the fact that the public API of the module had not changed.

                                                                            I’m obviously biased, but also personally a fan of the approach Django settled on, which is that the version number is not the thing you care about. Instead it’s whether the releases you’re looking at are designated for long-term support or not. Django’s deprecation and release cycles are set up so that:

                                                                            • Every third feature release (every couple years at the current pace) is an LTS with a longer support period (currently averaging around 30 months from initial release, compared to around 18 for a non-LTS).
                                                                            • All API that exists and doesn’t raise deprecation warnings in the current LTS will also exist in the next LTS.

                                                                            So if you want infrequent upgrades and maximum stability, run on the current LTS, and your upgrade process when the next one comes out is to clear deprecation warnings. Or if you want to always have the latest features — which may come at the cost of dealing with deprecation/removal more often — you can jump to every release as it comes out.

                                                                            1. 1

                                                                              The Python cryptography thing happens every time any library adds a dependency (or upgrades a dependency) that can be conflicted with. In Python (and most other languages) that’s every dependency, because the entire dependency tree can contain at most one version of a thing. It just so happens that adding a Rust compiler as a dependency is way more incompatible than average.

                                                                              It would be hard to prove that a release of something was backward-compatible, in general, but it’s not hard to prove that these changes aren’t. I think this suggests that major version changes are a big deal and should be avoided: They’re likely to cause a huge amount of churn in your transitive dependants.

                                                                              More generally, as long as SemVer works on the meh/meh/you’re-screwed model, people are going to want major version bumps every time any of them is screwed, and major version bumps are going to screw people.

                                                                              1. 3

                                                                                It just so happens that adding a Rust compiler as a dependency is way more incompatible than average.

                                                                                As far as I’m aware, the cryptography module has always provided pre-compiled binary packages on PyPI. They certainly were providing them when they made the switch to doing the extension in Rust.

                                                                                Part of the outrage was people claiming that this broke support for certain platforms Rust doesn’t support (though I believe most or all of those are also platforms Python itself does not officially support). Part of it was just people being angry because being anti-Rust is a culture-war thing for certain segments of the programming population.

                                                                                I have never actually heard from someone who was a user of the module, on a platform and architecture that Python itself supports, and who was unable to install and use it after the switch.

                                                                                1. 2

                                                                                  It hit me on Debian stable at some point. For whatever reason there wasn’t a prebuilt wheel pip wanted to install, and when I tried rustc memory pressure slowed the system to a crawl (would it have finished eventually? I have no idea; I couldn’t afford to lag out the other services that were swapping, so I chickened out). I didn’t particularly blame the package maintainers for this situation, but I did swear at the Python ecosystem collectively a bit for not managing to install a binary. I don’t know if it was missing or if pip just didn’t find it. We didn’t wait to see if the problem would be resolved, just dropped the functionality that depended on cryptography and moved on with our lives.

                                                                                  1. 2
                                                                              2. 1

                                                                                despite the fact that the public API of the module had not changed

                                                                                Did it change which OSes the package could run on? I know that Rust isn’t supported everywhere yet…

                                                                                1. 2

                                                                                  Did it change which OSes the package could run on?

                                                                                  This was the crux of the disagreement. It changed what platforms the package could potentially compile on.

                                                                                  As far as the package’s authors were concerned, the rust replacement was capable of compiling on every platform they had previously claimed to support, so nothing had changed. As before they distributed precompiled binaries for supported platforms.

                                                                                  But users of, say, a completely unsupported AIX (I don’t remember the actual platform in question, just picked AIX out of the air, it was on that plane of obscurity) port of python complained that this broke the cryptography package for them.

                                                                                  The disagreement was fundamentally between the user base’s assumption that if they could convince something to compile, this meant it worked, and therefore anything that broke their ability to keep convincing it to compile was a breaking change; and the maintainers’ belief that SemVer applies to what you actually claim to support, and that just because something happens to compile on an unsupported and untested platform doesn’t mean it will meaningfully work, except by accident (and that “works by accident” is a horrible thing to depend on for cryptography, since for all anyone knew the AIX compiler might be screwing up security-critical code).

                                                                                  (I’m not very sympathetic to the users’ camp in the argument; SemVer is only meaningfully feasible if the notion of support for compatibility exists within well-defined, maintainer-determined boundaries)

                                                                            1. 1

                                                                              the nice thing about *nix is that you don’t have to restart the machine unless the misbehaving process is the kernel or init (there is kexec etc., but that doesn’t have the well-testedness advantage)

                                                                              1. 8

                                                                                The whole point of this article is that this statement is untrue.

                                                                                It is not uncommon for machines, regardless of OS[1], to get into a state where they’re globally pantsed - obviously some are worse at this than others (or is that better at it?). Oftentimes the result is just terrible performance; sometimes it’s a complete inability to make forward progress. It is possible there is a single faulty process, and an OS that has robust process management can deal with that. However, often you can’t isolate the fault to a single process, so you start having to restart increasingly large amounts of user space. At some point, restarting the system is the most sensible path forward, as it guarantee-ably gets your entire system into a non-pantsed state.

                                                                                A lot of system reliability engineering is the misery of debugging systems once they’re stuck to try and work out how they got into that stuck state, while also being continuously pinged by people wanting things to be running again.

                                                                                [1] I’ve worked with, and encountered “just reboot it” level problems with, a variety of Linuxes over the years (1.x, 2.x, 3.x - I don’t think I’ve used 4+ in any work situation), Mac OS 7.* (and people complain about Windows), all of the OS X/macOSes at varying levels of stability, Windows (weirdly, one of the most stable machines I ever had was this Compaq Windows Me thing), VAX/VMS, FreeBSD, and I’m sure at least a couple of others in a general mucking-around-during-uni setting

                                                                                1. 5

                                                                                  Ooooh, did I ever tell you that thing about the uptime log :-D?

                                                                                  So my first serious computer gig, back in 2002, eventually had me also helping the sysadmin who ran things at $part_time_job, which he graciously agreed to when I told him I wanted to learn a thing or two about Unix. One of the things we ran was a server – aptly called scrapheap – which ran all the things that people liked, but which management could not be convinced to pay for, or in any case, could not be convinced to pay enough. It ran a public-ish (invite-only) mailing list with a few hundred subscribers, a local mirror for distributed updates and packages, and a bunch of other things, all on a beige box that had been cobbled together out of whatever hardware was lying around.

                                                                                  Since a lot of people spread around four or five offices in two cities ended up depending on it being around, it had excellent uptime (in fact I think it was rebooted only five or six times between 1999-ish when it was assembled and 2005 – a few times for hardware failure/upgrades, once to migrate it to Linux, and once because of some Debian shenanigans).

                                                                                  On the desk under which it was parked lay a clipboard with what we affectionately called “the uptime log”. The uptime log listed all the things that had been done in order to keep the thing running without rebooting it, because past experience had taught us you never know how one of these is going to impact the system on the next boot, and nobody remembers what they did six months ago. Since the system was cobbled together from various parts, this was particularly important because hardware failure was always a possibility, too.

                                                                                  The uptime log was not pretty. It included things like:

                                                                                  • Periodic restart of samba daemon done 04.11.2002 21:30 (whatever), next one scheduled 09.11.2002 because $colleague has a deadline on 11.11 and they really need it running. I don’t recall why but we had to restart it pretty much weekly for a while, otherwise it bogged down. Restarts were scheduled not necessarily so as not to bother people (they didn’t take long and they were easy to do early in the morning) but mostly so as to ensure that they were performed shortly before anyone really needed it, when it was the fastest.
                                                                                  • Changed majordomo resend params (looked innocuous, turned out to be relevant: restarting sendmail after an update presumably carried over some state, and it worked prior to the reboot, but not afterwards. That’s how I discovered Pepsi and instant coffee are a bad mix).
                                                                                  • Updated system $somepackage (I don’t remember what it was, some email magic daemon thing). Separately installed old version under /opt/whatever. Amended init script to run both correctly but I haven’t tested it, doh.

                                                                                  It was a really nasty thing. We called scrapheap various endearing names like “stubborn”, “quirky” or “prickly” but truth is everyone kinda dreaded the thing. I was the only one who liked it, mainly because – being so entangled, and me being a complete noob at these things – I rarely touched it.

                                                                                  You could say well, Unix and servers were never meant to be used like that, you should’ve been running a real machine first of all and second of all it was obviously really messy and you could’ve easily solved all that by partitioning services better and whatnot. Thing is we weren’t exactly sh%&ing money, the choice was between this and running mailing lists by aviary carriers so I bet anyone who has to do similar things today, on a budget that’s not exactly a YC-funded startup or investment bank budget, is really super glad for Docker or jails or whatever they’re using.

                                                                                  1. 2

                                                                                    It is not uncommon for machines regardless of OS[1] to all get into a state where they’re globally pantsed - obviously some are worse at this than others (or is that better at it?)

                                                                                    You’re absolutely right, though it’s also the case that Unixes have a lot more scopes that you can restart from initial conditions than other common OSes. Graphical program doesn’t work, and restarting it doesn’t help? Log out and log back in. That doesn’t fix it? Go to console and restart your window system. That doesn’t fix it? Go to single-user mode, then bring it back up to multi-user. Once that’s exhausted is when you need to reboot…

                                                                                    Of course, just rebooting would be faster than trying all of these one after another. Usually.

                                                                                    1. 1

                                                                                      The whole point of this article is that this statement is untrue.

                                                                                      I can tell you that I almost never restart my whole computer. Certainly, the software I write has been much better tested when restarting just the service, and the “whole computer restart” has not. An easy example is that service ordering may not be properly specified, which is OK when everything is already running, but not OK when booting up.

                                                                                      Unix ain’t Windows. If you aren’t working on the kernel or PID 1 you almost never have to restart.

                                                                                      1. 3

                                                                                        Back in early 2000s, when I had win2k/new XP, and linux systems. All of them went similarly long periods between reboots, measured in weeks. But even then, manually rebooting any of those was not an uncommon event.

                                                                                        Now these days, of course, most systems - including *nixes - have security updates requiring reboots at that kind of cadence, which presumably mitigates any potential “a reboot fixed it” issues.

                                                                                        1. 2

                                                                                          You’d think so, but I have managed to bring GPUs into a broken state, where anything trying to communicate with them just hangs. A restart was the only way out.

                                                                                      2. 1

                                                                                        About 70% of the time that I upgrade libc at least one thing is totally hosed until reboot. And that thing might be my desktop environment, in which case “restarting that process” is exactly the same level of interruption as rebooting, just less thorough.

                                                                                        1. 1

                                                                                          Are you, by any chance, from Ontario? [I ask because I’ve only ever heard Ontarians use the term “hosed” that way & very much want to know if you are an exception to this pattern.]

                                                                                          1. 1

                                                                                            Nope! I’m from Virginia and have lived in Massachusetts for the past 10-ish years. It’s a term I hear people use from time to time, but I haven’t happened to notice any pattern to who. It’s likely that it spread in some subcultures or even in just some social subgraphs.

                                                                                            1. 1

                                                                                              Good to know! Thanks :)

                                                                                      1. 16

                                                                                        Stop using laptops. For the same money you can get a kickassssss workstation.

                                                                                        1. 27

                                                                                          But then for the time you want to work away from the desk you need an extra laptop. Not everyone needs that of course, but if you want to work remotely away from home or if you do on-call, then a laptop’s a requirement.

                                                                                          1. 6

                                                                                            Laptops also have a built-in UPS! My iMac runs a few servers on the LAN and they all go down when there’s a blackout.

                                                                                            1. 2

                                                                                              Curious: in which country do you live that this is a significant enough problem to design for?

                                                                                              1. 5

                                                                                                Can’t speak for the other poster, but I think power distribution in the US would qualify as risky. And not only in rural areas: consider that even the Chicago burbs don’t have buried power lines. And every summer there’s the blackout due to AC surges. I’d naively expect at least 4 or 5 (brief) blackouts per year.

                                                                                          2. 8

                                                                                            i get that, but it’s also not a very productive framework for discussion. i like my laptop because i work remotely – 16GB is personally enough for me to do anything i want from my living room, local coffee shop, on the road, etc. i do junior full-stack work, so that’s likely why i can get away with it. obviously, DS types and other power hungry development environments are better off with a workhorse workstation. it’s my goal to settle down somewhere and build one eventually, but it’s just not on the cards right now; i’m moving around quite a bit!

                                                                                            my solution? my work laptop is a work laptop – that’s it. my personal laptop is my personal laptop – that’s it. my raspberry pi is for one-off experiments and self-hosted stuff – that’s it. in the past, i’ve used a single laptop for everything, and frequently found it working way too hard. i even tried out mighty for a while to see if that helped ((hint: only a little)). separation of concerns fixed it for me! obviously, this only works if your company supplies a laptop, but i would go as far as to say that even if they don’t it’s a good alternative solution, and might end up cheaper.

                                                                                            my personal laptop is a thinkpad i found whilst trash-hopping in the bins of the mathematics building at my uni. my raspberry pi was a christmas gift, and my work laptop was supplied to me. i spend most of my money on software, not really on the hardware.

                                                                                            edit: it’s also hard, since i have to keep things synced up. tmux and chezmoi are the only reasonable way i’ve been able to manage!
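                                                                                            for anyone curious, the chezmoi workflow is roughly this (a minimal sketch; the dotfiles repo URL here is a hypothetical placeholder, and this assumes you push to a git remote you control):

```shell
# one-time setup on the first machine: start tracking dotfiles
chezmoi init
chezmoi add ~/.tmux.conf ~/.zshrc    # copies the files into chezmoi's source dir

# turn the source dir into a git repo you can push
chezmoi cd
git remote add origin git@github.com:example/dotfiles.git   # hypothetical remote
git add -A && git commit -m "initial dotfiles" && git push -u origin main
exit

# on every other machine: clone and apply in one step
chezmoi init --apply git@github.com:example/dotfiles.git

# day-to-day on any machine: pull the latest changes and re-apply
chezmoi update
```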

                                                                                            1. 6

                                                                                              Agree. The ergonomics of laptops are seriously terrible.

                                                                                              1. 7

                                                                                                Unfortunately I don’t think this is well known to most programmers. Recently a fairly visible blogger posted his workstation setup and the screen was positioned such that he would have to look downward just like with a laptop. It baffled many that someone who is clearly a skilled programmer could be so uninformed on proper working ergonomics and the disastrous effects it can have on one’s posture and long-term health.

                                                                                                Anyone who regularly sits at a desk for an extended period of time should be using an eye-level monitor. The logical consequence of that is that laptop screens should only be used sparingly or in exceptional circumstances. In that case, it’s not really necessary to have a laptop as your daily driver.

                                                                                                1. 6

                                                                                                  After many years of using computers I don’t see much harm in using a slightly tilted display. If anything, regular breaks and stretches/exercises make a lot more difference, especially in the long term.

                                                                                                  If you check out jcs’ setup more carefully you’ll see that the top line is not that much lower than the “default” eye line, so the ergonomics there work just fine.

                                                                                                  1. 1

                                                                                                    We discuss how to improve laptop ergonomics and more at https://reddit.com/r/ergomobilecomputers .

                                                                                                    (I switched to a tablet PC; the screen is also tilted a bit but raised closer to eye level. Perhaps the ‘fairly visible blogger’s’ setup was arranged for the photo and the screen is normally raised higher.)

                                                                                                2. 2

                                                                                                  That assumes you’re using the laptop’s built-in keyboard and screen all day long. I have my laptop hooked up to a big external monitor and an ergonomic keyboard. The laptop screen acts as a second monitor and I do all my work on the big monitor which is at a comfortable eye level.

                                                                                                  On most days it has the exact same ergonomics as a desktop machine. But then when I occasionally want to carry my work environment somewhere else, I just unplug the laptop and I’m good to go. That ability, plus the fact that the laptop is completely silent unless I’m doing something highly CPU-intensive, is well worth the loss of raw horsepower to me.

                                                                                                3. 2

                                                                                                  A kickass workstation which can’t be taken into the hammock, yes.

                                                                                                  1. 1

                                                                                                    I bought a ThinkStation P330 2.5y ago and it is still my best computing purchase. Once my X220 dies, if ever, then I will go for a second ThinkStation.

                                                                                                    1. 3

                                                                                                      A few years ago I bought a used ThinkCentre M92, ultra small form factor. Replaced the hard drive with a cheap SSD and threw in extra RAM and a 4k screen. Great setup. I could work very comfortably and do anything I want to do on a desktop, including development or watching 4k videos. I used that setup for five years and have recently changed to a 2-year-old iMac with an Intel processor so I can smoothly run Linux on it.

                                                                                                      There is no way I am suffering through laptop usage. I see laptops as something suited to salespeople, car mechanics, construction workers and that sort of thing. For a person sitting a whole day in front of the screen… no way.

                                                                                                      I don’t get the need for people to be able to use their computers in a zillion places. Why? What’s so critical about it? How many people actually carried their own portable office around, versus just doing their work at their desks, before the advent of the personal computer? We already carry a small computer in our pocket at all times that covers a lot of personal work needs such as email, chat, checking webpages, conference calls, etc. Is it really that critical to have a laptop?

                                                                                                      1. 4

                                                                                                        I don’t get the need for people to be able to use their computers in a zillion places. Why? What’s so critical about it?

                                                                                                        I work at/in:

                                                                                                        1. The office
                                                                                                        2. Home office
                                                                                                        3. Living room

                                                                                                        The first two are absolutely essential, the third is because if I want to do some hobbyist computing, it’s not nice if I disappear in the home office. Plus my wife and I sometimes both work at home.

                                                                                                        Having three different workstations would be annoying. Not everything is on Dropbox, so I’d have to pass files between machines. I like fast machines, so I’d be upgrading three workstations frequently.

                                                                                                        Instead, I just use a single MacBook with an M1 Pro. Performance-wise it’s somewhere between a Ryzen 5900X and 5950X. For some things I care about for work (matrix multiplication), it’s even much faster. We have a Thunderbolt Dock, 4k screen, keyboard and trackpad at each of these desks, so I plug in a single Thunderbolt cable and have my full working environment there. When I need to do heavy GPU training, I SSH into a work machine, but at least I don’t have a terribly noisy NVIDIA card next to me on or under the desk.

                                                                                                        1. 3

                                                                                                          The first two are absolutely essential, the third is because if I want to do some hobbyist computing, it’s not nice if I disappear in the home office.

                                                                                                          I believe this is the crux of it. It boils down to personal preference. There is no way I am subjecting myself to the horrible experience of using a laptop just because it is not nice to disappear into the office. If anything, it raises the barrier to being in front of a screen.

                                                                                                        2. 2

                                                                                                          Your last paragraph is exactly my thoughts. Having a workstation is a great way to reduce lazy habits IMNSHO. Mobility that comes with a laptop is ultimately a recipe for neck pain, strain in arms and hands and poor posture and habits.

                                                                                                          1. 6

                                                                                                            I have 3 places in which I use my computer (a laptop). In two of them, I connect it to an external monitor, mouse and keyboard, and I do my best to optimize ergonomics.

                                                                                                            But the fact that I can take my computer with me and use it almost anywhere, is a huge bonus.

                                                                                                    1. 4

                                                                                                      God, I hope so.

                                                                                                      On the other hand, the cure might be worse than the disease, if it means more complexity to browsers to make MPAs behave like SPAs…

                                                                                                      1. 35

                                                                                                        As I was reading this wonderful writeup, I had a nagging feeling that most certainly ‘someone on the internet’ would bring some half-assed moralizing to bear on the author. And sure enough, it’s the first comment on lobsters.

                                                                                                        I think it’s a beautiful and inspiring project, making one think about the passage of time and how natural processes are constantly acting on human works.

                                                                                                        @mariusor, I recommend you go troll some bonsai artists, they too are nothing but assholes who carve hearts in trees.

                                                                                                        1. 8

                                                                                                          We do have an ethical obligation to consider how our presence distorts nature. Many folks bend trees for many purposes. I reuse fallen wood. But we should at least consider the effects we have on nature, if for no other reason than that we treat nature like we treat ourselves.

                                                                                                          I could analogize bonsai to foot-binding, for example. And I say that as somebody who considered practicing bonsai.

                                                                                                          1. 12

                                                                                                            Foot binding is a social act in which women are deliberately crippled in exchange for access to certain social arrangements in which they don’t need to be able to walk well. The whole practice collapsed once the social arrangement went away. It’s very different than just getting a cool gauge piercing or whatever.

                                                                                                            1. 7

                                                                                                              Thank you Corbin for addressing the substance of my admittedly hot-headed comment. It did give me food for thought.

                                                                                                              I am definitely in agreement with you on the need to consider the impact of our actions on the environment. I have a bunch of 80-year old apple trees in my yard which were definitely derailed, by human hands, from their natural growth trajectory. This was done in the interest of horticulture, and I still benefit from the actions of the now-deceased original gardener. All in all I think the outcome is positive, and perhaps will even benefit others in the future if my particular heritage variety of apple gets preserved and replicated in other gardens. In terms of environmental impact, I’d say it’s better for each backyard to have a “disfigured” but fruitful apple tree than to not have one, and rely on industrial agriculture for nutrition.

                                                                                                              Regarding the analogy with foot-binding, which I think does hold to a large extent (i.e it involves frustrating the built-in development pattern of another, without the other’s consent) – the key difference is of course the species of the object of the operation.

                                                                                                              1. 7

                                                                                                                Scale matters too, I think.

                                                                                                                I’m a gardener who grows vegetables, and I grow almost everything from seed - it’s fun and cheap. That means many successive rounds of culling: I germinate seeds, discard the weakest and move the strongest to nursery pots, step out the strongest starts to acclimatize to the weather, plant the healthiest, and eventually thin the garden to only the strongest possible plants. I may start the planting season with two or three dozen seeds and end up with two plants in the ground. Then throughout the year, I harvest and save seeds for next, often repeating the same selecting/culling process.

                                                                                                                Am I distorting nature? Absolutely, hundreds of times a year - thousands, perhaps, if I consider how many plants I put in the ground. But is my distortion significant? I don’t think so; I don’t think that, even under Kant’s categorical imperative, every back-yard gardener in the universe selecting for their own best plants is a problem. It fed the world, after all!

                                                                                                                1. 3

                                                                                                                  My friend who is a botanist told me about research he did into how naïve selection produces worse results. Assume you have a plot with many variants of wheat, and at the end of the season, you select the best in the bunch for next year. If you’re not careful, the ones you select are the biggest hoarders of nutrients. If you had a plot with all that genotype, it would do poorly, because they’re all just expertly hoarding nutrients away from each other. The ones you want are the ones that are best at growing themselves while still sharing nutrients with their fellow plants. It’s an interesting theory and he’s done some experimental work to show that it applies in the real world too.

                                                                                                                  1. 2

                                                                                                                    The ones you want are the ones that are best at growing themselves while still sharing nutrients with their fellow plants.

                                                                                                                    So maybe you’d also want to select some of the ones next to the biggest plant to grow in their own trials as well.

                                                                                                            2. 3

                                                                                                              I think it’s a beautiful and inspiring project, making one think about the passage of time and how natural processes are constantly acting on human works.

                                                                                                              I mean… on the one hand, yes, but then on the other hand… what, we ran out of ways to make one think about the passage of time and how natural processes are constantly acting on human works without carving into things, so it was kind of inevitable? What’s wrong with just planting a tree in a parking lot and snapping photos of that? It captures the same thing, minus the tree damage and leaving an extra human mark on a previously undisturbed place in the middle of the forest.

                                                                                                              1. 14

                                                                                                                As I alluded to in my comment above, we carve up and twist apple trees so that they actually give us apples. If you just let them go wild you won’t get any apples. Where do you get your apples from? Are you going to lecture a gardener who does things like grafting, culling, etc., to every tree she owns?

                                                                                                                The same applies here: the artist applied his knowledge of tree biology and his knowledge of typography to get a font made by a tree. I think that’s pretty damn cool. I am very impressed! You can download a TTF! How cool is that?

                                                                                                                Also, it’s not ‘in the middle of a forest’, but on his parents’ property, and the beech trees were planted by his parents. It’s his family’s garden and he’s using it to create art. I don’t get the condemnation, I think people are really misapplying their moral instincts here.

                                                                                                                1. 5

                                                                                                                  Are you going to lecture a gardener who does things like grafting, culling, etc., to every tree she owns?

                                                                                                                  No, only the gardeners who do things like grafting, culling etc. just to write a meditative blog post about the meaning of time, without otherwise producing a single apple :-). I stand corrected on the forest matter, but I still think carving up trees just for the cool factor isn’t nice. I also like, and eat, beef, and I am morally conflicted about it. But I’m not at all morally conflicted about carving up a living cow just for the cool factor, as in, I also think it’s not nice. Whether I eat fruit (or beef) has no bearing on whether stabbing trees (or cows) for fun is okay.

                                                                                                                  As for where I get my apples & co.: yes, I’m aware that we carve up and twist apple trees to give us apples. That being said, if we want to be pedantic about it, back when I was a kid, I had apples, a bunch of different types of plums, sour cherries, pears and quince from my grandparents’ garden, so yeah, I know where they come from. They pretty much let the trees go wild. “You won’t get any apples” is very much a stretch. They will happily make apples – probably not enough to run a fruit selling business off of them, but certainly enough for a family of six to have apples – and, as I very painfully recall, you don’t even need to pick them if you’re lazy, they fall down on their own. The pear tree is still up, in fact, and at least in the last 35 years it’s never been touched in any way short of picking the pears on the lowest two or three branches. It still makes enough pears for me to make pear brandy out of them every summer.

                                                                                                                  1. 6

                                                                                                                    I concede your point about the various approaches as to what is necessary and unnecessary tree “care” :)

                                                                                                                    No, only the gardeners who do things like grafting, culling etc. just to write a meditative blog post about the meaning of time, without otherwise producing a single apple :-).

                                                                                                                    But my argument is that there was an apple produced, by all means. You can enjoy it here: https://bjoernkarmann.dk/occlusion_grotesque/OcclusionGrotesque.zip

                                                                                                              2. 3

                                                                                                                Eh. I hear what you’re saying, but you can’t ignore the fact that “carving letters into trees” has an extremely strong cultural connection to “idiot disrespectful teenagers”.

                                                                                                                I can overlook that and appreciate the art. I do think it’s a neat result. But then I read this:

                                                                                                                The project challenges how we humans are terraforming and controlling nature to their own desires, which has become problematic to an almost un-reversible state. Here the roles have been flipped, as nature is given agency to lead the process, and the designer is invited to let go of control and have nature take over.

                                                                                                                Nature is given agency, here? Pull the other one.

                                                                                                                1. 3

                                                                                                                  You see beautiful and wonderful writeup, I see an asshole with an inflated sense of self. I think it’s fair that we each hold to our own opinions and be at peace with that. Disrespecting me because I voiced it is not something I like though.

                                                                                                                  1. 15

                                                                                                                    I apologize for venting my frustration at you in particular.

                                                                                                                    This is a public forum though, and just as you voiced your opinion in public, so did I. Our opinions differ, but repeatedly labeling others as “assholes” (you did it in your original post and in the one above) sets up a heated tone for the entire conversation. I took the flame bait, you might say.

                                                                                                                    Regarding ‘inflated sense of self’ – my experience with artists in general (I’ve lived with artists) is that it’s somewhat of a common psychological theme with them, and we’re better off judging the art, not the artist.

                                                                                                                1. 0

                                                                                                                  I hated this. In my opinion nothing was gained from this exercise that couldn’t have been gleaned with a simple simulation of surface deformation for the curves of the font.

                                                                                                                  Masking this as an exploration of learning “on nature’s terms” is just a pretentious way of carving kyle+karen=love surrounded by a little heart in the middle of the forest. I was taught that only assholes do that.

                                                                                                                  1. 28

                                                                                                                    You could have made the same (correct, imo) point without adding an insult.

                                                                                                                    1. 8

                                                                                                                      I do feel strongly about a guy putting words like these on the internet:

                                                                                                                      The project challenges how we humans are terraforming and controlling nature to their own desires, which has become problematic to an almost un-reversible state. Here the roles have been flipped, as nature is given agency to lead the process, and the designer is invited to let go of control and have nature take over.

                                                                                                                      And then terraforms nature by carving into a tree and pretending nature has now somehow gained “agency” by his self interpreted benevolent act. Give me a break.

                                                                                                                      1. 20

                                                                                                                        Give me a break.

                                                                                                                        maybe take one. If you get angry at some random artsy website, you may need some time away from computers. I know for sure that I need that occasionally.

                                                                                                                    2. 24

                                                                                                                      There’s a few points in the writeup that separate this from the random infatuation carving in the forest:

                                                                                                                      • The tree was planted by his parents.
                                                                                                                      • The type of tree and its age was required before a carving can take place.
                                                                                                                      • The author is aware of how to carve tree bark without harming it.
                                                                                                                      • “No trees were harmed in this experiment.”

                                                                                                                      It’s clear that, 5 years after the carving, it has healed successfully and the tree is healthy.

                                                                                                                      1. 2

                                                                                                                        The author is aware of how to carve tree bark without harming it.

                                                                                                                        This claim is not backed by evidence. The author is aware of how not to kill a tree on a 1-2 year timescale via girdling, but you can’t reasonably claim that it wasn’t harmed.

                                                                                                                      2. 4

                                                                                                                        Do you think a simple simulation of surface deformation is somehow free? Or that other forms of artistic expression should be condemned for their slight environmental impact because they don’t produce a quantifiable value?

                                                                                                                        Like I just can’t wrap my head around any kind of internally consistent worldview where this response makes sense. It feels like, instead of being a principled stand in defense of nature, it’s a backlash against what you see as pretentiousness, but actively setting out to shut down projects that make other people happy for no other reason than that you don’t like them is a lot more asshole behavior than carving things on trees.

                                                                                                                      1. 7

                                                                                                                        Hmm… I don’t know about this tradeoff. That’s a lot of machinery that the client would need to add, for only low to moderate benefit. Most people would probably just continue using bearer tokens due to the complexity.

                                                                                                                        (I say low to moderate benefit because in many cases, the attacker could just make requests via a compromised system and act under its identity.)

                                                                                                                        I think my first step would be to increase the granularity of the permission model, maybe more like what you’d see in a capability system. GitHub currently exposes hilariously broad scopes for their Personal Access Tokens (https://github.com/settings/tokens/new). For example, the repo scope gives “full control of private repositories”. No! I want to be able to say “read-only permission to repos X, Y, and Z”, or “read and write, but only under org X”. Currently my only way to express these sorts of access restrictions is to create a service account and give it the necessary permissions, then create a token for that account. Very cumbersome. It would serve them well to smooth out that path: reify service accounts, and provide a way to control their access to resources using a dedicated UI.
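                                                                                                                        To see just how coarse these scopes are, you can ask the API directly: for classic tokens, GitHub echoes the granted scopes back in the X-OAuth-Scopes response header (a quick sketch, assuming a classic PAT exported in the GITHUB_TOKEN environment variable):

```shell
# Inspect which scopes a classic personal access token carries.
# GitHub reports them in the X-OAuth-Scopes header of any API response.
curl -sI -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user \
  | grep -i '^x-oauth-scopes'
# e.g. "x-oauth-scopes: repo, read:org" -- note the output names scopes only;
# there is no per-repo restriction visible, because "repo" applies to all of them.
```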

                                                                                                                        1. 3

                                                                                                                          We use this to solve the problems you mention:

                                                                                                                          https://github.com/martinbaillie/vault-plugin-secrets-github#permission-sets

                                                                                                                          Granted, it requires running a Vault service, but that’s more or less a cost you pay once and then amortize across every scenario that benefits from it.

                                                                                                                          1. 1

                                                                                                                            Aha! Good to know about their Apps having fine-grained permissions. I wish they would allow creation of long-lived personal access tokens with that granularity.

                                                                                                                          2. 3

                                                                                                                            for only low to moderate benefit.

                                                                                                                            And more to the point: it enables a lot of harm for not much benefit.

                                                                                                                            It protects against someone pretending to be an authorized dumb terminal that you are in front of, but doesn’t secure the computing device.

                                                                                                                            And the inaccessibility of the keys can be used to prevent you from asserting control of the software you run.

                                                                                                                            1. 2

                                                                                                                              I know that’s conspiracy-theory territory, but I wonder if GH even wants to change this situation. Third-party services have limited options for good integration; they get bought out and integrated, and only then work great under GH ownership. Same for the notification API not matching the notification screen, and a few other API things that just don’t work as well as they could. They’ve had years to fix them and the requests keep appearing in many places, yet they’re indifferent.

                                                                                                                            1. 2

                                                                                                                              That is one hell of a chain. Well done.

                                                                                                                              1. 6

                                                                                                                                It’s also great for deploying read-only websites. My photo gallery is published as an S3 bucket full of images and a SQLite file; the web server is a Clojure application that has a local copy of the DB. Currently the deployment script works by downloading the DB from S3 and shoving it onto the web server’s file system, but there’s no reason the web server couldn’t just periodically fetch the DB for itself.

                                                                                                                                1. 3

                                                                                                                                  This might be a fun option for replicating your database to the web server. https://litestream.io/

                                                                                                                                  1. 1

                                                                                                                                    Possibly, although rsync has served me well enough so far. If I had more traffic, I might be concerned about requests hitting a partially-written SQLite file, but then I could just switch to doing atomic file moves or a blue-green pair of DB files. I haven’t looked into Litestream, but whatever it is, it’s probably more complicated than that.

                                                                                                                                    1. 1

                                                                                                                                      It almost isn’t. One could think of Litestream as rsync for SQLite-format files, with the option of doing continuous replication.

                                                                                                                                      1. 2

                                                                                                                                        Mmm. But rsync is everywhere already, which automatically makes it simpler. :-)

                                                                                                                                        If I needed continuous uptime, the application code to do periodic fetches and swap-outs on the DB would probably take just a couple hours to write.
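A minimal sketch of that fetch-and-swap idea, assuming a local file stands in for the S3 download (`os.replace` is an atomic rename on POSIX, so a reader opening the live path never sees a half-written DB):

```python
import os
import shutil
import sqlite3

def refresh_db(fetched_path, live_path):
    """Atomically swap a freshly fetched DB into place."""
    tmp = live_path + ".tmp"
    shutil.copy(fetched_path, tmp)  # stand-in for the real S3 fetch
    os.replace(tmp, live_path)      # atomic rename over the old copy

# Demo: build a "remote" DB, then refresh the live copy from it.
if os.path.exists("remote.db"):
    os.remove("remote.db")
remote = sqlite3.connect("remote.db")
remote.execute("CREATE TABLE photos (id INTEGER)")
remote.execute("INSERT INTO photos VALUES (1)")
remote.commit()
remote.close()

refresh_db("remote.db", "live.db")
live = sqlite3.connect("live.db")
count = live.execute("SELECT count(*) FROM photos").fetchone()[0]
print(count)  # 1
```

The app would call `refresh_db` on a timer and reopen its connection afterwards; handles already open keep reading the old file until they do.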

                                                                                                                                        1. 3

                                                                                                                                          I’m not trying to convince you to switch. I’m just saying Litestream is probably a good replacement if you ever find yourself needing to go beyond the laziness of rsync.

                                                                                                                                          If I needed continuous uptime, the application code to do periodic fetches and swap-outs on the DB would probably take just a couple hours to write.

                                                                                                                                          If you find yourself thinking of actually doing this, I almost guarantee Litestream would take less time and do it in a more foolproof way.

                                                                                                                                  2. 2

                                                                                                                                    That’s a nice setup that I’m going to keep in the back of my mind.

                                                                                                                                    1. 1

                                                                                                                                      One other aspect is that when I sync to the server, this is the process:

                                                                                                                                      1. Upload all new image files
                                                                                                                                      2. Upload updated DB
                                                                                                                                      3. Delete all stale image files

                                                                                                                                      That way there’s very little chance of a dangling reference – even if the process is interrupted in the middle.

                                                                                                                                      (Images are all content-addressed as well.)
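That ordering can be sketched with local directories standing in for the bucket (all paths here are made up for the demo):

```python
import os
import shutil

# Toy layout: site/ is the source of truth, deployed/ mimics the bucket.
os.makedirs("site/images", exist_ok=True)
os.makedirs("deployed/images", exist_ok=True)
open("site/images/a.jpg", "w").write("img")
open("site/gallery.db", "w").write("db")
open("deployed/images/stale.jpg", "w").write("old")  # no longer referenced

# 1. Upload all new image files first...
for name in os.listdir("site/images"):
    shutil.copy(os.path.join("site/images", name),
                os.path.join("deployed/images", name))
# 2. ...then the updated DB...
shutil.copy("site/gallery.db", "deployed/gallery.db")
# 3. ...and only then delete stale images.
for name in os.listdir("deployed/images"):
    if not os.path.exists(os.path.join("site/images", name)):
        os.remove(os.path.join("deployed/images", name))

print(sorted(os.listdir("deployed/images")))  # ['a.jpg']
```

If the process dies between steps, the worst case is an unreferenced image sitting in the bucket, never a DB row pointing at a missing file.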

                                                                                                                                  1. 19

                                                                                                                                    SQLite is cool, but giving up the great type and integrity features of Postgres just to avoid running one more process seems like a bad trade-off for most of my applications.

                                                                                                                                    1. 13

                                                                                                                                      One thing I learned recently is that SQLite has CREATE TABLE ... STRICT for type checking; I felt the same pain moving from Postgres for a small CLI application. Could you elaborate on what integrity means here?

                                                                                                                                      More on STRICT here: https://www.sqlite.org/stricttables.html

                                                                                                                                      1. 6

                                                                                                                                        In PostgreSQL one can not only have foreign keys and basic check constraints (all present in one form or another in SQLite), but one can even define one’s own types (called “domains”) with complex structures and checks. See https://www.postgresql.org/docs/current/sql-createdomain.html

                                                                                                                                        1. 2

                                                                                                                                          I haven’t tried this for a very long time, but I seem to recall that SQLite provides arbitrary triggers that can run to validate inputs. People were using this to enforce types before STRICT came along, and it should allow enforcing any criterion you can express in a Turing-complete language with access to the input data.
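That still works. A sketch using a BEFORE INSERT trigger with RAISE(ABORT, ...) to reject bad rows (the email check here is just a toy pattern):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id, email)")  # no declared types at all
conn.execute("""
    CREATE TRIGGER users_email_check BEFORE INSERT ON users
    WHEN NEW.email NOT LIKE '%_@_%'
    BEGIN
        SELECT RAISE(ABORT, 'invalid email');
    END
""")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")  # passes the check
try:
    conn.execute("INSERT INTO users VALUES (2, 'not-an-email')")
    trigger_rejected = False
except sqlite3.IntegrityError:
    trigger_rejected = True  # RAISE(ABORT) surfaces as a constraint error
print(trigger_rejected)  # True
```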

                                                                                                                                          1. 9

                                                                                                                                            Triggers might be functionally equivalent, but with PostgreSQL custom types (i.e. domains) not only is it easier and more practical, but it can also be safer, because the constraints are applied everywhere the type is used, and the developer isn’t required to make sure the constraints have been updated everywhere. (Kind of like garbage collection vs manual memory management: both work, both have their issues, but the former might lead to fewer memory allocation issues.)

                                                                                                                                        2. 1

                                                                                                                                          Oooh, that’s new. I’ll have to use that.