1. 1

    I’m not sure that this would be a standard, named function. It’s basically n choose 1 and n choose n-1 zipped together. Relevant Wikipedia: https://en.wikipedia.org/wiki/Combination

    In python:

    import itertools
    zip(itertools.combinations([1,2,3],1),itertools.combinations([1,2,3],2))
    [((1,), (1, 2)), ((2,), (1, 3)), ((3,), (2, 3))]
    
    1. 2

      This doesn’t get you the same result. In his function, he takes each item out of the list and returns [the item, the list without that item]. In Python, you could achieve the same thing like so:

      def unnth(n, x):
          xx = x.copy()
          nn = xx.pop(n)
          return [nn, xx]

      def unnths(x):
          for i in range(len(x)):
              yield unnth(i, x)
      
      >>> list(unnths([1, 2, 3]))
      [[1, [2, 3]], [2, [1, 3]], [3, [1, 2]]]
      
      1. 1

        Ah, you’re right, my example isn’t correct. I forgot to check the way python orders the combinations before putting them into zip, and one is backwards from what you would want. Doing it this way sort of sucks, though, since you end up with two copies of one of the lists due to how reversed works.

        x=xrange(1,4)
        zip(itertools.combinations(x,1),reversed(list(itertools.combinations(x,len(x)-1))))
        [((1,), (2, 3)), ((2,), (1, 3)), ((3,), (1, 2))]
        
    1. 4

      I noticed this on the cargo thread. I thought the downvote on the quote below was particularly strange:

      In Go I don’t miss such a component at all. I’d even say that not having to use a tool like Bundler in Ruby, pip in Python or npm in Node.js is refreshing.

      1. 4

        Yes, this is this thread. Thanks for confirming that the downvote is strange :-)

        I’m worried about that kind of anonymous downvote gaining ground as Lobsters grows.

        An “I disagree” downvote that is not anonymous and requires a mandatory short comment would be more respectful and would encourage constructive discussion.

        1. 2

          If you require a comment for disagreement then you’ll likely end up with things akin to the “+1” posts common on other types of forums.

          1. 1

            Could you elaborate? I’m not sure I follow your reasoning.

            1. 1

              I think lenish is saying that people will use votes to show agreement with the opinion they feel matches theirs rather than commenting to show agreement.

              1. 1

                Sorry, didn’t see this until just now.

                What I’m saying is that, if you require a comment in order to downvote something, you’ll end up with comments that just say things like, “-1”, not necessarily anything constructive.

                1. 1

                  I would downvote that kind of comment as spam. I think the implication of such a comment is that it needs to be nontrivial.

                  1. 1

                    But you can’t downvote without leaving a comment, which means every time you want to downvote one of those, you (and everyone else) have to say something that isn’t also spam, unless I misunderstood the original concept as only requiring comments when downvoting the original post.

                    1. 1

                      The point is that if you raise the bar for downvoting, you’ll only have downvotes that people think are important. For example, if I decide that I don’t like comments by @moses, I could just downvote every one of @moses’s comments. But if I see that I need to add a comment, I’d feel dumb if I added a comment which said “downvoted because @moses is a loser.” So I probably won’t.

                      TL;DR What you’re describing is not a bug, but a feature.

                      1. 1

                        I’d feel dumb if I added a comment which said “downvoted because @moses is a loser.” So I probably won’t. TL;DR What you’re describing is not a bug, but a feature.

                        My point is that there are a lot of people who won’t feel dumb saying things like that and will quite happily post bad comments to downvote something.

                        1. 1

                          It sounds like the two points you are arguing are contradictory, so I think I missed something. Here are the two points you are holding:

                          1. there are a lot of people who won’t feel dumb saying things like that and will quite happily post bad comments to downvote something
                          2. which means every time you want to downvote one of those you have to say something that’s not also spam

                          So it sounds like you’re saying both that the bar is too high and that the bar is too low. Could you elaborate?

                          1. 1

                            I was talking about two groups of people, sorry for the lack of clarity.

                            There is a group of people who don’t care about post quality and will post things like “downvote” just to downvote a comment, which adds nothing to the discussion.

                            There is another group of people who care about post quality, and have to come up with something more valuable to say when downvoting, which may deter perfectly valid downvotes.

                            As an example, if a user in the latter category wants to downvote a user in the former category, does it make sense for everyone who wants to downvote a comment saying nothing other than “downvote” to also say “your comment doesn’t add anything to the discussion”? At what point do comments like that become just as annoying as comments saying only “downvote”?

                            1. 1

                              I think it’s fine to deter those “perfectly valid” downvotes. Downvotes rarely seem to improve the community unless it’s a behavior that we strongly want to discourage, and if you want to strongly discourage something, it merits a comment.

                              It might just not be worth downvoting comments that don’t add to the discussion.

          2. 1

            A possible source of bias in that thread: I tweeted a link to that comment, specifically because I didn’t feel like writing my response as a blog post, and I also did not want to present my post without context.

            That said, because Lobsters doesn’t have open registrations, I can’t imagine that I caused it to any serious degree… but I figured it should be mentioned.

            1. 1

              steveklabnik: I saw your tweet, and I think it was a good idea to share this with the “outside”, but I don’t believe it was your tweet that triggered this behavior.

          1. 4

            Note that, as with rebasing (see below), amending replaces the old commit with a new one, so you must force push (-f) your changes if you have already pushed the pre-amended commit to your remote. Be careful when you do this – always make sure you specify a branch!

            You really should avoid changing history on remote repos. It can break other people’s branches, automated tests, etc. I always tell people not to use it at all, to prevent them from developing a bad habit that will cause issues later. It’s much better to review your changes before pushing to make sure everything is in order, and to rebase if it isn’t.

            There were conflicts

            I’m surprised there’s no mention of git mergetool in this section.
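
            For anyone unfamiliar with it, a rough sketch of that workflow (vimdiff here is just an example; any configured tool works):

            git config merge.tool vimdiff   # pick whichever merge tool you prefer
            git mergetool                   # walks you through each conflicted file
            git rebase --continue           # or `git commit` if it was a plain merge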

            When I try to push, I get an error message:

            Ugh. See above. If you don’t feel very dirty using git push -f then you’ve picked up a bad habit.

            I made several commits on a single branch that should be on different branches

            I don’t think his approach here is ideal. He’s doing a git reset --hard $sha, then making new branches and cherry-picking. If you didn’t make a note of the SHAs for the commits before doing the reset, it’s pretty annoying to find them again.

            Normally I do:

            git checkout -b newbranch $sha     # new branch starting from the base commit
            git cherry-pick $commit1           # pull the first stray commit onto it
            git checkout -b newbranch2 $sha    # second branch from the same base
            git cherry-pick $commit2           # pull the second stray commit onto that one
            git checkout master
            git reset --hard $sha              # finally drop the stray commits from master
            
            1. 4

              To be fair, the OP explicitly says that these are “rules for when things go wrong.” If you find yourself in a position where you have to change public history, then something has gone wrong. That doesn’t mean you don’t need to know how to fix it.

              Perhaps the OP should have put this in big giant letters as a warning.

              (To be clear, your advice is spot on. Upvote. I just think it’s framed in such a way that it ignored the context of the OP.)

              1. 1

                OP explicitly says that these are “rules for when things go wrong.”

                He does at the top of the page, but then follows up with examples of things you should never do:

                • Amend commit
                • git push -f to fix the remote with the amended commit

                and

                • git rebase -i ...
                • git push -f to fix the remote with rebased branch

                These aren’t situations where things are broken and you need to fix them. They are situations where you are using git improperly and then bypassing the checks git has in place to prevent you from doing those things.

                In the first example, you should just push the change as a new commit. In the latter example, you should not be using git rebase -i on commits that are already pushed to a remote.
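
                For the first case, a minimal sketch of the fix-forward approach (file and branch names are made up):

                # fix the mistake in a follow-up commit instead of rewriting the pushed one
                git add path/to/fixed-file
                git commit -m "Fix typo introduced in previous commit"
                git push origin mybranch    # no -f needed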

                Git provides tools for merging branches while fixing the history. The advice is really wrong.

                (To be clear, your advice is spot on. Upvote. I just think it’s framed in such a way that it ignored the context of the OP.)

                I hope you don’t take my comments as rude; I just think OP is totally wrong in this regard.

                1. 3

                  Yes, when you’re force pushing, something has gone wrong. That is the point. However, this does not mean one is using git improperly. For example, one might accidentally push information that should be private to a public repo. Technically, this information is now public and there is nothing you can do about it. Nevertheless, it would be reasonable to value removing that information from your public history at the expense of force pushing.

                  Moreover, there are cases when force pushing is completely reasonable and proper. Have you ever worked on a pull request via GitHub? A PR is technically public, but force pushing to it (say to squash the commits) is not considered poor etiquette.

                  You are advocating proper use of git. This is a good thing. However, this does not mean there are never good reasons for using it improperly. (I notice that you subtly moved the goalposts on me. I said, “things have gone wrong when you find yourself having to modify public history.” You basically responded by telling me that my problem doesn’t exist by telling me not to modify public history!)

                  1. 1

                    (I notice that you subtly moved the goalposts on me. I said, “things have gone wrong when you find yourself having to modify public history.” You basically responded by telling me that my problem doesn’t exist by telling me not to modify public history!)

                    I’m not trying to move the goalpost. I just conflated two ideas I had.

                    • I don’t think OP’s “issues” need to be or should be solved with git push -f
                    • OP’s “issues” are only issues because they have the mistaken idea that changing git’s history and then overwriting a remote with that is a reasonable approach. I’ve yet to run into an actually good reason to need to change a remote’s history.

                    Take the example of accidentally pushing a password to your repo, here’s the list of things you shouldn’t have done:

                    • Hardcoded a password
                    • Committed a password
                    • Pushed a change with a committed password

                    But, accidents do happen, so how do you fix it? (A shell sketch follows this list.)

                    • Change the password
                    • Commit a change which removes the password from the file
                    • Push that change
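
                    A minimal shell sketch of those steps (the file name is invented for illustration):

                    # rotate the credential out-of-band first, then push the fix forward
                    git add config/settings.py    # hypothetical file that held the password
                    git commit -m "Remove hardcoded credential"
                    git push origin master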

                    Doing a rebase or amend and a git push -f doesn’t offer much, if any, advantage (except to protect your pride, I guess?). It does have the potential of causing more issues, however:

                    • Anyone who pulled before the overwrite is going to have issues merging later changes into their branches or their own changes back into the modified branch
                    • Rewriting the history doesn’t guarantee someone else didn’t get a copy of that commit and pushed it to another repo (and you may not even think to check)
                    • You still need to change that password everywhere ASAP
                    • You may overwrite someone else’s changes that came after yours

                    So maybe, if you caught it quickly enough, you can remove it from the history before any other copies are made, but I expect most of the time people don’t notice until much later that something like this has happened. Modifying history at that point is really awful.

                    Moreover, there are cases when force pushing is completely reasonable and proper. Have you ever worked on a pull request via GitHub? A PR is technically public, but force pushing to it (say to squash the commits) is not considered poor etiquette.

                    I have. I have also used git push -f to do just that several times. I have since concluded that is not a particularly good thing to do for several reasons:

                    • It caused confusion when the project maintainer attempted to merge my PR immediately after I had changed the history of the PR (he didn’t see what he thought he should in git log after the fact)
                    • It caused our automated tests to fail for non-obvious reasons if done after they were triggered but before they had checked out the changes (arguably an issue with our automated test error handling)
                    • Needing to do this was generally due to my not having adequately reviewed my changes before pushing them
                    • All instances where I did it could have been fixed by making an additional commit instead of modifying the history
                    • GitHub PR comment history becomes nonsensical after repeated review + rebases

                    Additionally, it’s possible for the person merging the PR to fix the history themselves if they think it appropriate. (This is done by getting the patch from github directly, applying it, then modifying it as you please, then pushing those changes.)
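
                    Roughly like this, assuming a hypothetical PR #123 against owner/repo (GitHub exposes a .patch view of every pull request):

                    # apply the PR as an mbox-style patch, then tidy it up locally
                    curl -L https://github.com/owner/repo/pull/123.patch | git am
                    git rebase -i origin/master    # reword/squash however you see fit
                    git push origin master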

                    The only time I think it really makes sense to do this is for PRs that take a long time to get merged and you need to rebase on newer changes for some reason. I feel like this is generally a bad workflow, but it can happen. I think this is also less of an issue if you’re doing git via email instead of github, as you can just submit a new version of the patch that has been rebased and not have to worry about history.

                    1. 1

                      Take the example of accidentally pushing a password to your repo, here’s the list of things you shouldn’t have done: … Doing a rebase or amend and a git push -f doesn’t offer much, if any, advantage (except to protect your pride, I guess?)

                      Umm. I didn’t say password. I said data that should not be public.

                      If that data were passwords, then your solution is absolutely the correct one and it is precisely what I would have done.

                      But passwords aren’t the only type of private data.

                      You can’t just cite a single example, propose a solution, and then generalize this to “overwriting public history is therefore always wrong.”

                      It does have the potential of causing more issues, however:

                      I absolutely agree that all of those issues are very real problems. Nevertheless, there can still be situations where overwriting public history is worth the cost.

                      So maybe, if you caught it quickly enough, you can remove it from the history before any other copies are made, but I expect most of the time people don’t notice until much later that something like this has happened. Modifying history at that point is really awful.

                      Yes I agree it is really awful.

                      I’m speaking from experience here. Someone else caught my mistake a day after I made it. While the data was now effectively public, it would be better to remove it. So I sent mail to the people I work with, apologized about the force push and explained that they should be aware of it.

                      It’s a nuisance of a thing to go through, but that seems like the point of the OP. Something had gone horribly wrong by mistake, and a force push was really the only way to fix the problem.

                      The only time I think it really makes sense to do this is for PRs that take a long time to get merged and you need to rebase on newer changes for some reason. I feel like this is generally a bad workflow, but it can happen.

                      It works well for Rust, and this situation is pretty common in a project that big and moving that rapidly. But yes, I agree with all of the issues you cited. Some of them aren’t related to git and are a result of a crappy GitHub interface. (I have a bookmarklet in my browser that automatically expands comments on “outdated diffs.”)

                      1. 1

                        and are a result of a crappy GitHub interface

                        It is quite annoying at times.

                        Some of them aren’t related to git

                        I really would like to work on a project where patches were done more like the Linux kernel some time. I feel like a lot of engineers use github as a crutch. =/

                        I’m speaking from experience here. Someone else caught my mistake a day after I made it. While the data was now effectively public, it would be better to remove it. … It’s a nuisance of a thing to go through, but that seems like the point of the OP. Something had gone horribly wrong by mistake, and a force push was really the only way to fix the problem.

                        My main point is that when things like this happen it’s a process problem. It’s taken me a couple days to figure out how to put my general idea into words that I think are satisfactory, so I’m sorry for any confusion.

                        My current job does this decently. If you use our standardized workflow, all changes get reviewed and must be approved before they are pushed to a remote. Everyone is strongly discouraged from deviating from that workflow. It has worked well at preventing things that shouldn’t be in a non-local repo from getting pushed.

                        If I were starting from scratch, though, I would want a system where changes are pushed to one remote, then after approval they are pushed to the real “master” remote. Approval would be granted in a multi-stage process: passing any build, passing automated tests, someone signed off on the changes. The main part of this system that I’m unsure of is whether or not to automatically squash the changes. Squashing would prevent data leaking through changes that are added in one commit and removed in another, but you can also lose valuable history for tools like git blame.
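
                        A rough sketch of the layout I have in mind (remote names and URLs are invented):

                        # contributors only ever push candidate branches to a staging remote
                        git remote add staging git@staging.example.com:project.git
                        git push staging my-feature
                        # after the build, the automated tests, and a human sign-off all pass,
                        # a gatekeeper (person or bot) promotes the branch to the real master
                        git remote add release git@git.example.com:project.git
                        git push release my-feature:master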

                        You can’t just cite a single example, propose a solution, and then generalize this to “overwriting public history is therefore always wrong.”

                        After everything above, I’m not trying to argue “overwriting public history is always wrong” so much as “if you reach the point where you need to overwrite public history then there’s something very wrong with how you’re managing your repo.” The reasoning for the latter is the former, and that’s why I still think the advice in OP is wrong. Yes, it’s good to know how to solve issues like these, but I think we should focus on how to prevent these (and other) issues rather than how to fix them if they happen.

                        What I see a lot of people reading when they see the linked page is “oh, so this is how I can stop having to deal with git’s persnicketiness” rather than treating it as what OP really meant it to be.

                        1. 1

                          I think we’ll just have to agree to disagree here. I don’t think it’s reasonable to assume that we’re all in a position where there is time for careful enough review of every commit to make sure things that shouldn’t be public don’t become public. Overwriting public history is a trade off. If you have enough capital to implement processes that always avoid being in that situation, then that’s great. But if you don’t, I think it’s entirely reasonable to cope with a force push rarely. (And I don’t think there is anything “very wrong” with that.)

                          And certainly, part of this trade off is the number of users of your repository. If it’s very small, the worst-case scenario of a force push is a groan from your fellow engineer across the room.

                          For what it’s worth, aside from that GitHub quirk of hiding outdated diffs, I very much like working with it. I’ve had tons and tons of people contribute to my projects that otherwise probably wouldn’t have. (I don’t want to exclude contributions from people who don’t know how to use git efficiently. GitHub will hand-hold novices, and I like that.)

                          1. 1

                            For what it’s worth, aside from that GitHub quirk of hiding outdated diffs, I very much like working with it. I’ve had tons and tons of people contribute to my projects that otherwise probably wouldn’t have. (I don’t want to exclude contributions from people who don’t know how to use git efficiently. GitHub will hand-hold novices, and I like that.)

                            Oh, don’t get me wrong, I do like github and think it has helped collaboration on a lot of projects. I don’t think it’s the best thing to use if you want to keep your repos private, however. Especially if you already are using other tools for issue tracking, etc.

              2. 2

                “He” should be “she”. Not a big deal in general, but let’s make sure not to assume that neat new projects are done exclusively by men.

                1. 1

                  In English, “he” is used when the gender of the subject is unknown. Also, using “she” would be just as bad, since it assumes that all new projects are done exclusively by women.

                  Of course, you could use “it”.

                  1. 2

                    I don’t think that has been true for at least fifty years. There are plenty of other ways of referring to the author without using a gendered pronoun, like “the author” or “k88hudson”. If you really want to use a gendered pronoun, it’s not a ton of work to go check who the author is, and whether you can guess a gender.

                    The pronoun “she” is not just as bad in this case because the author is actually a lady. Using a gendered pronoun appropriately does not insinuate anything.

                    1. 1

                      The pronoun “she” is not just as bad in this case because the author is actually a lady. Using a gendered pronoun appropriately does not insinuate anything.

                      Agreed. I didn’t realize that we were talking about a particular person here. Where is this pronoun of contention, exactly? I thought you were talking about the article itself, which was written by k88hudson (I think?).

                      1. 2

                        Maybe it would be useful to add a “parent” link to comments. The parent of my comment (which you replied to) is here.

                        1. 1

                          Wow, yes, that would be great. I didn’t even know your comment had a parent. Sorry for the misunderstanding. ;-)

                    2. 2

                      In English, “he” is used when the gender of the subject is unknown. Also, using “she” would be just as bad, since it assumes that all new projects are done exclusively by women.

                      Historically this may be true, but today, “they” is used fairly commonly as a gender-neutral choice. See also various other alternatives.

                1. 4

                  I think this is pretty neat. It also raises an interesting pedagogical question, which is when you should show this to someone if they’re learning git for the first time. I think that this is probably one of the last things you should show someone who is new to git, and that it’s important to understand what all of these things are going to do semantically before you hand someone a cheat sheet. I tried to learn git via cheat sheet style, and I had a terrible time of it until I went and actually understood what was going on.

                  Now, I use cheat sheets for reminding me of the syntax, and make sure I understand how it’s going to rearrange my DAG before ever actually running any commands. Things are better now.

                  1. 2

                    I think showing this early would be helpful, because it shows a string of commands used in sequence. This is more useful for someone trying to get things done than the individually described commands in the man pages.

                    The importance of understanding what the commands actually do goes without saying. When I was learning git, I would create some temporary fake repos to reconstruct a given situation and then would run different commands until I knew how they worked.
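
                    Something along these lines, for example (paths and messages are arbitrary):

                    # disposable repo for experimenting with commands before using them for real
                    mkdir /tmp/git-sandbox && cd /tmp/git-sandbox && git init
                    echo one > file.txt && git add file.txt && git commit -m "first"
                    echo two >> file.txt && git commit -am "second"
                    # now try reset/rebase/cherry-pick here; worst case, rm -rf it and start over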

                    1. 4

                      I think showing this early would be helpful, because it shows a string of commands used in sequence.

                      I wouldn’t show this to anyone since it advises using git push -f to rewrite remote history. That’s a great way to break other people’s branches, lose other people’s commits, etc.

                    2. 1

                      Not that it’s directly comparable pedagogically, but…

                      Maybe somebody could find out when flight rules are introduced to pilots / astronauts?

                      1. 2

                        At least for pilots, these are generally called “checklists” rather than “flight rules”. They also differ for each aircraft: a light plane’s checklists may take up only a few pages, since such planes are simple, but an airliner’s may run to 500 pages.

                        Checklists are also not just for takeoff and landing; an airliner has checklists for everything from a broken gear light to an engine failure.

                        Checklists are introduced to pilots as soon as they leave ground school and begin training in the plane itself. When flying, you always use checklists to ensure that you did not forget something, especially in an emergency. They also help the pilot decide what to do; for instance, a checklist will say whether to lower the gear or attempt a belly landing for each different landing surface.

                        Not sure if it really relates to git though, since we as programmers are rarely in a situation where a checklist could be the difference between life and death.

                    1. 5

                      I can’t take any article on bash programming seriously when it uses unquoted variables, especially one about defensive coding.
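
                      For anyone wondering why that matters, a classic illustration (the filename is contrived):

                      f="my file.txt"
                      touch "$f"
                      rm $f        # word-splits into: rm my file.txt  (two wrong arguments)
                      rm "$f"      # removes the single file the variable actually names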

                      1. 7

                        My goal for general computing and work is to avoid using the mouse whenever possible.

                        Computer: Lenovo W540, Intel i7-4930MX, 16GB RAM, 128GB SSD

                        OS: Gentoo

                        Window Manager: awesome

                        Terminal: urxvt

                        Shell: bash with lots of aliases, functions, and scripts

                        Text editor: vim

                        Task management: taskwarrior

                        Chat: weechat running in screen on a VPS to connect to IRC, bitlbee, and slack’s IRC gateway

                        Email: mutt

                        Browser: firefox with pentadactyl for vim-like keybindings

                        Revision control: git

                        Testing: vagrant + virtualbox to spin up a VM (typically Ubuntu) to run whatever I want to test

                        Typically I will spin up a local VM with the code I am working on in a shared folder; then, while I’m editing in one terminal (outside the VM), I can test in another terminal that has SSHed into the VM. I find that it works quite well, and I don’t have to pollute my main OS with tons of stuff I don’t care about outside of the project I’m currently working on. An extra benefit is that I never have to care about version inconsistencies, since when my VM is provisioned I know it has exactly the versions that project needs.
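
                        The day-to-day loop looks roughly like this (assuming the project already has a Vagrantfile):

                        vagrant up          # provision the project's Ubuntu VM
                        vagrant ssh         # in a second terminal: run the code/tests inside the VM
                        # the first terminal stays on the host for editing; the project directory
                        # is shared into the VM, so changes show up there immediately
                        vagrant destroy -f  # throw the whole environment away when done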

                        I am considering switching to packer + docker to do this, since it will use less memory and disk space while starting up faster.

                        1. 1

                          Thanks for the taskwarrior link; it looks like it may be what I’ve been looking for to manage project-based todo lists.

                        1. 9

                          If you’re using vim at all, you can probably use pass as a password manager. It’s quintessentially Unix-y: it uses GPG for encryption, git for distribution (so you can very easily use your own git server if “The Cloud™” doesn’t appeal), and the underlying filesystem for the database structure. It’s definitely a better bet than this.
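
                          For reference, basic usage looks something like this (the key ID and entry name are examples):

                          pass init "0xDEADBEEF"           # encrypt the store to your GPG key
                          pass git init                    # keep the store under git
                          pass insert email/example.com    # add a password
                          pass -c email/example.com        # copy it to the clipboard
                          pass git push                    # sync to whatever remote you've configured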

                          1. 3

                            Link: pass

                            1. 1

                              It’s implemented as a giant bash script. Even though I like bash, it’s not the kind of language I want to use to manage passwords.

                              See: http://git.zx2c4.com/password-store/tree/src/password-store.sh

                            1. 50

                              Changes so far to OpenSSL 1.0.1g since the 11th include:

                              • Splitting up libcrypto and libssl build directories
                              • Fixing a use-after-free bug
                              • Removal of ancient MacOS, Netware, OS/2, VMS and Windows build junk
                              • Removal of “bugs” directory, benchmarks, INSTALL files, and shared library goo for lame platforms
                              • Removal of most (all?) backend engines, some of which didn’t even have appropriate licensing
                              • Ripping out some windows-specific cruft
                              • Removal of various wrappers for things like sockets, snprintf, opendir, etc. to actually expose real return values
                              • KNF of most C files
                              • Removal of weak entropy additions
                              • Removal of all heartbeat functionality which resulted in Heartbleed

                              Commits are happening pretty fast, but the API is not being changed.

                              1. 7

                                FYI, I’ve posted the changes OpenBSD made as a git repo: https://github.com/jmhodges/libssl

                                Easier to read than digging through CVS per-file histories. It’s a one-time dump for now.

                                1. 3

                                  You can get commit logs for the BSDs at FreshBSD:
                                  http://freshbsd.org/search?project=openbsd&q=file.name:libssl

                                  (I had blissfully acclimatised to how much easier it is to read a revision log when you actually have, you know, a revision log, as opposed to trying to piece history together out of file-based logs. Trying to get an idea of what the OpenBSD folks are up to was a painful reminder. In CVS it looks like far fewer changes than you realise there are once you look at an actual revision log. They really are applying that plasma torch fast and thick.)

                                  Loving the colourful commit messages. “Toss a unifdef -U OPENSSL_SYS_WINDOWS bomb into crypto/bio.” “Go home, VMS, you’re drunk”. “Q: How would you like your lies, sir? A: Rare.” Etc.

                                  1. 1

                                    commitid support should help there, but I haven’t been able to turn it on for us yet.

                                2. 5

                                  I hope the new version is called OpenOpenSSL.

                                  1. 12

                                    Joking aside, I’d probably vote for OpenTLS.

                                    1. 4

                                      Can’t.

                                      1. Products derived from this software may not be called “OpenSSL” nor may “OpenSSL” appear in their names without prior written permission of the OpenSSL Project.
                                      1. 1

                                        Haha I thought of that… But I think I like OpenTLS best.

                                      2. 2

                                        what’s a “backend engine”? is that just “engine”? if so, i suspect they will keep the engine interface as it’s used by openssh (eg for hardware modules) (if i’m remembering right).

                                        (i hope this works; i hope the openssl crew can then find some way to switch to this (i think that would be hard even without external pressure from companies that have paid for code that is being stripped, but still, i hope it can happen)),

                                        1. 4

                                          All of the engines that interfaced with hardware crypto accelerators. The interface still exists AFAIK, but all of the individual engines are gone.

                                          1. 2

                                            wow. all of them indeed. strangely, it looks like the pkcs11 engine was always third party - https://www.opensc-project.org/opensc/wiki/engine_pkcs11

                                        2. 2

                                          I think stripping down openssl is a good idea.

                                            May I propose some more things? Go through the algorithms and identify ones that are basically unused; an example is DSA. It may also be worth debating the removal of algorithms that are generally considered too weak to be used. RC4 comes to mind; it will probably see a deprecation RFC soon.

                                          1. 3

                                            I think you will see the OpenBSD developers doing this - they are aggressively attacking the code at the moment.

                                            1. 1

                                              Not supporting things like that may prevent adoption for people who are stuck using them for backwards compatibility with older systems.

                                                Arguably, if they don’t have the resources to fix those systems, they may not have the resources to switch to this once it’s usable anyway. shrug

                                            2. 1

                                              Will this make merges of upstream changes significantly more difficult?

                                              1. 14

                                                It sounds like they’re not just completely abandoning compatibility with upstream; they’re incinerating compatibility with upstream with a plasma torch.

                                                1. 5

                                                  Why do you think they have any intention of merging?

                                                  1. 5

                                                    Because they’re smart people and probably learned their lesson after that Frankenstein monster of an Apache they had to solely maintain for so many years before dumping?

                                                    1. 2

                                                      I think they will follow the same policy as for OpenSSH when it comes to providing code to a project that is of no use to them at all.

                                                      1. 1

                                                        It was a fine daemon, really.

                                                        Merging of upstream changes stopped because they switched to a non-free license.

                                                  2. 1

                                                    wow, you weren’t kidding about the “massive”! very exciting stuff, gives me much the same sort of thrill that reading books like “where wizards stay up late” does.

                                                    1. 1

                                                      Congrats, @jcs. I think you’re the first member of the 100 point story club.

                                                      1. 2

                                                        This is particularly concerning with that uuencoded binary payload in the install shellscript (it turns out to be a uuencoded gzipped tarball). Very sketch.