Threads for hydrargyrum

      1. 4

        Empty commits can be pretty useful if you are using IaC and want to fix drift

        1. 3

          Also when GitHub actions get stuck (happens to me frequently) and I need to trigger a re-run.

          Edit: To expand, this happens whenever my “prettier” action has prettified the code and committed it back to the branch. It seems to trigger another run of the actions (there are three), but these will be stuck on “Expected — Waiting for status to be reported” until I push an empty commit.
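          An empty commit gives the workflow a fresh push event to react to without touching any files; a minimal sketch of the trick (the commit message is just an example):

          ```shell
          # Create a commit with no changes and push it to retrigger CI
          git commit --allow-empty -m "ci: retrigger stuck checks"
          git push
          ```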

          1. 2

            I do work on multiple fixes/chores in parallel, but during development I’m too lazy to create branches and switch between them, so I just have one big develop branch and make a visual separation between groups of commits with git commit --allow-empty -m "-----------". That also makes it easy to rebase and check conflicts for everything at once.
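            The separator trick, sketched (the dashes are quoted defensively so nothing downstream mistakes them for an option):

            ```shell
            # Drop an empty "divider" commit between groups of related commits
            git commit --allow-empty -m "-----------"
            # The dashed subjects then visually separate the groups in the log
            git log --oneline
            ```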

            1. 1

              Fixing drift by committing something empty and having GitOps do the enforcement of repo state?

              1. 1

                Yes, sort of; plus if you’re using GitHub/GitLab etc. you’d have the commit in the repo pointing to the PR for audit/history purposes. See Atlantis, for example, for such a workflow.

            1. 7

              Instead of doing everything in separate GitHub Actions, I suggest pre-commit. It’s better because:

              • The same tasks can be run locally.
              • The tools are pinned, so the formatting and linting rules stay the same; if it passes locally, it will pass on CI.
              • The whole team can run the same checks, with no need to wait for problems to surface in CI, when it’s too late.
              • Checks can run in pre-commit Git hooks, so mistakes never reach review/CI in the first place.
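              A minimal .pre-commit-config.yaml sketch of that setup (the repos and rev pins are examples; pin whatever your project actually uses):

              ```yaml
              repos:
                - repo: https://github.com/psf/black
                  rev: 24.3.0        # example pin
                  hooks:
                    - id: black
                - repo: https://github.com/pycqa/isort
                  rev: 5.13.2        # example pin
                  hooks:
                    - id: isort
              ```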
              1. 2

                A better tool for Python specifically is tox, which is like make/rake specifically for Python builds. It’s designed to be the glue between Jenkins etc. and your tests. It knows all about virtualenvs and so on.
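                A minimal tox.ini sketch of that glue role (env names, deps, and paths are placeholders):

                ```ini
                [tox]
                envlist = py311, lint

                [testenv]
                deps = pytest
                commands = pytest

                [testenv:lint]
                deps = flake8
                commands = flake8 src
                ```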

                I found pre-commit much more invasive in the past: you’re typically deciding everyone’s local development workflow centrally. I also don’t see why there is any connection between git commits/pushes and having a clean build. E.g. I’m making progress on a bug and want to save my game, and git commit fails with “nope! variable name too short!”

                1. 1

                  What you described sounds annoying, but on the other hand, I’m yet to personally work with someone that actually does the “commit often and in small chunks” thing IRL. I might do it sometimes but it’s pretty rare.

                  It’s also something easy to work around, I think, with pre-commit.

                  A much bigger problem I’ve found, especially when working with lots of inexperienced people, is that everyone will develop their own ad-hoc, often horrible, local workflow, and a lot of time gets spent in code review checking for things that all those tools could catch locally, automatically. I find that trumps the inconvenience of pre-commit.

                  If you’re working with seasoned devs, though, I can see how your priorities could be different.

                  1. 1

                    A much bigger problem I’ve found, especially when working with lots of inexperienced people, is that everyone will develop their own ad-hoc, often horrible, local workflow, and a lot of time gets spent in code review checking for things that all those tools could catch locally, automatically. I find that trumps the inconvenience of pre-commit.

                    That isn’t a use case for pre-commit per se; that is a use case for having any kind of branch CI. There are numerous tools that will allow you to run your build on a branch, but pre-commit is about tying the build to unrelated git activities like commit/push/rebase/etc.

                    1. 1

                      I partially agree. The teams I currently work with have decent CI, so code review time is better spent, but I’m still seeing loads of commits fixing stuff caught by the CI. Which means they are mostly not running things locally. pre-commit is a convenient interface to give them for running things locally.

                      1. 2

                        I read that more as pre-commit is convenient for you because you’ve forced them to run things locally. Which I don’t totally disagree with, but a couple issues I do have are:

                        1. Hooking pre-commit seems like the worst time to force the extra step into the process… either you should be doing it as you work, or you should be doing it at the end in handling a PR. Forcing it piecemeal at random times when you happen to commit just seems like the worst option as the work isn’t necessarily fresh in your mind, and you’re in the middle of trying to accomplish something else, but you get pulled into handling the failures immediately.
                        2. This approach also assumes you have invested in making the pre-commit environment 100% reliable and consistent with the CI workers… which is the ideal we would hope for, but I don’t always see in practice. I’ve frequently seen pre-commits that rely on something like node/npm blowing up frequently even when you’re not touching (or even know anything about) frontend, so people just disable the pre-commit and push to CI anyway to see if it works there, because who really wants to debug the npm failure of the day if it might be ok on the worker anyway?
                        1. 2

                          I can see how those can be valid concerns in general.

                          They don’t apply to my case, because, as I mentioned, no one I work with really does the “commit often and in small chunks” thing, our projects are all only python, no frontend, and the set of tools we run on CI is not that big.

                          I would say that pushing to see if it works on CI is kind of an antipattern, though, and something that should be fixed somehow. Maybe not with pre-commit, but people shouldn’t feel forced to push to remote and wait a potentially long time just to see if their code works.

                          By the way, on making local validation consistent with CI: this is a big beef I have with most CI systems; it’s basically impossible to reuse the CI setup to do validation locally.

                          1. 2

                            as I mentioned, no one I work with really does the “commit often and in small chunks” thing

                            I often see the same, but that seems to make it worse in my mind. If you’re frequently committing, then at least anything that fails a precommit should be fresh in your mind and likely relevant to the reason you were committing. When users pile up layers of loosely related changes, possibly over days, they will need to context switch back to something they might have done days ago when some unrelated task calls for a commit.

                            I totally agree that not being able to align local and CI build envs is an issue, and that relying on pushing to CI is an antipattern. But pragmatically those are antipatterns that frequently exist in the wild, and that pre-commit hooks need to account for… and if and when you have resolved those issues, you’ve solved much of what people call out pre-commit as a fix to, so I don’t see a whole lot of added value in it.

                            1. 2

                              You know, now that you expanded on it, I think we configured pre-commit as a pre-push hook in the project I used before. So that would solve some of your concerns.

                              The other part is that you can run pre-commit on demand, without a commit or push action, which I remember doing often just to run the lintings and stuff.
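                              Both usages are supported by the pre-commit CLI itself; the hook-type flag is what moves the checks from commit time to push time:

                              ```shell
                              # Run every configured hook on demand, no commit required
                              pre-commit run --all-files
                              # Install the hooks at push time instead of commit time
                              pre-commit install --hook-type pre-push
                              ```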

                              1. 2

                                Yeah, that sounds more palatable… I think we mostly agree on the desired end state, I think I just tend to prefer using explicit call and/or a more real-time filesystem watcher for my local changes.

                                1. 2

                                  Yeah, basically, I don’t care how, just give the same validations as CI, in as easy an invocation as possible.

                  2. 1

                    I share the same feeling regarding pre-commit: every time I install the hook, I uninstall it in rage after it takes whole seconds every time I want to make a temp save point or try to amend a commit I know is imperfect, only to have the commit rejected. I should just acknowledge this tool is not compatible with my way of working. For my workflow, pre-commit is like an annoying peer eavesdropping on me and poking me every time I make a typo that I have already noticed myself when the wrong char appeared on screen.

                  3. 1

                    I’m pretty interested in this approach as well. I think the answer is that most projects should run these sorts of checks both in a hosted, shared, environment, and locally on developer machines regularly. Running all of these locally is absolutely possible, and should be in a team’s pre-commit hook. I tend to just alias black / isort / bandit together and run them regularly while developing if it’s a personal project.

                    How are you executing the pre-commit scripts in your CI environment post-push?

                    1. 3

                      We just set up everything needed for all the pre-commit hooks to run and just invoke their GitHub Action. See an example here: Pretty simple!

                      1. 2

                        Side note: Just discovered vulture from your pre-commit, looks super interesting.

                        This is a neat approach, and yeah, totally agree with you that if your team is all-in on pre-commit this makes total sense. It gives you one place to configure what’s running (as well as the options for those steps). I need to have a deeper look into using it for my personal projects.

                        In the past I’ve seen teams go both ways, especially with things like unit tests in pre-commit. In your example you have your checks in the hook but not the tests. That’s probably what I’d end up with too.

                        There are those occasional times that you want to be able to commit / push something that’s work in progress without executing all of those checks (--no-verify to the rescue), but really that should be few and far between. I’m not actually sure what the arguments against using a pre-commit hook in a professional setting are these days.

                        1. 7

                          I’m not actually sure what the arguments against using a pre-commit hook in a professional setting are these days.

                          Naaa, hooks are just a wrong solution to the right problem. What you want is to ensure that the canonical version of the code (the tip of the main branch) possesses certain properties at all times. The properties are pretty general: unit tests pass, code is formatted, no vulnerable dependencies are used, licenses are in order, etc. The correct place to check for that is when you update the tip of the main branch on the server which keeps the canonical version of the code. So, to incorporate changes from a feature branch (or just a feature commit):

                          • Server which holds the canonical version of code creates a new “merge commit”
                          • Server checks that this new commit possesses all the required properties
                          • Server atomically updates the tip of the master branch to point to this already checked commit.
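                          A hand-rolled sketch of those three steps; real systems (bors, GitHub merge queues) automate exactly this loop. feature and make check are placeholders:

                          ```shell
                          git fetch origin
                          git checkout -B candidate origin/main
                          git merge --no-ff feature          # 1. create the new merge commit
                          make check                         # 2. verify the required properties on it
                          git push origin candidate:main     # 3. atomically advance main to the checked commit
                          ```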

                          pre-commit is a “client-side validation” version of this workflow — advisory validation which doesn’t enforce anything, and runs on the outdated version of the code (main is probably ahead of the base of the feature branch by the time you finish).

                          Practically, pre-commit hooks very much get in the way of people who like to make small changes and who strive to maintain clean git history, because the way you do that is by rewriting many drafts, and pre-commit creates friction proportional to the number of WIP commits, rather than finished commits.

                          Ultimately, you want to enforce properties somewhere between typing things in the editor and getting code in the canonical tree. Moving enforcement point to the left increases friction, moving it to the right increases correctness.

                          1. 3

                            Keeping every single commit clean of trivial mistakes is very useful for automated git bisect debugging. A seriously underrated way of debugging regressions.

                            1. 1

                              It’s possible to either:

                              • bisect only through merge commits
                              • rebase and require checks to pass on every commit, not only the last one.
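                              The second option is built into git: --exec inserts a command after every picked commit, so the whole stack gets validated rather than just the tip (make check is a placeholder for your check suite):

                              ```shell
                              # Abort at the first commit in the branch that fails the checks
                              git rebase --exec "make check" origin/main
                              ```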
                            2. 1

                              I would agree with you in a unitemporal setting. But I think that integration is bitemporal. In particular, whether dependencies are vulnerable is a bitemporal property. This doesn’t negate your point, but it suggests that your “right problem” is a bit too broad, and it’s actually two problems that we’re trying to solve:

                              1. whether the code integrates correctly for the developer’s environment at time of authorship
                              2. whether the code integrates correctly for the user’s environment at the time of installation
                        2. 2

                          I personally write all checks as tests using the standard testing framework of the particular language. So it’s, e.g., cargo test locally, cargo test on CI, and, if you enjoy pre-commit hooks, cargo test in the hook.

                          If some tests are particularly slow, I skip them unless the RUN_SLOW_TESTS env var is set (which it is in CI).
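                          A Python equivalent of that gating, using unittest’s skip decorators (RUN_SLOW_TESTS mirrors the env var above; the test bodies are stand-ins):

                          ```python
                          import os
                          import unittest

                          RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

                          class FastTests(unittest.TestCase):
                              def test_quick(self):
                                  self.assertEqual(sum([1, 2, 3]), 6)

                          class SlowTests(unittest.TestCase):
                              @unittest.skipUnless(RUN_SLOW, "set RUN_SLOW_TESTS=1 to run slow tests")
                              def test_expensive(self):
                                  self.assertTrue(True)  # stands in for a long-running check

                          # Run the suite; without RUN_SLOW_TESTS=1 the slow test is reported as skipped
                          suite = unittest.defaultTestLoader.loadTestsFromTestCase(FastTests)
                          suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(SlowTests))
                          result = unittest.TextTestRunner(verbosity=0).run(suite)
                          ```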

                      1. 12

                        More utilities:

                        I wrote chezmoi, which is now the most popular dotfile manager. Happy to answer any questions about it.

                        1. 2

                          Happy user of chezmoi, I even inject dotfiles in containers with something like: chezmoi archive | docker exec -i xxx tar xf -.

                          I sometimes encounter difficulties with old software (e.g. an older git) rejecting recent options I include in my dotfiles. I solved it with an after script that deletes the unwanted options. Not sure I remember why I didn’t want templates for that.

                          1. 2

                            Happy to hear you’re happy!

                            Injecting dotfiles into containers with chezmoi is tracked in this issue:

                            Spoiler alert: this is planned for chezmoi v3 ;)

                            1. 1

                              Excellent, thank you!

                        1. 1

                          Could you give a few examples of diff vs generated messages? Ideally in repo, not as lobsters reply :)

                          1. 1

                            Late reply but I use a similar project that’s self-hostable:

                            1. 9

                              Every perfect commit should include a link to an issue thread that accompanies that change.

                              But the link will eventually point to a 404, because the org decided to change ticketing platforms; for good or bad reasons, it will happen, and what was documented there will be lost (even if tickets are copied to the new platform, which is not always the case, links will still be dead). A link can be provided for later commenting by other people, yes, but the commit message should be self-sufficient, as it’s the only info source that will last forever.

                              1. 6

                                More often it’ll point to a 403, when a new team member joins and can’t get permission to the project board that nobody remembers being important.

                                I joined a 10 year old project at a megacorp that was in maintenance mode and after 8 months left, still having never found the permissions I needed for even the architecture documentation.

                                1. 4

                                  My last day at Mozilla was in mid-2015. They infamously use Bugzilla for everything. Since then, every company I’ve worked for has used Jira. There was a brief attempt by Airtable to break in, but Jira just absolutely completely owns the market for issue tracking at tech companies, as far as I can tell.

                                  So while there is value in having technical information in the commit message, the larger context can and should be offloaded to the issue tracker via a link to the Jira ticket ID. And the “don’t document it there because it will be thrown away in a migration” argument works just as well as an argument against documenting in commit messages – who says the company’s going to keep the commit history if and when they switch code hosts or version-control systems? So the only logical conclusion is not to document anything, anywhere, in any system, ever, since they all are equally susceptible to being thrown away.

                                  1. 2

                                    who says the company’s going to keep the commit history if and when they switch code hosts or version-control systems?

                                    Code history is certainly the thing best kept, better than ticket history. It’s also the simplest to keep.

                                    1. 1

                                      It’s a good idea to keep all the documentation. But your premise was that companies won’t do that. My premises are 1) companies tend to just buy Jira and stick with it, and 2) once you buy into your premise, there’s no reason to believe any particular form of documentation is more likely to be kept than the others, and so it makes no sense to push for any one form of documentation above the others.

                                  2. 2

                                    Enough people have reported that they have seen orgs with no respect at all for maintaining their institutional memory that I’m going to research ways to address this.

                                    I’m optimistic that the “git notes” mechanism can help here - either by copying issue threads into annotations in the repo itself or at least by adding links to archived issue tracker content.
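                                    git notes can do that today without rewriting history (“PROJ-123” and the note text are made-up examples; note refs must be pushed explicitly):

                                    ```shell
                                    # Attach archival context to an existing commit
                                    git notes add -m "Archived from PROJ-123: original discussion text" HEAD
                                    git log -1 --notes                   # shows the note alongside the commit message
                                    git push origin refs/notes/commits   # notes live in their own ref
                                    ```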

                                    1. 1

                                      I tried some experiments with git notes in this repo:

                                      My conclusion at this point is that a better path would be to mirror the issues into a separate branch in the repo itself:

                                  1. 17

                                    then it can help you make your code more concise and readable.

                                    # Reuse result of "func" without splitting the code into multiple lines
                                    result = [y := func(x), y**2, y**3]

                                    I don’t get it. Is this just to avoid writing one extra line, like this?

                                    y = func(x)
                                    result = [y, y**2, y**3]
                                    1. 12

                                      The article explicitly addresses this objection:

                                      You might say, “I can just add y = func(x) before the list declaration and I don’t need the walrus!”, you can, but that’s one extra, unnecessary line of code and at first glance - without knowing that func(x) is super slow - it might not be clear why the extra y variable needs to exist.

                                      The walrus version makes it clear the y only belongs to that statement, whereas the “one extra line” version pollutes the local scope, and makes your intention less clear. For short functions this won’t matter much, but the argument is sound.

                                      1. 5

                                        with y = func(x):

                                          1. 5
                                            for i in range(3):
                                                pass
                                            print(i)  # prints 2

                                            Local scope gets polluted all the damn time in Python. I’m not saying that’s desirable, but it is part of the language, and that’s a worse argument than most.

                                          y with the walrus, as written in the article, pollutes local scope anyway, by the by, as the result of func(x). No intentionality is lost and you’ve done literally the same damn thing.

                                          Do what you can’t do and you’ve got my attention. (For my earlier post, using it within an any, at least, takes advantage of short-circuiting (though implicit!))
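                                            The scope leak is easy to demonstrate; the walrus target lands in the enclosing scope exactly as a plain assignment would (func replaced by a literal here):

                                            ```python
                                            def build():
                                                # y is bound by the walrus inside the list display...
                                                result = [y := 2, y ** 2, y ** 3]
                                                # ...and is still visible afterwards, same as a plain `y = 2`
                                                return result, y
                                            ```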

                                          1. 1

                                            y with the walrus, as written in the article, pollutes local scope anyway, by the by

                                            You are correct, I should have tested myself before saying that. So yeah, the benefits look much weaker than I had supposed.

                                        2. 10

                                          Yeah, I find the walrus operator redundant in almost every case I’ve seen it used. If I’m feeling generous, I’ll give partial credit for the loop-and-a-half and the short-circuiting any behavior— but it was wrong to add to the language and I’ve banned it from any Python I’m in charge of.

                                          Edit: Also, the accumulator pattern as written is horrendous. Use fold (or, yes, itertools, fine) as McCarthy intended
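                                          For comparison, a representative walrus-style accumulator next to the fold-style itertools version (the article’s exact snippet isn’t quoted here, so this is a reconstruction of the pattern):

                                          ```python
                                          from itertools import accumulate

                                          values = [1, 2, 3, 4]

                                          # Walrus-style running total, the pattern being objected to
                                          total = 0
                                          running = [total := total + v for v in values]

                                          # Fold-style equivalent via itertools
                                          running_alt = list(accumulate(values))
                                          ```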

                                          1. 9

                                            You can make the same argument with only minor modification of the examples against many things that now are considered not just acceptable but idiomatic Python.

                                            For example, you always could just add the extra line to manually close a file handle instead of doing with open(some_file) as file_handle – so should we ban the with statement?

                                            You could always implement coroutines with a bit more manual work via generators and the yield statement – should we ban async and await?

                                            You could always add the extra lines to construct an exception object with more context – should we ban raise from?

                                            Hopefully you see the pattern here. Why is this bit of syntax suddenly such a clear hard line for some people when all the previous bits of syntax weren’t?

                                            1. 12

                                              I once had a coworker that felt that functions were an unnecessary abstraction created by architecture astronauts. Everything people did with functions could be accomplished with GOTO. In languages without a GOTO statement, a “professional” programmer would write a giant while loop containing a series of if statements. Those if statements then compared with a line_number variable to see if the current line of code should be executed. The line_number was incremented at the end of the loop. You could then implement GOTO simply by assigning a new value to line_number. He argued that the resulting code was much more readable than having everything broken up into functions, since you could always see what the code was doing.

                                              1. 3

                                                “If the language won’t let us set the CPU’s IP register, we’ll just make our own in memory!”


                                              2. 1

                                                with is a dedicated statement. It does two things, OK, but only those, no more. You can’t add complexity by embedding it in another complex structure like a list comprehension. The walrus can be put everywhere; it’s vastly different IMHO.

                                            2. 4

                                              I also find that in result = [y, y**2, y**3] it’s much clearer to parse what’s going on at a quick glance, while you need to think more when you come upon the walrus.

                                              Even clearer might be result = [y**1, y**2, y**3].

                                            1. 1

                                              I’m sorry. You used the words ‘efficiency’ and ‘Python’ in the same article.

                                              Python is the first language in some time to emphasize clarity and programmer efficiency solidly over execution time. While one can construct examples where the walrus operator provides more clarity than the alternatives, these examples are all concerned with execution speed.

                                              It’s not that the point is wrong. It’s that the article does not address if it is right or wrong.

                                              1. 2

                                                Is a variable assignment that is also an expression a useful concept? Sure. Is it a good idea to incorporate that concept into Python? I don’t think so.

                                                Some other languages have it, as you could do in C if ((fp = fopen(...)) != NULL) {, because that’s an idiom and consistent with the rest of the language. There’s no problem with that.

                                                In Python, though, everything was designed against this kind of pattern. Stuff should be made explicit; the Zen of Python recommends it, the whole ecosystem tries to follow it, and the language does its share to help keep it that way. i++ doesn’t exist. The walrus operator is an anomaly in the rest of the language and a slap in the face to everything that was built before.

                                                1. -2

                                                  Is a variable assignment that is also an expression a useful concept? Sure. Is it a good idea to incorporate that concept into Python? I don’t think so.

                                                  Hmmm…. One might guess you like loud politics. Shouting where reason fails.

                                              1. 3

                                                Actually, what Microsoft does here is the scalable version of someone learning from open source code and then starting a consulting business

                                                There’s a difference between learning to code from everyone’s code and using that infused knowledge, and copying byte-for-byte someone else’s code.

                                                1. 3

                                                  I’m curious what learning people think they are doing when they type in “TwoSum” and it pastes the answer from Leet Code. Based on what I have seen from some online coding academies, this may actually be someone’s idea of coding education.

                                                1. 4

                                                  Is it comfortable to use the thumb to move all the time? I ask because I have some pain in my thumbs after texting too much on my phone…

                                                  I personally use a vertical mouse, and it changed my life. Used to have chronic wrist inflammations, they’re gone now.

                                                  1. 6

                                                    I use a Kensington Expert trackball for that reason. It was very alien at first, but now I love it.

                                                    1. 4

                                                      Same here, I am addicted to using the ring to scroll. I find it much easier on my wrist, but to be honest I have both a mouse and this guy, which I’ll alternate between during the day.

                                                      1. 3

                                                        Ya, same setup here. I use a regular mouse for gaming since I just can’t get used to using a trackball for that… but use the trackball for everything else. The Kensington’s ring scroll is the bomb!

                                                        1. 1

                                                          I’m looking for a trackball to buy, but I heard bad things about the Kensington’s scroll ring. Can any of you confirm whether it’s easy to scroll accidentally, or if it has any other flaws?

                                                            1. 1

                                                              I don’t think I’ve ever accidentally scrolled the ring.. Maybe with bad posture it’s easier to? But after looking at mine and just now trying to get it to scroll accidentally… I just don’t see an obvious way to do that with how I place my hand on it when in use. 🤷‍♂️

                                                      2. 4

                                                        I got thumb tendinitis from using one. I use a vertical mouse now, super happy.

                                                        1. 1

                                                          Vertical mice make my shoulder seize up something fierce, but I’m really happy with an old CST L-Trac finger trackball. It’s funny how wildly people’s ergonomic needs can vary.

                                                          1. 1

                                                            CST L-Trac here too! I bought one based only on the internets and I wish it was a bit smaller. Definitely something to try out if you can, especially if your hands ain’t super big. I bought another for symmetry, so I don’t end up in a rat race finding something as good but just a bit more fitting.

                                                            And then there was the accessories aspect!

                                                            CST’s business is now owned by someone else who I don’t think has the back/forward-button accessory. I kinda regret not having got those. ISTR checking out what they had and it was lame.

                                                            What I’d really like to see are some specs and community creations for those ports, like horizontal scroll wheels, but I think Linux doesn’t really support that anyway.

                                                        2. 4

                                                          Having used an extensive range of input devices (regular mice, vertical mice, thumb trackballs, finger trackballs, touchpads, drawing tablets, and mouse keys), my thoughts on this are as follows:

                                                          Regular mice are the worst for your health. Vertical mice are a bit better, but not that much. Thumb balls are a nice entry into trackballs, but you’ll develop thumb fatigue and it will suck (thumb fatigue can make you want to rip your thumb off). Finger balls don’t suffer from these issues, but often come in weird shapes and sizes that completely nullify their benefits. The build quality is usually also a mess. Gameball is a good finger trackball (probably the best out there), and even that one has issues. I also had a Ploopy and while OK, mine made a lot of noise and I eventually sold it.

                                                          Touchpads are nice on paper, but in practice I find they have similar problems to regular mice, due to the need for moving your arm around. Drawing tablets in theory could be interesting as you can just tap a corner and the cursor jumps over there. Unfortunately you still need to move your arms/wrist around, and they take up a ton of space.

                                                          Mouse keys are my current approach to the above problems, coupled with trying to rely on pointing devices as little as possible. It’s a bit clunky and takes some getting used to, but so far I hate it the least compared to the alternatives.

                                                          QMK supposedly supports digitizer functionality (= you can have the cursor jump around, instead of having to essentially move it pixel by pixel), but I haven’t gotten it to work reliably thus far. There are also some issues with GNOME sadly.

                                                          Assuming these issues are resolved, and you have a QMK capable keyboard, I think this could be very interesting. In particular you could use a set of hotkeys to move the cursor to a fixed place (e.g. you divide your screen in four areas, and use hotkeys to jump to the center of these areas), then use regular movement from there. Maybe one day this will actually work :)

                                                          1. 1

                                                            you could use a set of hotkeys to move the cursor to a fixed place (e.g. you divide your screen in four areas, and use hotkeys to jump to the center of these areas),

Isn’t that what keynav does? I never managed to get used to it, though; I couldn’t abandon my mouse.
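For reference, keynav supports exactly this quadrant-jumping idea via its grid commands. A sketch of a ~/.keynavrc (bindings here are illustrative, not keynav’s defaults):

```
# Sketch of a ~/.keynavrc; key choices are hypothetical
clear
super+semicolon start, grid 2x2   # overlay a 2x2 grid on the screen
u cell-select 1x1                 # jump to the top-left quadrant
i cell-select 1x2                 # top-right
j cell-select 2x1                 # bottom-left
k cell-select 2x2                 # bottom-right
h cut-left                        # then refine by halving the region
l cut-right
space warp,click 1,end            # move the pointer there and left-click
Escape end
```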

                                                          2. 2

I use an Elecom Deft Pro, where the ball is in the middle of the mouse. I generally use my index and middle fingers to move the ball. For me, it’s more comfortable than a normal mouse or one with the ball on the side (thumb-operated).

                                                            1. 1

Everyone is probably different, but I have a standard trackball mouse (a Logitech, probably an older version of the one in this post) and it’s very comfortable. The main thing is to turn the sensitivity way up. Your thumb is precise, so little movement is needed!

                                                              No good for games, perfect for almost everything else.

                                                              (I have used fancy trackballs that a coworker has. It’s terrible for me, I do not get it at all even when trying for hours on end)

                                                              1. 1

                                                                Anything you overdo is bad for you.

                                                                I swap between a trackpad, a mouse and an M570 every few days.

                                                              1. 1

                                                                My solution to this problem consists of a few steps:

1. Either use a password manager that generates passwords from a “master key”, use SSO for everything, or use multiple password managers with encrypted backups on multiple cloud services
                                                                2. Use strong 2FA (multiple PIN-protected YubiKeys + TOTP) for everything
                                                                  FYI: YubiKeys support 63-digit alphanumeric “PINs”, so there’s no risk with untrusted people accessing them either.
                                                                3. Backup the primary passwords for [1] and the QR codes for [2] on an encrypted USB drive
4. Deposit sets, each consisting of a PIN-protected YubiKey [2] and one of the encrypted USB drives [3], in different, trustworthy places.
                                                                5. Always keep one set on your body.

The only situation in which I could get locked out of all my services is four different places, some of them hundreds of kilometers apart, all being burned/nuked/SWATted at the same time while I’m swimming (the only situation in which I don’t follow rule 5).
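The “generates passwords from a master key” idea in step 1 can be sketched with nothing but Python’s standard library. This is an illustrative toy, not a vetted scheme; the KDF parameters and output length are assumptions:

```python
# Toy sketch: derive a per-site password deterministically from one master key.
# Only the master key then needs backing up; sites never share a password.
import hashlib
import base64

def derive_password(master_key: str, site: str, length: int = 24) -> str:
    raw = hashlib.pbkdf2_hmac(
        "sha256",
        master_key.encode(),
        site.encode(),        # the site name doubles as the salt
        600_000,              # iteration count; tune for your hardware
    )
    # Base85 packs the digest into printable characters; truncate to taste.
    return base64.b85encode(raw).decode()[:length]

print(derive_password("correct horse battery staple", "example.com"))
```

The same master key always reproduces the same per-site password, which is what makes the encrypted-backup step about one secret rather than hundreds.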

                                                                1. 2

                                                                  Yubikeys are waterproof. Unless you swim naked, you could have them with you.

                                                                  1. 1

                                                                    Oh that’s good to know! Do you know how well they handle the salt in seawater? If they handle that well, and I find an equally-waterproof usb drive, that’d be awesome!

                                                                    1. 2

It has a pretty solid rating of IP68:

                                                                      • 6 Dust-tight No ingress of dust; complete protection against contact (dust-tight). A vacuum must be applied. Test duration of up to 8 hours based on airflow.
                                                                      • 8 Immersion, 1 meter (3 ft 3 in) or more depth

                                                                      and their press blog (take with a grain of salt) claims that it survived a 48 meter dive in saltwater.

The only thing the salt could do is corrode the contacts on the plug; the rest is encased in plastic (not just a plastic case like 99% of USB storage devices). Just make sure it’s fully dry before plugging it in.

                                                                      1. 1

I did not test salt water myself, only the washing machine and a swimming pool a few times, and did not notice any problems afterwards.

                                                                  1. 1

I don’t know what the practice is on Lobsters, sorry if this is wrong, but I’ll link to my recent previous comment about git-imerge.

                                                                    1. 23

                                                                      If I had to give two real high-dollar items for making git branching less painful:

                                                                      • Rebase, not merge, your feature branch. Do not squash.
• Rebase your feature branch against master/main/whatever at least every day.

                                                                      I’d also suggest knowing when to cut losses–if you have a zombie branch a few months old, take the parts that are super neat and throw away the rest.
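The first two bullets, played out on a throwaway repo (all branch, file, and author names here are illustrative):

```shell
# Demo: a feature branch rebased onto a main branch that kept moving.
set -e
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm 'base'
git checkout -qb feature
echo feat > feat.txt && git add feat.txt && git commit -qm 'feature work'
git checkout -q main
echo more > more.txt && git add more.txt && git commit -qm 'main moves on'
# The habit that keeps conflicts small: rebase the feature branch often.
git checkout -q feature
git rebase -q main
git log --oneline    # linear: the feature commit now sits on top of main
```

Doing this daily means each rebase only has to reconcile one day of upstream drift instead of months of it.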

                                                                      1. 7

This is pretty much it. GitHub exacerbates this problem because its branch-based pull request workflow is broken. What you really want is a pull request on a cherry-picked range of commits onto another branch. That way you can have commits from your branch flowing into main while you continue to develop along your branch.
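That cherry-picked-range idea can be approximated with plain git today; a hedged sketch on a throwaway repo (all names illustrative):

```shell
# Demo: peel the finished commits off a long-lived branch onto a fresh
# branch cut from main, which is what you'd actually open the PR from.
set -e
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm 'base'
git checkout -qb big-feature
echo one > one.txt && git add one.txt && git commit -qm 'done: part 1'
echo two > two.txt && git add two.txt && git commit -qm 'done: part 2'
echo wip > wip.txt && git add wip.txt && git commit -qm 'wip: not ready'
# Cut a reviewable branch holding only the finished commits.
git checkout -qb review-slice main
git cherry-pick main..big-feature~1   # everything except the WIP commit
git log --oneline
```

Meanwhile development continues on `big-feature`, which gets rebased once `review-slice` lands in main.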

                                                                        1. 2

Indeed, any of these pieces of advice would have solved the problem. Instead, I ended up spending almost a month just trying to untangle the mess.

                                                                          1. 3

                                                                            My condolences, friend. Happens to all of us eventually.

                                                                          2. 2

Some time ago I set up an alias for git cherry-pull <mainline>, which (rebase-style) lets you assign each commit to a new branch, pushes them, then opens the ‘new pull request’ page for each branch.

                                                                            I should dust off the code and publish it.

                                                                            [edit] A colleague pointed out that a script needs to be ~perfect because it obscures the details of the git operations, while doing it longhand keeps them front of mind. Might explain why I rarely use it anymore.

                                                                            1. 1

                                                                              Sounds nifty! Is the interface like rebase interactive? And I’d recommend a new name, because google autocorrupts it to cherry-pick.

                                                                              1. 1

                                                                                I’ve written similar scripts. The two things that I kept hitting are:

                                                                                1. Getting this to play nicely with rebase -i and merging/splitting commits is…not trivial.
2. Automation in PRs is often triggered by the main/master branch, so every PR except your bottom one goes unchecked.
                                                                            2. 3

                                                                              It seems to be fairly unmaintained, but I really like git-imerge for long-lived branches. It does pairwise merges of each of your commits against each of the upstream ones giving you an NxM matrix of all possible merges. It builds this as a frontier from the top-left (shared parent) branch to the bottom-right (final merge). You get to resolve conflicts exactly on the pair of your commit vs theirs that introduced it. You then have the full history for bisecting if the end result doesn’t work. You can then choose one of three ways for it to clean up the resulting history:

                                                                              • The equivalent of a git merge.
                                                                              • The equivalent of a git rebase.
                                                                              • A rebase-with-history, where it gives you a rebase but also sets the parent of each of your commits such that downstream users can still merge from your branch.
                                                                              1. 1

I’ve tried git-imerge recently, in a situation where 2 branches had diverged by hundreds of commits, but many commits were shared, cherry-picked from one side to the other.

The performance was catastrophic. After half an hour of it eating my CPU with absolutely no progress information, I investigated externally and saw that imerge was painstakingly creating one tag per commit; it had created hundreds of tags and still wasn’t finished. Not knowing how much longer that would take, what its next step would be, or how long that step would take, I concluded it simply wasn’t designed for my case, so I stopped it, cleaned up the mess it had created, and uninstalled it.

                                                                                1. 2

The worst case for me was merging 8 months of LLVM changes into a tree that had had active development over that time. Several thousand upstream commits, over a hundred local ones. It took about two weeks of CPU time but, critically, only about half an hour of my time. Fixing the conflicts was incredibly easy because it showed me the precise pair of commits where I and upstream had modified the same things. I’d done a similar merge previously without the aid of git-imerge and it took well over a week of my time.

                                                                                  In general, if I can trade my time for CPU time, I’m happy. I can trivially buy more CPU time, I can’t buy more of my time.

                                                                              2. 2

                                                                                I’d recommend squashing before the rebase simply so there’s only one commit you have to resolve conflicts on.

                                                                                But yes, rebase often. This is the way.

                                                                              1. 2

Emoji isn’t a language, it’s a writing system. And as much as I despise it, the fact that it is at least to some extent universally comprehensible regardless of one’s language background might well mean it’s here to stay. I wouldn’t be surprised if it, or some evolution of it, became the dominant writing system in 100 years or so.

                                                                                1. 2

                                                                                  universally comprehensible regardless of one’s language

It’s not as comprehensible as you think. It’s culturally influenced, and even age-influenced (like vocabulary). Examples are the infamous “face with tears of joy”, which is understood as a sad face by many elderly people; or the “skull” emoji, which the youth understand as “dead from laughing” while you just read “death”, because they frown upon “face with tears of joy”; or the “eggplant”, which means a vegetable to many people but a penis to others.

                                                                                1. 1

                                                                                  cdg => cd up the filesystem until you find the directory that contains a .git dir.

                                                                                  This one’s actually an eshell function:

                                                                                  (defun eshell/cdg ()
                                                                                    "Change directory to the project's root."
                                                                                    (eshell/cd (locate-dominating-file default-directory ".git")))
                                                                                  1. 3

                                                                                    or in pure shell: cd "$(git rev-parse --show-toplevel)"

                                                                                  1. 2

                                                                                    zsh only:

                                                                                    alias -g PG="|grep"
                                                                                    alias -g PL="|less"
                                                                                    # foo PL instead of foo | less

I’ve used them daily for years

                                                                                    pvrun: run any command, but mostly cp/mv/tar, wrapped by pv to view I/O progress

                                                                                    Not exactly shell but still use them daily, in my .tigrc:

                                                                                    # u to fixup selected commit
                                                                                    bind main u !git commit --fixup=%(commit)
                                                                                    # r to rebase to selected commit
                                                                                    bind main r !git rebase -i %(commit)~
                                                                                    # P to push-create a new branch pointing at selected commit
                                                                                    bind main P !sh -c "git push origin %(commit):refs/heads/$(printf 'branch name? ' >&2; read reply; echo $reply) --force"
                                                                                    1. 1

                                                                                      I use vis sometimes when dealing with binary/corrupted files in my terminal text editor, which makes moving around hard when some characters aren’t visible.

                                                                                      A modern version of this utility that preserves unicode symbols would be handy.

                                                                                      1. 1

This is also useful for viewing spaces vs tabs, or the type of newlines. Undisguised self-advertising: I made 2 related programs:

• vhd, the Visual HexDump, which is like hexdump but respects newlines (so not meant for fully binary files)
                                                                                        • univisible, can compose/decompose unicode characters (NFKC/NFKD), display verbosely every code point
                                                                                      1. 13

                                                                                        The two things that helped me understand database basics were 1) this site and 2) implementing indexes in a simple in-memory database. Also, just try inserting 1,000; 100,000; and 10M rows into any database and try running some queries with/without indexes. Observe disk usage as well. Extra points for benchmarking the inserts themselves with/without indexes. You get an understanding pretty quickly.
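That experiment is cheap to run; here is a minimal sketch with Python’s built-in sqlite3 (table layout, row count, and index name are arbitrary choices):

```python
# Time the same lookup before and after adding an index.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.executemany(
    "INSERT INTO users VALUES (?, ?)",
    ((i, f"user{i}") for i in range(100_000)),
)

def lookup_time() -> float:
    t0 = time.perf_counter()
    con.execute("SELECT id FROM users WHERE name = ?", ("user99999",)).fetchone()
    return time.perf_counter() - t0

before = lookup_time()                            # full table scan
con.execute("CREATE INDEX idx_name ON users (name)")
after = lookup_time()                             # B-tree index lookup
print(f"scan: {before:.6f}s  indexed: {after:.6f}s")
```

Rerun it with 1,000 and 10M rows, and benchmark `executemany` itself with the index in place, to see the write-side cost indexes add.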

                                                                                        1. 2

                                                                                          A good intro to basic indexes is SQLite’s query planner docs: