1. 33
    1. 58

      The amount of time I spent during my career helping people with Git or SQL issues is mind-boggling. IMHO, this is not the fault of Git or SQL; it’s the fault of our industry.

      If a carpenter didn’t know how to use a sawbench or a nailer, we would consider them a bad carpenter. But if a developer doesn’t want to learn the standard tool, and just wants to get their work done without learning how to master anything, this is somehow acceptable…

      1. 30

        I dunno, it seems like a bit of both … Both git and SQL could be designed in a way that retains all their power but with a user interface that makes more sense, and is easier to learn.

        And easier to remember – I definitely find myself forgetting SQL stuff I learned, simply because I use 10 other tools 10x more than SQL. git has less of that problem because I just write down all the commands in shell scripts, and mostly forget them.

        Good design critique of SQL: https://www.scattered-thoughts.net/writing/against-sql/

        Also software is pretty young, so old software systems often have “scars” of things we wouldn’t do today. We could make them better with modern techniques (at least in theory). SQL is powerful but also has a lot of mistakes and bad evolution.

        git has a good core, but also a lot of mistakes. Unix shell too.

        1. 12

          Just to clarify a bit, if someone said

          “It’s the fault of our industry that programmers are resistant to engaging with the tech they work with”

          “It’s the fault of our industry that common tools are underinvested in, and we collectively settled on git”

          I would agree somewhat with both things. (Although I think there are actually many possible outcomes drastically worse than git; overall I’m not upset with it compared to e.g. SQL)


          I think the problem is mainly that software is just a hodgepodge of different crap at every job, which keeps changing.

          So people are reluctant to invest any time in one thing. They do the minimum to fix stuff (kinda) and move on. That’s pretty much rational.

          There are some things worth investing time in, but they don’t know what those things are. SQL is probably one of those, but it’s often covered up with many leaky abstractions, which are largely due to its inherent flaws (see the Against SQL post)

          Kubernetes is another conundrum … do I actually invest time in this, or do I just copy and paste some YAML and hope it goes away? (Ironically k8s itself seems to encourage sloppy and superficial usage; maybe part of its appeal is that it doesn’t demand any kind of technical excellence)

          My opinion is that Kubernetes is a flawed implementation of a flawed architecture, and it’s worse than its predecessors

          https://lobste.rs/s/yovh5e/they_re_rebuilding_death_star_complexity#c_mx7tff

          People may argue the “industry is settling on it”, but either way, it’s definitely not because it’s impossible to do better, or because those are state-of-the-art ideas. There were/are many competing systems, like those from CoreOS, Mesos, HashiCorp, and so forth, but we got the one that’s free with a lot of hype.

        2. 2

          Good design critique of SQL: https://www.scattered-thoughts.net/writing/against-sql/

          SQL has no special syntax for this:

          select foo.id, quux.value 
          from foo, bar, quux 
          where foo.bar_id = bar.id and bar.quux_id = quux.id
          

          Well, if you name your columns properly, you can do

          select id, value 
          from foo
          join bar using (bar_id)
          join quux using (quux_id)
          

          which I think counts as special syntax. Obviously it would be better to be able to use arbitrary column names…
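
          For comparison, the fully explicit ON form (a sketch against the same tables) does handle arbitrary column names, at the cost of exactly the verbosity the article complains about:

          select foo.id, quux.value
          from foo
          join bar on bar.id = foo.bar_id      -- column names need not match
          join quux on quux.id = bar.quux_id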

          I agree with basically the entire rest of the article, however. I really hope some of these SQL-alternatives get beyond compiling back down to SQL and start having first-class support.

      2. 9

        We are living through the worst era of software development, IMO. Every investor knows that the first company to nail a market will get a nearly impenetrable 50% of the market, and the next 5 companies will share the next 49%. So opportunity costs are all that matter. This has always been true to some extent.

        However, in the 1970-2000 era we were still figuring out through trial and error how to forecast software development and how to invest in it, and we were limited by firmer hardware timelines. So there was enough time to do things right, or to go back and fix hacks while waiting for another part of the org to catch up.

        Since the dot-com bubble burst in 2000-2001, businesses and investors have figured out how to match investment to forecasting and to scale software development nearly infinitely. That is to say, while a given project might not benefit from a staffing increase, the big software companies have rarely found that hiring another developer would not lead to a marginal profit increase. The result is booming salaries, but also an insistence that no one ever take more time than absolutely necessary for any given task, because the way to back those high salaries is to make sure the company wins enough of these races to market.

        Personally, I’m looking forward to when the frontiers shrink and we focus on carrying costs in addition to opportunity costs, even if that means lower salaries.

      3. 9

        Heck, next time I onboard a developer I’m planning to start the process with “well, can you touch type?”… I think that’s part of the reason people hate being asked to live code in interviews. Many can’t even use a keyboard properly.

        1. 14

          As a developer with cerebral palsy, I hate that question. I never look at the keyboard when I type, but I’ve been repeatedly told that I don’t “touch type” because I don’t put the “correct” finger on each key. The fact that the bones in my arm are fused and my hand won’t move that way doesn’t matter - I was told that coding is like carpentry and manual dexterity is more important than intellectual curiosity.

          Thankfully, my present employer never even asked that question.

          1. 15

            I never look at the keyboard when I type

              Then you’re touch typing, and whoever pretends otherwise to your face is being kind of an asshole. Maybe you could improve your technique, or use a specially crafted layout or something (Dvorak himself designed one-handed layouts, for instance), but in my experience the most important part of touch typing is removing the need to nod constantly.

              Now if your condition prevents you from typing faster than, say, 30 words per minute, I’d say that would count as a slight disability, and discriminating against you on that basis would be wrong, possibly even illegal. That being said, being able to type fast enough remains relevant if the question was a prelude to live coding: we ought to give more slack to people who can’t type fast, else we’d just be selecting for typing speed, which probably wasn’t the goal to begin with.

            1. 2

              in my experience the most important part of touch typing is removing the need to nod constantly.

              Same. I know HOW to touch type with f and j and home row. I choose not to, on some vague perception of avoiding carpal tunnel by just having my hands somewhere on the keyboard instead of a rigid location.

              I get about 90wpm without looking at a keyboard at all. Though if I’m not paying attention at all, I can type near-total gibberish.

              Rgus gaooebs wgeb U tgubj U;n guttubg sine jets vbyut U;n actyakky guttubg ybrekated keys but then eventually I hit a key near the edge of the keyboard and my hands re-align to the keyboard correctly.

              I do not relate to my colleagues who will be spartan to the point of incomprehensibility because they can’t type fast enough to get their questions into a group chat. Some people have a hard time communicating remotely, and I blame the inability to type fast enough.

          2. 10

            Wow, I’m so sorry to hear that. Touch typing is a very important and valuable skill at this point in time, but “using the right finger” hasn’t been relevant in my whole lifetime.

          3. 2

            I myself don’t like to stress my pinky fingers, so I don’t type as the prescriptivists would have it. What I’m aiming at is the ability to type smoothly without extra cognitive overhead. It is much easier to code if you can type as you think, rather than coming up with an idea and then going “sigh, now I have to type that in”. I’m well aware that some people with disabilities cannot achieve that, but we want the typing part of coding (and especially debugging) to be as low-overhead as possible.

      4. 9

        If nail guns fired forwards 90% of the time, but in some situations fired only when you didn’t press the trigger, or fired at 90 degrees to the angle you were aiming, then carpenters wouldn’t adopt them. In contrast, developers happily adopt such tools and blame the user when they take someone’s hand off.

        1. 8

          When is git firing backward?

          Git is very simple: its data model is three entities (blobs, trees, and references) plus a CLI to create/read/update/delete these entities locally and synchronize them with a remote server. In my analogy, git was the nail gun and SQL was the table saw.
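
          A quick way to see that model for yourself is with the plumbing commands (a sketch; run it inside any repository):

          git cat-file -p HEAD            # the commit object HEAD points to
          git cat-file -p 'HEAD^{tree}'   # the tree (snapshot) that commit references
          git show-ref                    # every reference and the commit it targets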

          Very often, when I help fellow developers with their git problems, the first issue is that they don’t even know what they are trying to achieve themselves. They just want it “to work.” So even if we redesigned git from the ground up to be as user-friendly as possible, we would still be facing the same issue: software engineers don’t want to learn the tool, regardless of how simple the tool is.

          For example, how many times have you seen a well-documented, well-designed API misused because its user was just in a hurry? Personally, I’ve seen it hundreds of times over my career.

          1. 9

            Git-the-concept is indeed as simple as you described, but git-the-CLI is undeniably far from it.

            1. 4

              The core paradox with git is that it’s simultaneously a thin wrapper around its data model and plumbing, yet porcelain operations rarely intuitively map to operations on the model.

          2. 9

            When is git firing backward?

            Every year when we onboard new juniors there are a ton of things I see them struggle with, despite their best intentions. Some of my favourites:

            • The path syntax for git add differs depending on whether the path is a submodule or not. git add some/path and git add some/path/ are equivalent, unless some is a submodule, in which case git add some/path adds the submodule but git add some/path/ adds all the files inside the submodule to your repository (which is something that shouldn’t even be allowed IMHO, but anyway…)
            • Pattern matching for git add is offloaded to the shell, so git add . doesn’t actually mean “stage all changes in the current directory”; it means “stage all changes to every existing entry in the current directory”. If an entry has been deleted, it won’t be staged. Forget juniors, I screw up every other refactoring commit this way just because I forget. (See the sketch after this list.)
            • The standard way to resolve conflicts on a WIP branch (e.g. you have to rebase it because a dependency of something you work on has changed) is via the stash: you stash your changes, rebase on latest, and fix the ensuing merge conflicts. If you forget to do git rebase --continue at this point and pop your stash back right away, you get to meet git fsck.
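
            For what it’s worth, a sketch of the less surprising invocations (these are long-standing flags; on modern Git 2.x, plain git add . stages deletions too):

            git add -A   # stage additions, modifications, and deletions everywhere
            git add -u   # stage modifications and deletions to already-tracked files only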

            Y’all like crafts analogies, take it from someone who was an electrical engineer at one point: if a salesperson showed me a soldering iron that works like this, they’d either have to set up an embarrassing appointment with a proctologist or they’d unlock a new fetish but either way, I’d be shoving it up their butts.

            1. 2

              Y’all like crafts analogies, take it from someone who was an electrical engineer at one point: if a salesperson showed me a soldering iron that works like this, they’d either have to set up an embarrassing appointment with a proctologist or they’d unlock a new fetish but either way, I’d be shoving it up their butts.

              Then again, I’ve never seen a soldering iron that claims to rework a board where someone else came in and soldered different components to the pads you were working on while you weren’t looking… and might reasonably expect that one that did was significantly more complex than one that didn’t.

              1. 2

                Then again, I’ve never seen a soldering iron that claims to rework a board where someone else came in and soldered different components to the pads you were working on while you weren’t looking

                There’s no direct analogy for that, but the first two points above don’t have anything to do with advanced hoodoo that’s in any way analogous to board reworking, they’re very basic interface-level problems.

                1. 0

                  Counterpoint: the git add issues are not great, but they are exacerbated by your sloppiness in choosing files. I never run into this because I use git add -p or I select specific filenames.

                  Putting care into writing our commit histories is at least as important as the care we put into writing code.

                  1. 3

                    Please don’t generalise. “Sloppiness in choosing files” doesn’t describe every scenario in which one might want to do git add ., git add *, or git add on anything that isn’t a specific file path. Lots of software would be a lot better if the knee-jerk reaction to these things were closer to “huh, what didn’t I think of?” instead of “what is that n00b thinking?”.

                    Just two relevant examples:

                    I once worked on a codebase with a very neat plugin architecture. Refactorings sometimes resulted in files appearing or disappearing under plugins/plugin-being-refactored/ and almost never in changes anywhere else. git add plugins/plugin-being-refactored/ was a very reasonable thing to write under these circumstances, but it rarely had reasonable results.

                    In another project, we got a big repo with a whole lot of graphics assets made by artists, with watercolor, on paper. That stuff gets converted to a computerised image, and then automatically turned into binary assets that a multimedia engine can gobble up and display. Because we want to allow everyone to do builds on their own machines, but the pipe between scanned images and binary assets is extremely slow (think hours on a beefy workstation), and after trying several other solutions (yes, including automated builds on beefy build servers), the binary assets are stored in the repo, along with the infrequently-updated master copies from which they come. It’s an uneasy compromise, but the price in git purity is more than offset by the gains in functionality. There are a few hundred of these assets. git add assets/gfx/* after they’re regenerated doesn’t do the obvious thing, manually selecting specific filenames is not a reasonable thing to expect someone to do, and it’s not exactly something easily automatable.

                    Neither of these is sci-fi version control system behaviour; it’s something that the likes of, uh, it hurts just to write its name, ClearCase, could handle.

                    You may disagree with either of these two approaches, and that’s fine, it took a while to get me on board with the second one and I only gave in after I stubbornly tried two “pure” approaches and everyone hated me for it.

                    But then if git add isn’t supposed to be used except as git add -p or when specifying a particular file, why even allow running it on a directory in the first place? If the point is to never run git add . except when initialising a new repository, the obvious conclusion is that the functionality of git add . should just be a flag for git init, rather than something that can work right only once.

                    1. 1

                      Respectfully, I disagree. If you have a number of files and you don’t want to stage them all for commit, then a glob is in most cases just the wrong tool. If you want to argue that git add . makes it too easy to choose the wrong tool, I’ll agree with you there. There’s nothing wrong with breaking out one of the many git GUIs out there to sift through the changes if there are that many.

                      It’s worth noting that I’ve seen multiple people complain that git’s concept of staging is too complex and it should just do git add . for everything! I disagree with this and appreciate being selective in writing my commits.

                      I do seriously believe that one should put as much care into crafting their git history as their code, unless nobody else will ever see it (and even then, future-you is not going to have much more context than someone else).

                      In another project, we got a big repo with a whole lot of graphics assets made by artists, with watercolor, on paper. That stuff gets converted to a computerised image, and then automatically turned into binary assets that a multimedia engine can gobble up and display. Because we want to allow everyone to do builds on their own machines, but the pipe between scanned images and binary assets is extremely slow (think hours on a beefy workstation), and after trying several other solutions (yes, including automated builds on beefy build servers), the binary assets are stored in the repo, along with the infrequently-updated master copies from which they come.

                      git was absolutely not designed for this use-case. If you’re dealing with large binary files, yeah it’s going to suck. No argument. I believe this is why Perforce is more common than git in game development.

                      There are a few hundred of these assets. git add assets/gfx/* after they’re regenerated doesn’t do the obvious thing, manually selecting specific filenames is not a reasonable thing to expect someone to do, and it’s not exactly something easily automatable.

                      See, this would be a situation where I would expect globbing to work perfectly. Are you generating intermediate files that don’t get committed? This sounds like it needs a Makefile or similar build system.

                      But then if git add isn’t supposed to be used except as git add -p or when specifying a particular file, why even allow running it on a directory in the first place?

                      Because whether or not it’s reasonable to do that depends on context. No approach to this will work for everyone. I would like to see more interactivity, I’ve recently taken to writing some personal commands that integrate FZF. I’ve also been leaning heavily on Fugitive lately.

                      If git add -p and specific filename selection doesn’t get you 90% of the way there, then I think your workflow is simply not well-suited for git. That’s not necessarily a failure of git. Perhaps you need to invest in some personal tooling to fill in the gap, or choose another VCS. There’s no reason why it needs to be everything to everyone.

                      1. 1

                        I just tried this again and it looks like this actually got fixed at some point!

                        The use cases I mentioned may just not be the best fit for git, sure, but I wasn’t trying to describe cases peculiar to those projects, just cases where the workarounds were difficult. The unwanted behaviour was very easy to trigger in normal interactive use:

                        $ git status
                        On branch master
                        Your branch is up to date with 'origin/master'.
                        
                        Changes not staged for commit:
                          (use "git add/rm <file>..." to update what will be committed)
                          (use "git restore <file>..." to discard changes in working directory)
                                deleted:    src/genhash.c
                                modified:   src/modgen.c
                        
                        Untracked files:
                        ...
                        

                        it would’ve been a reasonable expectation for git add src/ to add both the deleted genhash.c and the modified modgen.c, but at some point in the past it only added the latter (assuming this wasn’t some weird shell chomping feature, and it wasn’t actually bash or whatever that I was holding wrong…).

                        You may like the git add -p workflow, and I do too, when I need to make a series of commits out of a larger changeset. But if I know that all the files I changed are under src/, and that I want all the changes in a single commit (because it’s the right way to craft that commit, not because I’m sloppy), then I’m gonna do git add src/ rather than git add -p every single file and every chunk.

            2. 1

              The git add things are certainly rough edges, but I can’t say I’ve ever run into them. 95% of the time, I use git add -p. The only time I do git add . is when I have a completely new repo. And TBH, most of the time I have stuff lying around the root directory that I don’t want checked in anyway, so I just add files individually.

              If you forget to do git rebase --continue at this point and pop your stash back right away, you get to meet git fsck.

              In my experience it says there was a merge conflict and leaves the existing stash entry. So you just do git restore and continue on.

      5. 7

        If a carpenter didn’t know how to use a sawbench or a nailer, we would consider them a bad carpenter.

        I absolutely agree with your general point, but the metaphor is a little problematic when it comes to SQL. Unfortunately for software developers, there are 20 companies that make some form of a “nail gun” and they all work pretty differently, are appropriate for different situations, involve different trade-offs, and some of them are expensive so they aren’t taught in schools. To make matters worse, if you use some of them the way you would use others, you’ll shoot yourself in the foot.

        1. 9

          I think the problem is not necessarily “not-knowing”, but perhaps more a reluctance to engage with things.

          I wouldn’t expect somebody to know PostgreSQL specifics because there’s a solid chance they might not have interacted with PostgreSQL yet in their careers. However I do expect a curiosity around the tools in use in any given team.

          I’ve not personally worked anywhere where database management is outside of the remit of the team who primarily uses the database - even when we’ve had DBAs. As a result, even if an ORM is in use, to perform our jobs well, the teams I’ve worked in have always needed to develop database specific knowledge.

          Additionally, most of the important principles are transferable. For example, using the query planner to estimate the cost of a query before landing it in the code, or understanding what an ORM is generating so we don’t inadvertently ship something with odd query patterns. These things get you 80% of the way to sensible database usage, IMO.
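
          As a sketch of that first principle (PostgreSQL syntax; the table and query are hypothetical):

          -- EXPLAIN ANALYZE runs the query and reports the actual plan and timings,
          -- showing whether this hits an index or falls back to a sequential scan
          EXPLAIN ANALYZE
          SELECT id, total
          FROM orders
          WHERE customer_id = 42;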

          I have seen, at times, people express that they shouldn’t have to know these things. I think that attitude really complicates the work and social dynamics of teams.

        2. 2

          But carpenters aren’t expected to know every kind of ‘nail gun’; they are mostly expected to have a broad knowledge of the principles and a deep knowledge of one or two specific types which form part of their tool belt. Of course they should have the willingness to learn more if a specialized job calls for it, but usually they can get by with the tools in their toolkit.

      6. 6

        I agree that we should have higher expectations for people learning the tools of the craft, but there are also hobbyist wood workers in this world who can make things without knowing all their tools. A nail gun is a huge improvement on a hammer, so why shouldn’t we strive to make a better version control? We have the capability to build powerful tools that allow both power users and hobbyists to contribute code.

        I’ve been using git for over 10 years myself. I teach git to semi technical university students and I’ll admit I dread trying to show them some of the things mentioned in this post.

        1. 12

          hobbyist

          You hit the nail on the head with that.

        2. 4

          make a better version control

          Here’s to hoping some group takes up the mantle to provide a Pijul web client that uses simple technologies à la cgit & SourceHut. There’s a lot of things the VCS tool does right, but I’m skeptical of the web rewrite for “the edge”—along with how easy it would be to self-host and how truly decoupled from Cloudflare’s platform it will be in practice.

          1. 1

            Pijul already has a comparable system to cgit or SourceHut. It is called Nest.

            1. 3

              I believe the parent comment was referring to the “edge” Nest rewrite using Cloudflare Workers and TypeScript: https://pijul.org/posts/2023-05-23-nest-a-new-hope/

              1. 3

                Right. Well you can still run the new version of Nest on one of the host-your-own cloudflare workers runtimes, which seems easier than setting up the 3-4 services you need for each sourcehut component, but whatever.

                1. 4

                  Easier compared to what? If you’re self-hosting, that’s probably a lot more difficult to set up and administer. I believe there’s a NixOS module for SourceHut that makes it as easy as enable = true and some minor config to get running. I’m skeptical of how this whole edge thing is going to work out in the long run. Businesses are competing in incompatible ways, and it’s all “free” for now til it isn’t (see Heroku).

                  However, there should be nothing stopping some entity from writing a different front-end for the system.

            2. [Comment removed by author]

        3. 3

          a hobbyist […zip…] a nail gun

          Sure, it’s gonna help them. But they better learn how to use it before they start, or are we going to blame Bosch (or whoever made the nail gun) when the poor hobbyist nails their foot to the floor accidentally?

          1. 11

            If git had half the ergonomics and safety features of a hobbyist nail gun, there’d be a lot less complaining about it.

          2. 5

            If a nail gun was as complicated to use and reason with as git is, maybe we would.

      7. 2

        I know plenty of good carpenters that are missing fingers.

      8. 1

        I really dislike SQL’s failure to be eminently obvious to people who learn about data structures. This is likely super unpopular, but I’m convinced that a version of SQL that doesn’t have query optimization and just says “write out the query plan” (which is a tree structure! People understand trees! They understand the concept of indices) would be much easier for developers to deal with.

        Data analysts are like… I understand the theory, but I’ve never in practice seen firsthand a situation where companies don’t end up having programmers write the SQL. I know these people exist! I’ve just never seen it myself.

        Tho “just write the query plan”… might end up totally messing up portability, so this is tricky. In practice loads of people have implementation-specific code for table creation, and sometimes for querying, but this model would probably make things tougher on that front.

      9. 1

        That’s not entirely fair, I think. I invested heavily in git, but 99% of my day job is just: make a new branch, commit, push. And I just forget the things that I don’t use.

        There are still a gazillion things I don’t know how to fix. I have probably fixed them before, but it required multiple arcane incantations, and there’s no way I remember those from a year ago. In the end I usually manage, but it’s not exactly a smooth ride.

    2. 26

      I’m sorry… This is gonna sound like I am teasing you or that I’m mocking you, but actually I’m really being frank, and I’m gonna give you my advice. It’s just not gonna be great.

      Or you could say “check out the previous commit and force push it”, answering their question. I don’t like this blog post. It seems to be suggesting all our tooling should be geared towards what “humans think about” instead of what engineers need to do. Humans think “I want to build a table”, not “I have to use tools to transform wood into components that assemble into a table”, but they have to do the latter to achieve the end goal. It’s IKEA vs real work.
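
      In commands, that answer is roughly the following (a sketch; rewriting a shared branch is destructive for anyone who already pulled it):

      git reset --hard HEAD~1        # point the local branch at the previous commit
      git push --force-with-lease    # rewrite the remote branch; safer than plain --force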

      1. 10

        The tools we build need to be geared towards what the users think about.

        Engineers should be familiar with the mindsets their tools force upon them. That is perhaps the true meaning of Engelbart’s violin: we adapt to our tools instead of the other way around.

        1. 3

          and when someone else already pulled that commit that you just removed…

          1. 10

            Why not simply use the command that was designed for this? Do a git revert, and then a git push. Works for the whole team.
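
            Something like this (a sketch; the SHA is a placeholder):

            git revert <sha-of-bad-commit>   # creates a new commit that undoes the old one
            git push                         # history only grows; teammates just pull as usual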

            This is a nice example of the issue outlined in the post. Only there really is no way you can dumb down git so far that you can simply forget its distributed nature. The only wisdom needed here is that you always need to add stuff to the chain, never remove. This principle, and the consequences of not following it, should really be part of your knowledge if you want to be a software developer.

            1. 2

              I think this depends on how you read the scenario in the article - I read it as “I just pushed something that shouldn’t exist in the git history”. Like, I’ve been in situations where someone’s pushed a hard to rotate credential, and so you rewrite history to get rid of it, reducing damage while you work to rotate it.

              1. 4

                a hard to rotate credential

                Isn’t this the real problem? Rather than blaming git for being a distributed version control system, how about solving the credential rotation issue?

          2. 1

            They can still use it to create a new commit if they find a need for the content. It is really only a problem if they want to communicate about it with someone who has not pulled it. IME that is extremely rare, because the way to communicate about a commit is to communicate the actual commit.

      2. 2

        It feels a bit like the author, when mentoring less experienced devs, assumes they can’t grasp the more complex aspects of the tool because it doesn’t fully click the first time.

        Over the past three decades, as I’ve learned from all sorts of folks and done my best to pass on my knowledge of all sorts of things, from operating systems to networks to all sorts of software development tools, I’ve often had to ask for help on the same things multiple times, especially if I don’t use a particular mechanism often.

        For the past few years, I’ve worked as a build engineer and part of my team’s job has been to help come up with a workflow for a large team of engineers that makes sense. Sometimes we intervene when there are issues with complex merges, or a combination of changes that work on their own, but not together.

        Most people can also sort things out on their own given some time. You don’t have to back out a change and do a force push - I would, because it makes the history cleaner, but there’s absolutely nothing wrong with a revert commit, which is extremely straightforward to create.

    3. 15

      It’s not like our field hasn’t tried simple systems before. Before distributed version control systems, we had centralized version control systems, which, to be fair, were already pretty good. Before that we had version control systems where a commit could be any combination of versions and files. And before that we had version control systems that just kept track of an individual file’s version.

      The history of version control is the search for the simplest thing that doesn’t create a greater mess somewhere down the road. This doesn’t have to mean that git is the final station. But I wouldn’t be surprised if its successor were even more complicated.

    4. 13

      And now for the first time I’m realizing that they have never even considered that there’s such a thing (remote and local branches), because why would they?

      This might be a bit dismissive but I can think of at least one reason someone would consider remote and local branches when working with a distributed VCS.

    5. 9

      Some of the comments here are a bit odd to me… “Git has bad UX” doesn’t imply “DVCS is bad”. Git’s underlying model isn’t awful, and you have some degree of inherent complexity from being a DVCS, but it’s also just bad UX. Commands are super overloaded, default behaviors can be odd, I’ve been using it for like 10 years and I still fuck up rebases once in a while… There are definitely ways a successor could improve the basics without ditching the entire model!

    6. 8

      Next to agreeing with the notion that these can just be learned I think there’s another problem.

      People are scared of using other tools. If you want to use something centralized, go for SVN; if you want something that might match your mindset better and has everything built in, use Fossil; if you don’t want to use a relational database, use one of the many alternatives.

      Because I think not doing so leads to a seriously bad thing, which is putting huge amounts of confusing abstractions on top. The world doesn’t need more ORMs, nor more Git GUIs where merge has a completely different meaning than git merge.

      I’m sure there are many good ones, but the concept of layering abstractions so as not to learn what you could and should learn is something that causes a lot of bad software/projects/products to be out there.

      This also goes into topics like security, or other annoying discussions. Whenever you have some technical and interesting discussion on containers, deployment, systems, Kubernetes, etc. online, you are bound to have someone creep in and basically argue that their lack of knowledge is proof that a system is bad. I’m usually (not always) on the contra side of these things, I think often because I have enough experience to have seen them fail. But any comment drowns in “but it’s hard to learn” comments. The other side of that is “it’s best practice”.

      And here, I think, lies the problem with Git. There are a lot of people who feel like Git is somehow best practice, or somehow the best or only serious way to do things, yet they hop to whatever the trendiest tool is and never even understand the distributed part, nor why it’s even called a Pull Request on GitHub. Oh, and conflating Git and GitHub is another topic.

      That said, Git (just like PostgreSQL) has amazing official documentation: high quality and up to date. Please make use of it.

      Or like I said before. Maybe see if there’s a better solution for you than Git.

      1. 4

        For sure, don’t use Git. That’s great advice that people often forget is an option, even when they’re the tech lead. There’s a reason successful large software development businesses still pay for Perforce.

        1. 1

          What is that reason, exactly? The main argument I’ve seen is that Perforce or Subversion can act as a system of record, but any centralized repository can do that, including several git-oriented workflows.

          (Also, which large firm are you thinking of, exactly? I think of Google, which developed Android using git instead of their monolithic Perforce installation.)

          1. 2

            I don’t know the reason, I’ve not used it. Companies that came to mind were Valve and Ubisoft who are explicitly known to use Perforce.

            1. 4

              I have read (but don’t have direct experience) that Perforce is more common in some industries (such as game development) because it handles tracking binary files/assets better than git does.

              1. 3

                In particular, p4 can ingest large files without choking, and it has a locking mechanism so that an artist can take ownership of a file while editing it - necessary because there is no merge algorithm so optimistic concurrency doesn’t work.

          2. [Comment removed by author]

    7. 6

      How do I delete a branch?

      That looks suspiciously like a wrong question, one that can only be asked because the user’s perception of Git is so completely out of whack.

      In Subversion this question would make more obvious sense: branches are basically copies of the trunk. Creating a branch more or less means copying the trunk into a branches/ subfolder and committing that. To delete that branch, well, you just delete that subfolder and commit the deletion.

      In Git, branches are little more than pointers to commits, so deleting a branch means deleting the pointer. Maybe the corresponding commits will become unreachable in the process, but otherwise deleting the branch won’t delete the code itself (and if you rendered a commit you didn’t want to lose unreachable, there’s always git reflog). And if you know that much about branches (every regular Git user should, no exceptions, no excuses) then you don’t need to ask how to delete a branch. You already typed man git-branch and got your answer right there, written in a language you can understand.
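
      In commands (a sketch, with a hypothetical branch name):

      git branch -d topic   # delete the pointer; refuses if the commits are unmerged
      git branch -D topic   # delete it anyway
      git reflog            # if you regret it, the commit hashes are still findable here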

      I have some changes. I wanna share it with Jon and Johnny so they can tell me what they think. And then maybe they can add their ideas. And then we can have a merging of our ideas, and eventually test out if it works, and have it out there and ship it.

      That wasn’t hard to explain. I think it’s very easy for all of our listeners to understand that mental model in their heads.

      What wasn’t hard was failing to explain with enough precision. As a listener, I do not fully understand, because I can sense a lot of hand waving and possible ambiguities. Does someone speaking such requirements even have any idea what they would want to do if there’s a conflict?

      It’s no wonder such users would complain about Git being too hard or too complicated. Sure the default command line kinda sucks, especially for beginners, but there’s no escaping some of the complexities of version control. Heck, I’m not sure I can design a simpler data model than Git even if I removed the “decentralised” requirement.

    8. 6

      My issue is that 99.99% of the time I’m mindlessly using a tiny fraction of git - clone, branch, add, commit, push. So 99.99% of the time I don’t need to understand git, think about how it works, or remember the rest of its capabilities. If I only ever use the first page of the manual, why would I have a refresher read of the remaining 300 pages every six months on the off chance I need to do something out of the ordinary? There should be two tools: everyday-git, and chaos-git.

      1. 3

        Git was designed for a world most engineers don’t inhabit. One of the core goals is to allow distributed, peer-to-peer sharing of commits. This makes sense when you work on the Linux kernel: in that world, they routinely test WIP patches shared via a mailing list. But that’s not the world most of us live in. Usually, you just want to make some changes, have them reviewed, and merge them. Merging is a pain no matter what system you use, since you have to choose how to do the operation. That’s why it’s common for people to hold off on making big changes if they know another colleague is working on the same files.

        So Git makes possible what most of us don’t care about (distributed/peer-to-peer sharing of commits), and can’t help with what makes version control complicated (merging) because it’s just essential complexity.

        It would be useful if someone made a system where merging was as simple as possible - perhaps by tracking edits from the editor itself and chunking them logically (instead of “mine or theirs”, “keep these extra logs I added but take their changes that add retries”).

        It doesn’t help that the most popular OSS repository, GitHub, is so tied to this technology.

        1. 3

          It doesn’t help that the most popular OSS repository, GitHub, is so tied to this technology.

          GitLab is also so tied - Hg support was specifically rejected, IIRC: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31600

          Throughout the code of both GitLab, Gitaly, and all related components, we assume and very much require Git. Blaming a file operates on the assumption we use Git, merge requests use Git related operations for merging, rebasing, etc, and the list goes on. If we wanted to support Mercurial, we would effectively end up duplicating substantial portions of the codebase. Gitaly helps a bit, but it’s not some kind of silver bullet that magically allows us to also support Mercurial, SVN, CVS, etc. It’s not the intended goal, and never will (or at least should) be. Instead, the goal of Gitaly was to allow us to centralize Git operations, and to shard the repositories across multiple hosts, allowing us to eventually get rid of NFS.

          (This quote was in the context of why adding Mercurial support would be difficult and expensive.)

    9. 4

      We had centralized VCSes. For many reasons, people thought they needed distributed ones. Git solved that issue.

      Now, I find it pretty difficult to criticize Git for its distributed nature. The two difficult questions raised in the post are difficult purely because Git is distributed, and thus users suddenly need to think about their code in a distributed manner.

      The user hands-on aspect of version control –how we interact with it– needs to be built in something that is much more closer to how humans think about it. Rather than being:

      I will get you to think about a directed graph or as operations on a directed graph

      No human thinks about that! Humans think about:

      I’m gonna save my work and I’m gonna share it with other people. Then I’m gonna step off this computer and just leave for the day.

      No, this is not sufficient. Some very basic, surface-level use of git is to save your work in a shared repository that other people can sync with. But that’s not why Git is still necessary to so many teams and organizations.

      Git is a way to model the evolution of source code into something that we can reason about and organize. Its representation as a directed graph is absolutely necessary for clear and accurate modeling.

      Why do we need this model? Because source code distribution is not always that clear-cut. I am not maintaining a single ‘main’ branch that everyone agrees is the latest and greatest version of my software. I maintain 2 LTS branches, with several stable branches in between, all of which are downstream receivers for bug fixes. Then I have customer branches with specific changes tailored to some weird usage that only a few people asked for and that I didn’t want to generalize.

      This distribution pipeline requires a tool that allows me to model the code’s evolution. I cannot think about it only as “I’m gonna save my work and share it with other people.” That is simply not sufficient. A tool that simplified code distribution that way would just sweep the inherent complexity of the task under the rug, without actually solving it.

      Sorry but software engineering requires a bit of engineering.

      1. 8

        We had centralized VCSes. For many reasons, people thought they needed distributed ones. Git solved that issue.

        No, the order of events was different.

        A minority of people thought they needed DVCSes. They reflected on the requirements and made some options.

        When Linus Torvalds thought he needed a DVCS, he did not care about anything but speed and correctness-on-ext3 (old ext3, the more paranoid-about-correctness one), so he first wrote Git as quickly as possible (cutting all the corners, like looking at the models-of-the-world of the other FOSS DVCSes that were available), then spent maybe 100× the effort on convincing people they need a DVCS (pushing Git in the process).

        Git is not the answer to DVCS needs of people who, on their own, believe they need a distributed VCS. Git has a lot of design decisions that are bad for 90% of the people, and bad not for complexity reasons.

        1. 5

          then spent maybe 100× the effort on convincing people they need a DVCS

          I thought it’s more like he spent 100x the effort on convincing people that he needed the distributed VCS, not that those people needed it. The convincing was done by companies like GitHub and Bitbucket who needed to sell version control.

          They needed to sell git; Linus just said, “I need git so I can do my Linux thing. So I’ll be using git.” I guess 99+% of the people using Linux never checked out the Linux sources from git. I know I’ve compiled my own kernels, but as far as I can recall, I downloaded a tarball; I didn’t check out the kernel myself.

          But again, that’s what I thought, maybe I’m wrong about it.

        2. 1

          Well, certainly Linus implemented Git in a way that solved what he perceived to be his issues.

          It seems that this specific solution to this specific set of problems fitted, in a broadly acceptable manner, the set of problems of many software teams.

          I don’t think Linus did much of Git evangelizing. Linux of course put it on the forefront, many people jumping on the bandwagon ‘if it’s good enough for linux, it probably is interesting to test’. But I find it weird that you try to characterize this pretty normal result as some kind of scheme from Linus, having a masterplan to ruin our lives by persuading us to use something we didn’t really need, for no discernible benefit to him.

          He is not the one that started selling hosted servers for Git and inviting people to use them. Those platforms also pretty clearly have a very different use of Git that they shepherd users to, than the one Linus developed for himself.

          I have not found another DVCS that offered ‘git rebase’ and ‘git cherry-pick’, which are the two main differentiators I get from it. Fossil does not cut it and I don’t agree at all with the opinion of its developers.

    10. 4

      One inconvenience off the top of my head is that git has no concept of which branch a commit is on.

      For example, when you go one commit back – git checkout HEAD~1 – git no longer knows which branch you were, I mean are, on. You have to remember that yourself, because git is in a detached HEAD state. This not only trips up beginners, it’s really unhelpful, since you need the branch name in order to get back.
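
      A sketch of the round trip (git switch needs a reasonably recent Git):

      git checkout HEAD~1   # HEAD is now detached; git prints a lengthy warning
      git switch -          # return to the previously checked-out branch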

      1. 8

        A big problem with Git is that some commands do two things and others don’t, and it’s totally inconsistent. So when you do git commit, it does two things: it adds a new commit to the system and it moves the branch to point to it. When you do git checkout branch, it does two things: it changes the current branch and it loads the files from that branch. But when you do git checkout HEAD~1, suddenly git only does one thing: it just changes the files without changing the branch pointer. Why so inconsistent!

        Basically all the confusing things in Git are caused by “in normal mode, this command does two things, but the core of it is really this one thing, so when you do a special command, it only does half of its normal work.” Git push and pull are both awful about this.

        1. 2

          That is an interesting point. The two steps you identify are definitely part of the conceptual model. There might be a usability improvement by introducing a more formal concept of “the current branch”. From git’s internal perspective there’s always two steps, even in your single step example. When you checkout anything it updates the working copy files and then it updates HEAD to point at whatever you named. You could probably save some people a lot of headaches by introducing a checkout mode that only worked with branch refs and then told you that you would need to explicitly detach from the current branch in order to continue with operations that would leave you in a detached HEAD state. Sort of like a castle wall with a guard that is happy to let you out but just lets you know there are dragons beyond the wall in case you’re not prepared to encounter dragons.

          1. 2

            a checkout mode that only worked with branch refs and then told you that you would need to explicitly detach from the current branch in order to continue with operations that would leave you in a detached HEAD state.

            Actually that’s what the newer git switch does — when you try to git switch HEAD~1, for example:

            fatal: a branch is expected, got commit 'HEAD~1'
            hint: If you want to detach HEAD at the commit, try again with the --detach option.
            
      2. 4

        You can do git checkout - to go back to where you were before: https://stackoverflow.com/a/2427389

    11. 3

      I think that the model underlying git is not well understood by some folks, and I think learning it is unavoidable, if you want to use it effectively.

      But also, the git CLI UX is atrocious. I’d actually be interested to read about its history and how so many commands became overloaded for so many purposes.

      I’m re-setting up CI for my nix monorepo and luckily, from reading git release notes over the years, I know about “git switch” and other newer commands that are more intuitive. But boy howdy, it’s amazing how many old resources there are, or even new ones, that don’t use these nicer, newer commands.

      Finally, using “jj” for about a month made me upset - git can be good. Like, actually track every change, never ever lose code by accident, easily manipulate many branches - easy. I hope that some notable bloggers can give it a shot and write about it so that it can gain some momentum.

    12. 2

      I got fed up with struggling with git at some point and resolved to learn it for good. I scanned around the internet and found out that almost every guide/tutorial on git was really really bad. So it’s no surprise that people are bad at git. Nobody is helping them learn it.

      The one tutorial I felt did a good job is the one that Atlassian publishes: https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud

    13. 2

      Features I would remove from git:

      • The index
      • The stash
      • Submodules

      The index is a feature everyone pays for because it is a constant stumbling block, and both the index and the stash are unnecessary features that everyone learns to use, yet they are inferior, in my opinion, to commits. You can have multiple commits serve as your index or stash, they stay with their branch, and you have to learn to squash commits anyway.
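
      A sketch of that commits-instead-of-stash workflow (the branch names are hypothetical):

      git commit -a -m "WIP"    # park everything on the current branch
      git switch other-work     # go deal with something else
      git switch -              # come back
      git reset --soft HEAD~1   # unpack the WIP commit; the changes are staged again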

      The stumbling block is hidden state:

      git add this.txt        # staged: this.txt will be added/updated
      git rm that.txt         # staged: that.txt will be deleted
      git checkout such.txt   # local changes to such.txt are discarded

      # time passes

      git diff                # looks okay, but git diff does not show staged changes
      git commit -a           # commits the hidden staged entries along with everything else
      

      You would think this was possible to learn to avoid. But no, you can’t, because it happens when you don’t expect it. I discovered this accidentally 11 years ago, and it’s been in the back of my mind ever since whenever I type commit. And yet, I fell into this trap just after lunch today: I failed to remember that I had messed with some totally unrelated files before lunch.

      As for submodules, they are impossible to use correctly with more than one person: nobody is going to remember to update their submodules on every pull. So it gives you stale builds by default – something nobody wants. Updating dependencies, or at least checking their versions, is something that has to be automated by the build system. You can update submodules in your build system, but that’s just a dream for most, since the mindshare is, as always, with the default.
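
      For what it’s worth, recent Git has a partial mitigation (a sketch; it still relies on every clone being configured):

      git config submodule.recurse true         # make pull and friends recurse into submodules
      git submodule update --init --recursive   # the manual incantation it replaces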

    14. 2

      henceforth all interviews will be basic and intermediate git usage. an excellent filter.

    15. 2

      I think git has a fundamental complexity that you can’t get away from and still have it be as useful. The whole issue of people not understanding that it’s complex for a reason is a different problem.

    16. 1

      Let’s mention that Git is known for its impenetrable documentation:

      https://git-man-page-generator.lokaltog.net/

    17. 1

      I will get you to think about a directed graph or as operations on a directed graph

      No human thinks about that! Humans think about:

      I do think this way. But with how incredibly inconsistent git is, I still prefer Sublime Merge to the CLI. Because it just doesn’t make sense.