1. 25
  1. 7

    Why clone forks? Why not just add them as a remote to your existing clone?

    1. 1

      I personally find them useful sometimes. Say I have a fork that I am working on and I need to refresh an endpoint to see how something should work, but my work in progress breaks that. I can skip over to the cloned fork project and see things instantly.

      Switching branches mid-workflow can achieve the same, but it also dirties the environment in ways that can cause unexpected behaviour, so I now prefer to keep them as separate projects.

      1. 5

        Have you tried git worktree? I’ve only read its documentation; I’ve never tried it myself or heard about people’s experiences using it for that kind of workflow.

        1. 4

          Exactly this. I’ve used worktree before (and the docs are relatively easy to understand) to have several branch trees open side by side: just do git worktree add <path> <branch> and that’s it. When you’re done, just do git worktree remove <path>.

          The only tricky part is that if you have a branch checked out in one worktree, you can’t do operations on it from another: git will tell you that the branch is already checked out, and you’ll need to do those operations inside that worktree’s folder.
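
          For anyone curious, the whole flow is roughly this (paths and branch names are just placeholders):

              git worktree add ../myproj-review some-branch   # check out some-branch in a sibling folder
              cd ../myproj-review                             # a second, fully separate working tree
              # ... poke around, run things, etc. ...
              cd -
              git worktree remove ../myproj-review            # clean up when done
              git worktree list                               # see what is checked out where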

        2. 1

          dirty the environment in ways that can cause unexpected behaviour

          One example of this that I hit a lot is that different versions have different (conflicting) dependencies, so trying to have just one built tree means you’re constantly rebuilding the virtualenv or node_modules or whatever.

      2. 5

        I have the same question as peter. When I fork a project I just add the original project as a remote and then occasionally rebase on top of it when I have to.

        Personally, I organize my stuff in two folders:

        • ~/work, where anything for-profit goes, and
        • ~/sandbox, where everything else goes.

        I don’t use autojump, but I came up with something similar (in purpose) to make jumping into particular projects easier. It’s a fish script I call workon, and all it does is find the first occurrence of a project name within either ~/sandbox or ~/work and jump into it. If the project happens to be a Python project and there’s a virtualenv for it, then it activates that as well. I’ve been using this in some shape or form for years (even before I started using fish), and it’s probably my most used command (excluding vcs commands).
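
        The gist of it, translated into ordinary shell (the real thing is fish and a bit more careful, so treat this as a rough sketch; the .venv location in particular is just an assumption):

            workon() {
                name=$1
                # first match for the project name under ~/sandbox or ~/work
                dir=$(find ~/sandbox ~/work -maxdepth 2 -type d -name "$name" 2>/dev/null | head -n 1)
                if [ -z "$dir" ]; then
                    echo "no project named $name" >&2
                    return 1
                fi
                cd "$dir" || return 1
                # if it looks like a Python project with a virtualenv, activate it too
                if [ -f .venv/bin/activate ]; then
                    . .venv/bin/activate
                fi
            }

        It has to be a function (or something sourced into the shell) rather than a standalone script, otherwise the cd wouldn’t stick.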

        1. 2

          While I don’t use fish and we obviously work on different projects, your “workon” script is a great idea! I’m totally going to write my own version of that!

          1. 1

            If you don’t use fish (and maybe if you do) then you can set CDPATH to get this automatically. :)

            More info if you search for CDPATH here: https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html
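
            e.g. in ~/.bashrc, keeping . first so relative paths still win (the directory names just follow the ~/work / ~/sandbox layout from above):

                # let a bare "cd projectname" search these roots in order
                CDPATH=".:$HOME/work:$HOME/sandbox"

            After that, cd myproject lands in ~/work/myproject or ~/sandbox/myproject, whichever matches first, and prints the resolved path.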

        2. 3

          I organize my file system a bit differently. Everything is accessed from Alfred and Repos Workflow.

          1. 1

            I quite like your organisation structure.

          2. 3

            Stuff I wrote goes in ~/devel. Stuff other people wrote goes in ~/software.

            1. 2

              I do the same: ~/projects and ~/src, chosen (with hindsight) to be less and more Unixy respectively.

              1. 1

                Pretty much the same for me, except I use “prj” instead of “projects”.

            2. 2

              I organize by creating a folder each day with that day’s date.

              E.g. today I created 2019-02-01 and cloned https://github.com/laarc/laarc under it.

              It’s nicer to organize by date, because if I need something from long ago I can just use locate or rg to find it. And this way lets me create extra clones for one-off experiments without polluting some global folder with a bunch of copies.
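
              In shell terms the whole scheme is just the following (the ~/clones root is only a placeholder, not what I actually use):

                  # make (or reuse) today's folder and clone into it
                  d=~/clones/$(date +%F)       # %F gives e.g. 2019-02-01
                  mkdir -p "$d"
                  git clone https://github.com/laarc/laarc "$d/laarc"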

              1. 2

                This is the way that I organize mine, too! :)

                I also wrote a ZSH script to help me clone faster given the directory information. Here it is if interested!

                https://github.com/monokrome/dotfiles/commit/c78843ec34dba28fd1cc0947efb25d5e93a751ab

                1. 2

                  I came to a similar conclusion as the author and created h.

                  It goes one step further and automatically changes directory to the folder for you. That way I don’t have to know whether the repository has already been checked out or not.

                  Type h <url> or h <owner/repo> and you get a local working directory with the code.
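
                  The general idea, very roughly (a hand-wavy sketch, not h’s actual code; the ~/src root and the GitHub default are assumptions, and ssh URLs aren’t handled here):

                      get() {
                          case $1 in
                              *://*) url=$1 ;;                       # already a full URL
                              *)     url=https://github.com/$1 ;;    # assume owner/repo means GitHub
                          esac
                          # mirror the URL layout under ~/src, e.g. ~/src/github.com/owner/repo
                          dir=$HOME/src/$(echo "$url" | sed 's#^[a-z]*://##; s#\.git$##')
                          [ -d "$dir" ] || git clone "$url" "$dir"
                          echo "$dir"
                      }
                      # the cd has to happen in the calling shell, e.g.: cd "$(get owner/repo)"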

                  1. 2

                    git get sounds like https://github.com/motemen/ghq. I use this a lot and have my path as ~/src.
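
                    Setup is minimal, something like this (commands as I remember them from the README, so double-check there):

                        git config --global ghq.root '~/src'   # where clones land
                        ghq get motemen/ghq                    # -> ~/src/github.com/motemen/ghq
                        ghq list                               # everything it manages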

                    1. 2

                      Interesting! I was unaware of this when I made this script. Will check it out.

                    2. 2

                      Ooh. A zero-dependency Python script is certainly better than ghq, but I need it to expand bare user/proj into GitHub URLs automatically :)

                      1. 1

                        Very interested in how it can be improved. Can you give an example of what you mean?

                        1. 1

                          git get user/project

                          without typing https://github.com
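
                          i.e. something along these lines inside the script (a hypothetical sketch of the expansion, not git-get’s actual code):

                              # expand a bare user/proj argument into a full GitHub URL
                              case $1 in
                                  *://*|git@*) url=$1 ;;                     # already a full URL
                                  *)           url=https://github.com/$1 ;;  # bare user/proj
                              esac
                              git clone "$url"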

                          1. 2

                            Zero-dependency POSIX shell version: http://sprunge.us/WanJ6v (uncomment line 14 for the GitHub default).

                            1. 1

                              This link has gone dead with:

                              This application is temporarily over its serving quota. Please try again later.

                            2. 1

                              Ah! Gotcha. I added an issue here: https://github.com/pietvanzoen/git-get/issues/5

                              Thanks for the suggestion.