1. 3

    I am not saying my dreams are widely shared, or good projects to take up… but I will answer the questions as stated.

    Like a spreadsheet, only data-block-first. Multidimensional arrays come first, then they are laid out to show them best (unlike spreadsheets, where a huge 2D sheet is primary and arrays are specified rather clumsily on top of it). Of course the ranges are also named; operations are likely to be a mix of how spreadsheets work nowadays, how normal code is written in Julia/Python/R/… and some things close to APL/J. No idea whether this can be made more useful (for someone) than the existing structured-iteration libraries, maybe with a bit better output/visualisation code…

    A DVCS that does not regress compared to Subversion. I want shallow (last month) and narrow (just this directory) checkouts supported well enough that the workflow that makes sense is just versioning the entire $HOME and then sometimes extracting a subset as a project to push. Although I have no idea whether the next iteration of Pijul will approach that.

    A hackable 2D game with proper orbital mechanics from take-off to landing (including aerodynamic flight, with the possibility of stalling before landing, etc.). Orbiter definitely does more than I want, but for me a 2D version would feel like a nicer, more casual thing. And 2D probably has a better chance of not needing Wine…

    Writers have tools for sketching out and reshuffling a story; for proofs and for code documentation there is more weight on the notion of what depends on what, and sometimes one can reverse a dependency or replace it with a forward declaration. Sketching and experimenting around all that could probably be aided by some kind of tool, but I have no idea what it would look like. I guess it would have something in common with Tufts VUE…

    1. 3

      A DVCS that does not regress compared to Subversion. I want shallow (last month) and narrow (just this directory) checkouts supported on the level that the workflow that makes sense is just versioning entire $HOME and then sometimes extracting a subset as a project to push

      git can technically do this (using worktrees, subtrees, sparse checkouts etc.) - but the UI for it … does not exist. It seems like low-hanging fruit to implement (and something that some friends with whom I collaborate on monorepo tooling may end up picking up at some point).

      1. 2

        The thing git fails at completely on the data-model level is that it insists a branch is a pointer. In fact a branch is more of a property of a commit, which leads to much, much better handling of history and, as a curious (but convenient) implication, also allows multiple local heads for a single branch…

        Of course all-$HOME versioning is likely to benefit from a more careful approach to branches, and maybe from treating not just content but also changes as hierarchical structures, with the possibility of swapping a subtree of changes in place, but I really do not believe in anything starting from git here…

      2. 2

        Your spreadsheet concept basically already exists in Apple Numbers. Spreadsheets there don’t take up the whole page but instead are placed individually as a subset of the page.

        To your point on DVCS, there are big companies that do have this kind of thing available, but I’m not sure how much of it is open-sourced.

        1. 3

          Thanks!

          Re: Apple Numbers: Hm, interesting (not interesting enough to touch macOS, but I should look up whether they support more dimensions and so on). I would expect the background computational logic to be annoyingly restrictive, but that could be independent of the layout.

          Re: DVCS: what I hear about is actually very restrictive; it is more about handling an effective monorepo without paying the worst-case performance cost than about structuring the workflow so that a natural subproject can be extracted as a separate project retroactively.

        2. 1

          As to the first and last points, maybe https://luna-lang.org would be interesting to you? (I am a huge fanboi of them.)

          1. 1

            One more data flow language?

            I mean, data flows are cool, sure, but I am fine writing them in one of the ton of textual ways.

            They don’t solve the data entry + presentation issue per se (layout of computation structure and layout of a data set are different issues), and structuring a proof looks way out of scope for such a tool.

            ETA: of course a data flow language done well is cool (and any language paradigm done well is cool), I just don’t have a use case.

            1. 1

              With Luna the idea is that you can write in text if you want, then jump to graphical instantly and tweak, then jump back to text, etc. with no loss of information.

              As to the rest, I guess I don’t know the domains well enough to really grasp your needs & pain points :) just wanted to share FWIW, in case it could get you interested. Cheers!

              1. 1

                Sure, I understood that capability to switch between representations losslessly; I would just need a reason to do significant mouse-focused work in the first place (which, indeed, I did not say anywhere in my comment), so using this capability would always be a net loss for me personally.

        1. 3

          I mainly used i3 on my laptop as a student, as the tilted desks in auditoriums make using a mouse nigh on impossible and trackpads just kinda suck. I switched to dwm after a while though.

          As an emacs user I want to give exwm a try soon to see how viable that is for day-to-day usage.

          1. 3

            I’ve been using EXWM as my sole window manager on all of my devices for years and it’s possibly the biggest productivity boost in my setup, ever (apart from lower-level things like investing in Nix). FWIW, my configuration (not actually all that complex) is here, especially config/desktop.el.

            1. 2

              Nothing to add here except my switch to EXWM was similar; I look back on my pre-EXWM days as a kind of dark ages.

              1. 1

                Is there some advantage that EXWM has over i3?

                1. 2

                  Yes, EXWM treats every X client as just another Emacs buffer, so you don’t have to use two separate sets of bindings to manipulate something depending on whether it’s inside Emacs or outside it. Every other WM in the world lacks this incredible feature.

              2. 2

                What makes it so productive for you compared to other tiling wms?

                1. 3

                  It’s difficult to explain concisely, because it requires some understanding of Emacs (i.e. one should be past thinking of Emacs as just a text editor).

                  Emacs is my primary workflow tool, and having my window manager integrated into it means there’s no longer an additional “layer” to deal with: I can use all the same tools and mechanisms to manage my windows as I use to manage everything else. I can also introspect and modify my WM the same way I would my Emacs-based mail client.

                  There’s a longer form blog post I’m working on about this, if you’re interested I can send you the draft (though I’m not particularly happy with it yet).

                  1. 1

                    I am interested, please do :)

            1. 8

              Nice write-up @cadey!

              Nix is full of gotchas and it takes quite a bit of time to ramp-up. This write-up shows a lot of the benefits that you can get from using Nix. There are some interesting milestones to reach like setting up a binary cache and then populating it with CI jobs.

              One gotcha in docker.nix is that the system part should be fixed to “x86_64-linux”, otherwise macOS users will get Docker images with Mach-O binaries in them. At that point macOS users will start seeing build failures, which can be resolved by setting up a nix remote builder machine.

              1. 3

                At that point macOS users will start seeing build failures, which can be resolved by setting up a nix remote builder machine

                Using dockerTools.buildLayeredImage or nixery should work when building images for foreign systems, as long as no cross-compilation is needed (i.e. the actual binaries are cached).

                For buildLayeredImage you could import the package set twice (once with the current system, once with the target) and use the packages from the target - the function just assembles a tarball out of it, and which system that happens on should be irrelevant.
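
                A minimal sketch of that two-import approach (the image name and package are placeholders of mine; dockerTools.buildLayeredImage and the system string are as discussed above):

                let
                  # Host package set: whatever system the evaluation runs on (e.g. macOS).
                  hostPkgs = import <nixpkgs> { };
                  # Target package set, pinned to Linux so the image contents are ELF binaries.
                  linuxPkgs = import <nixpkgs> { system = "x86_64-linux"; };
                in hostPkgs.dockerTools.buildLayeredImage {
                  name = "example";                  # placeholder image name
                  contents = [ linuxPkgs.hello ];    # packages taken from the *target* set
                }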

                1. 1

                  Another way is to have CI build the image and push it to a binary cache. Configure the binary cache on the macOS machine. As long as no rebuild is necessary the local machine can then just pull it from the cache.

                2. 2

                  Is there a way to avoid setting “x86_64-linux”? That is, have Nix recognize that the target system is Linux and build accordingly?

                  1. 1

                    The system attribute is what Nix uses to recognize the target system.

                    There are also some cross-compilation facilities in nixpkgs that could be used if, for example, you are always building from the Mac. Search for nixpkgs.pkgsCross.
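
                    A quick sketch of what that looks like (the package is just an example of mine; pkgsCross.musl64 is one of the predefined cross package sets):

                    let pkgs = import <nixpkgs> { };
                    in
                    # Cross-compile GNU hello for x86_64-linux (musl), regardless of the host system.
                    pkgs.pkgsCross.musl64.hello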

                1. 2

                  Finishing up a static site generator in Nix for my new website and then finally getting around to writing a blog post about Emacs I’ve been thinking about for a while.

                  Specifically what I’m trying to do is write a post that highlights to people that Emacs is not a text editor (though it has a text editor) and is actually a platform for implementing applications with text-based UIs. This is not to convince people to use Emacs, but to engage with it enough to understand the paradigm and maybe become interested enough to investigate how that paradigm can apply to other stuff they’re working on.

                  1. 1

                    Your Emacs article sounds interesting. Looking forward to it.

                  1. 27

                    Interesting post. The main issue with Nix is its onboarding curve and the lack of simple-to-grok documentation.

                    There are a few things in this post worth responding to:

                    Now, you may ask, how do you get that hash? Try and build the package with an obviously false hash and use the correct one from the output of the build command! That seems safe!

                    Nix has the prefetch-* commands that can do this for you and output either the hash, or a full Nix expression that refers to that thing.
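
                    For example (the URL and hash below are placeholders, not taken from the post): nix-prefetch-url downloads the file, adds it to the store and prints its sha256, which then goes straight into the fetcher call:

                    { pkgs ? import <nixpkgs> { } }:

                    pkgs.fetchurl {
                      url = "https://example.com/foo-1.0.tar.gz";
                      # Placeholder: paste the hash printed by nix-prefetch-url here.
                      sha256 = "0000000000000000000000000000000000000000000000000000";
                    }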

                    I could avoid this by making each dependency its own Nix package, but that’s not a productive use of my time.

                    It depends. My personal view recently has been that Nix should adopt a more Bazel-like model, in which the Nix language is also used for describing the actual software builds rather than just wrapping external package managers.

                    I have implemented this for Go (buildGo.nix / docs) and Common Lisp (buildLisp.nix), and with the Go one specifically external dependencies can be traversed and automatically transformed into a Nix data structure.

                    For example, here’s the buildGo.nix packaging of golang.org/x/net (from here):

                    { pkgs, ... }:
                    
                    pkgs.buildGo.external {
                      path = "golang.org/x/net";
                      src = builtins.fetchGit {
                        url = "https://go.googlesource.com/net";
                        rev = "c0dbc17a35534bf2e581d7a942408dc936316da4";
                      };
                    
                      deps = with pkgs.third_party; [
                        gopkgs."golang.org".x.text.secure.bidirule.gopkg
                        gopkgs."golang.org".x.text.unicode.bidi.gopkg
                        gopkgs."golang.org".x.text.unicode.norm.gopkg
                      ];
                    }
                    

                    This makes every subpackage available as an individual Nix derivation, which also means that those builds are cached across different software using those dependencies.

                    this is at least 200 if not more packages needed for my relatively simple CRUD app that has creative choices in technology

                    For most mainstream languages generators have been written to wrap 3rd-party package managers automatically. For some languages (e.g. Python), the nixpkgs tree actually contains derivations for all packages already so it’s just a matter of dragging them in.
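
                    As an illustration (mine, not from the post), pulling a couple of Python libraries straight out of nixpkgs into an environment looks like this:

                    { pkgs ? import <nixpkgs> { } }:

                    # A Python interpreter with two third-party libraries from nixpkgs;
                    # no per-dependency packaging work needed.
                    pkgs.python3.withPackages (ps: [ ps.requests ps.flask ])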

                    Oh, even better, the build directory isn’t writable.

                    This isn’t true for Nix in general. The build directory is explicitly writable and output installation (into /nix/store) usually (in Nix’s standard environment) happens as one of the last steps of a build.

                    It might be that this was a case of some language-specific tooling implementing such a restriction, in which case there’s probably also a flag for toggling it.

                    You know, the things that handle STATE, like FILES on the DISK. That’s STATE. GLOBALLY MUTABLE STATE.

                    The conceptual boundary is drawn differently here. In some sense, we look at the artefacts of realised derivations (i.e. completed “builds”) as a cache. The hashes you see in the /nix/store reference the inputs, not the outputs.

                    Also, nothing written to the store is mutated afterwards so for any given store there is mutability, but it is append-only.

                    As a side effect of making its package management system usable by normal users, it exposes the package manager database to corruption by any user mistake, curl2bash or malicious program on the system.

                    I’m not sure what is meant by this.

                    Edit 1: Ah, on the above this tweet adds some background (I think):

                    It doesn’t matter how PURE your definitions are; because the second some goddamn shell script does anything involving open(), you lose. The functional purity of your build is gone.

                    By default, Nix sandboxes builds which means that this is not a problem. Only explicitly declared dependencies are visible to a builder, and only the build directory and output path are writeable. Users can enable various footguns, such as opting out of sandboxing or whitelisting certain paths for passthrough.

                    1. 6

                      By default, Nix sandboxes builds which means that this is not a problem.

                      Only on Linux, unfortunately. The author seems to be on a Mac, which is probably why they didn’t know about the sandboxing.

                      1. 3

                        It seems like sandboxing is available on Mac (thanks puck for pointing this out), but for users running Nix in single-user mode (which OP might be doing) there is currently some extra hoop-jumping required to make it work correctly.

                        1. 1

                          I was thinking of this line from https://nixos.org/nix/manual/:

                          In addition, on Linux, builds run in private PID, mount, network, IPC and UTS namespaces to isolate them from other processes in the system (except that fixed-output derivations do not run in private network namespace to ensure they can access the network).

                          It looks like on macOS it’s just a chroot to hide store paths, but you can still curl install.sh | bash in your build. I didn’t know it even had that much sandboxing on macOS though, so thanks for pointing it out.

                      2. 4

                        You know, the things that handle STATE, like FILES on the DISK. That’s STATE. GLOBALLY MUTABLE STATE.

                        The conceptual boundary is drawn differently here. In some sense, we look at the artefacts of realised derivations (i.e. completed “builds”) as a cache. The hashes you see in the /nix/store reference the inputs, not the outputs.

                        Also, nothing written to the store is mutated afterwards so for any given store there is mutability, but it is append-only.

                        I really like this aspect of Nix: it’s like all packages exist in some platonic library of babel, and we copy a few of them into our /nix/store cache as they’re needed. This style of reasoning also fits with the language’s laziness, the design of nixpkgs (one huge set, whose contents are computed on-demand) and common patterns like taking fixed points to allow overrides (e.g. all of the function arguments called self).
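
                        A toy illustration of that fixed-point/override pattern (my own much-simplified sketch, not code from nixpkgs):

                        let
                          # A "package set" written as a function of its own final result (self),
                          # so that overrides are visible to everything else in the set.
                          pkgsFn = self: {
                            greeting = "hello";
                            message = "${self.greeting}, world";
                          };
                          fix = f: let x = f x; in x;
                          overridden = fix (self: pkgsFn self // { greeting = "hi"; });
                        in overridden.message   # evaluates to "hi, world"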

                        A similar idea applies to content-addressable storage like IPFS, which I’m still waiting to be usable with Nix :(

                        1. 2

                          Nix should adopt a more Bazel-like model, in which the Nix language is also used for describing the actual software builds rather than just wrapping external package managers.

                          Would that involve “recursive Nix” to allow builders to use Nix themselves, in order to build sub-components?

                          1. 3

                            Recursive Nix is not necessary. For some languages this can already be done. E.g. the buildRustCrate function reimplements (most of) Cargo in Nix and does not use Cargo at all. This is in contrast to buildRustPackage, which relies on Cargo to do the builds.

                            You can convert a Cargo.lock file to a Nix expression with e.g. crate2nix and build crates using buildRustCrate. This has the same benefits as Nix has for other derivations: each compiled crate gets its own store path, so builds are incremental, and crate dependencies with the same version/features can be shared between derivations.
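
                            Roughly, the workflow looks like this (assuming I remember crate2nix’s generated layout correctly; Cargo.nix is the file it emits from Cargo.lock):

                            let
                              pkgs = import <nixpkgs> { };
                              # Cargo.nix is generated by running crate2nix against the project's Cargo.lock.
                              cargoNix = pkgs.callPackage ./Cargo.nix { };
                            in
                            # Build the root crate; every dependency crate becomes its own derivation.
                            cargoNix.rootCrate.build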

                            1. 2

                              No, I’m not using recursive Nix for these. In my opinion (this might be controversial with some people) recursive Nix is a workaround for performance flaws of the current evaluator and I’d rather address those than add the massive amount of complexity required by recursive Nix.

                              What’s potentially more important (especially for slow compilers like GHC or rustc) is content-addressed store paths, which allow for early build cutoff if two differing inputs (e.g. changes in comments or minor refactorings) yield the same artefact. Work is already underway towards that.

                            2. 2

                              Can you please edit documentation somewhere to note the existence of the prefetch commands and how to use them?

                              Does that buildGo.nix thing support Go modules?

                              1. 7

                                Can you please edit documentation somewhere to note the existence of the prefetch commands and how to use them?

                                nix-prefetch-url is part of Nix itself and is documented here; nix-prefetch-git etc. come from another package in nixpkgs and I don’t think there are any docs for them right now.

                                Nix has several large documentation issues and this being undocumented is a symptom of them. The two most important ones that I see are that the docs are written in an obscure format (DocBook) that is not conducive to a smooth writing flow and that the docs are an entirely separate tree in the nixpkgs repo, which means that it’s completely unclear where documentation for a given thing should go.

                                The community disagrees on this to various degrees and there is an in-progress RFC (see here) to determine a different format, but that is only the first step in what is presumably going to be a long and slow improvement process.

                                Does that buildGo.nix thing support Go modules?

                                I’ve never used (and probably won’t use) Go modules, but I believe Go programs/libraries written with them have the same directory layout (i.e. are inspectable via go/build) which means they’re supported by buildGo.external.

                                If your question is whether there’s a generator for translating the Go module definition files to Nix expressions, the answer is currently no (though there’s nothing preventing one from being written).

                                1. 1

                                  Is there a way to get a hash of a file without making it available over HTTP?

                                  1. 6

                                    Yep!

                                    /tmp $ nix-store --add some-file 
                                    /nix/store/kwg265k8xn9lind6ix9ic22mc5hag78h-some-file
                                    

                                    For local files, you can also just refer to them by their local path (either absolute or relative) and Nix will copy them into the store as appropriate when the expression is evaluated, for example:

                                    { pkgs ? import <nixpkgs> {} }:
                                    
                                    pkgs.runCommand "example" {} ''
                                      # Compute the SHA256 hash of the file "some-file" relative to where
                                      # this expression is located.
                                      ${pkgs.openssl}/bin/openssl dgst -sha256 ${./some-file} > $out
                                    ''
                                    

                                    Edit: Oh also, in case the question is “Can I get this hash without adding the file to the store?” - yes, the nix-hash utility (documented here) does that (and supports various different output formats for the hashes).

                              2. 1

                                For example, here’s the buildGo.nix packaging of golang.org/x/net (from here):

                                Proxy error (the link, obviously).

                                Edit: Back up!

                                1. 2

                                  Hah, sorry about that - I’m running that web UI on a preemptible GCP instance and usually nobody manages to catch an instance cycling moment :-)

                                2. 1

                                  Oh, even better, the build directory isn’t writable.

                                  This isn’t true for Nix in general. The build directory is explicitly writable and output installation (into /nix/store) usually (in Nix’s standard environment) happens as one of the last steps of a build.

                                  It might be that this was a case of some language-specific tooling implementing such a restriction, in which case there’s probably also a flag for toggling it.

                                  It’s most likely caused by the derivation either trying to build inside a store path, e.g. cd "${src}" && build, or inside a copy of a store path (which preserves the read-only flags), e.g. cp -a "${src}" src && cd src && build. We can see if that’s the case by looking at the build script in the failing .drv file: they’re plain text files, although they’re horrible to read without a pretty-printer like pretty-derivation. This is probably quicker than trying to get hold of and inspecting the failing derivation in nix repl, since it may be buried a few layers deep in dependencies.

                                  I actually make this mistake a lot when writing build scripts; I usually solve it by putting chmod +w -R after the copy. If someone else has written/generated the build script it may be harder to override; although in that case it would presumably break for every input, so I’d guess the author might be calling it wrong (AKA poor documentation, which is unfortunately common with Nix :( )
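
                                  A minimal sketch of that workaround inside a builder (runCommand and the source path are just for illustration):

                                  { pkgs ? import <nixpkgs> { } }:

                                  pkgs.runCommand "writable-copy-example" { } ''
                                    # Copy the read-only source out of the store, then make the copy
                                    # writable before fiddling with it.
                                    cp -a ${./some-src} src
                                    chmod -R +w src
                                    cd src
                                    # ... hypothetical build steps would go here ...
                                    touch $out
                                  ''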

                                  It might be a symptom of the Koolaid, but I find this a feature rather than a bug: Nix keeps each download pristine, and forces my fiddling to be done on a copy; although the need to chmod afterwards is admittedly annoying.

                                1. 11

                                  I have been using NixOS exclusively on my computer for 2 years now (see my NixOS config). I also write Haskell, both at work and as hobby, using Nix instead of stack (more on that here). Ask me anything!
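
                                  (For the curious, the basic shape of such a Nix-instead-of-stack setup is something like the following; the project name is a placeholder and the real details are in the linked post:)

                                  { pkgs ? import <nixpkgs> { } }:

                                  # Translate the local .cabal file with cabal2nix and build it against
                                  # the Haskell package set from nixpkgs.
                                  pkgs.haskellPackages.callCabal2nix "my-project" ./. { }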

                                  1. 3

                                    So if I start with your NixOS config, can I get started with NixOS and adapt it to my needs? Is it possible to combine it with Guix?

                                    1. 4

                                      Is it possible to combine it with Guix?

                                      Partially. Guix is basically a fork of Nix with Scheme in place of the Nix expression language. Both package managers use the same fundamental unit (Nix’s derivation), and derivation (.drv) files produced by Guix can be imported in Nix. There’s currently no easy way of bridging that gap, though. I’m also not sure if both can run off the same store.

                                      As for running Guix on NixOS, work is being done (albeit slowly) to enable this via a NixOS module.

                                      In general I’d recommend learning the Nix language instead. Even as a Lisper, I find it to be quite pleasing to work with once you get the hang of it. There’s a one-page overview over the language that I wrote a while back to help people get started, which you might find useful.

                                      1. 1

                                        Thanks for this. Your one-pager is really useful; perhaps the most accessible intro to the language I’ve read. But I have to say that the Nix language makes me yearn for the clean semantics and constructs of Scheme. Rather than running Guix on Nix, wouldn’t it be easier and cleaner to write Nix on Guix? I don’t mean to start a language war here, but if you put a gun to my head and asked me to unify the two approaches then I know which route I would choose.

                                        1. 1

                                          Glad to hear nix-1p helps!

                                          Nix has a few warts (such as the ? operator and some of its builtins), but overall seems like a fairly clean language to me.
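
                                          (For anyone unfamiliar with the operator in question, a tiny snippet of my own: ? serves both as an attribute-existence test and as the marker for default function arguments.)

                                          let attrs = { a = 1; };
                                          in {
                                            # Attribute-existence test: evaluates to true.
                                            hasA = attrs ? a;
                                            # Default function argument, same symbol, different meaning: yields 42.
                                            withDefault = ({ x ? 42 }: x) { };
                                          }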

                                          There are pros and cons to each of the two approaches here. For example, Guix gets namespaces for free from Guile, which means there is a defined and queryable package set (whereas Nix just has one big attribute set that you traverse).

                                          The downside of this is that you now have a namespace, and declaring things into it becomes a side effect. In Nix it’s very easy (for people experienced with the language & tooling) to understand exactly which code is relevant; this becomes less clear once you have sequential execution, mutability and so on.

                                          My ideal setup would probably be a language with the exact semantics of Nix (purely functional, lazy) but an S-expression syntax. That’s easy to implement, but at the moment there are more important things to work on in the ecosystem.

                                      2. 2

                                        So if I start with your NixOS config, can I get started with NixOS and adapt it to my needs?

                                        The first step would be to make a fresh install of NixOS on your machine. And then, yea, you can fork and use my config per the instructions in README; although you don’t really need to. You can start from the NixOS base configuration.nix, and then customize it based on the tips from https://nixos.wiki/
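
                                        A minimal configuration.nix along those lines might look like this (the packages and the user name are placeholders):

                                        { config, pkgs, ... }:

                                        {
                                          imports = [ ./hardware-configuration.nix ];

                                          # System-wide packages and a regular user account.
                                          environment.systemPackages = with pkgs; [ git emacs firefox ];
                                          users.users.alice = {
                                            isNormalUser = true;
                                            extraGroups = [ "wheel" ];
                                          };
                                        }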

                                        Is it possible to combine it with Guix?

                                        Nope. Guix does not even use Nix.

                                      3. 1

                                        Have you used docker inside NixOS? And if so, why?

                                        1. 2

                                          To run a one-off image from the Docker registry, like mysql or redis, specific to a project. Docker is generally not required on NixOS for creating reproducible environments.
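
                                          (On NixOS that mostly just means enabling the daemon in the system configuration, e.g.:)

                                          {
                                            # Enable the Docker daemon; docker run mysql etc. then works as usual.
                                            virtualisation.docker.enable = true;
                                          }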

                                      1. 2

                                        Fun! D is one of those languages I’ve had on my todo-list for a while. People seem to have mixed experiences with it, but I’ve heard a lot of good things about the standard library and that’s worth investigating :)

                                        1. 9

                                          “Without a single branch” is a stretch… There are no explicit conditionals, but map and fold are certainly branching under the hood for iteration.

                                          Anyway, here’s a “no conditional” version in Common Lisp. Took about 30 seconds to write:

                                          (defun wc (file-name)
                                            (with-open-file (stream file-name)
                                              (loop 
                                                for line-count from 1
                                                for line = (read-line stream nil)
                                                while line
                                                for words = (cl-ppcre:split "\\s+" line)
                                                summing (reduce #'+ words :key #'length) into total-chars
                                                summing (length words) into word-count
                                                finally (return (values line-count word-count total-chars)))))
                                          
                                          1. 1

                                            Not that it matters, but does the total characters include the whitespace in your implementation?

                                            1. 1

                                              It looks like a bug to me. Would think you’d want summing (length line) into total-chars

                                              1. 1

                                                Good catch… A couple other mistakes were that the line count was off by 1, and the total character count was missing all white space.

                                                Here’s a version that allocates less and runs a little faster:

                                                (defun wc-new (file-name)
                                                  (with-open-file (stream file-name)
                                                    (loop 
                                                      for line-count from 0
                                                      for line = (read-line stream nil)
                                                      with word-count = 0
                                                      while line
                                                      do (cl-ppcre:do-matches (word-start word-end "\\w+" line)
                                                        (incf word-count))
                                                       summing (1+ (length line)) into total-chars
                                                       finally (return (values line-count word-count total-chars)))))
                                                
                                                1. 5

                                                  This version runs quite slowly for me (probably due to the use of regular expressions and explicit splitting). It also doesn’t seem to count words correctly (I get much higher counts than with wc).

                                                  Here’s a Lisp version (entire program included) that runs in about ~1.5x the time of the coreutils version on average (testing with a ~400MB file that is basically a lot of copies of the GPL) and appears to be correct:

                                                  (defpackage wc
                                                    (:use #:cl #:iterate)
                                                    (:export :main))
                                                  (in-package :wc)
                                                  (declaim (optimize (speed 3) (safety 0)))
                                                  
                                                  (defun main ()
                                                    (let ((filename (cadr sb-ext:*posix-argv*))
                                                          (space (char-code #\Space))
                                                          (newline (char-code #\Newline)))
                                                      (with-open-file (file-stream filename :element-type '(unsigned-byte 8))
                                                        (iter
                                                          (for byte in-stream file-stream using #'read-byte)
                                                          (for previous-byte previous byte)
                                                          (for is-newline = (eql newline byte))
                                                  
                                                          ;; Count each byte
                                                          (sum 1 into bytes)
                                                  
                                                          ;; Count every newline
                                                          (counting is-newline into newlines)
                                                  
                                                          ;; Count every "word", unless the preceding character already
                                                          ;; was a space.
                                                          (when (or (eql space previous-byte)
                                                                    (eql newline previous-byte)
                                                                    (not previous-byte))
                                                            (next-iteration))
                                                  
                                                          (counting (or is-newline (eql space byte))
                                                                    into words)
                                                  
                                                          (declare (fixnum bytes newlines words))
                                                          (finally (format t "  ~A ~A ~A ~A~%" newlines words bytes filename))))))
                                                  
                                                  

                                                  If you have the Nix package manager installed, you can try this version by running this command (note that this will pull in SBCL):

                                                  nix-build -E '(import (builtins.fetchGit "https://git.tazj.in") {}).fun.wcl'
                                                  
                                                  1. 1

                                                    Well that’s embarrassing, I hadn’t thought to compare against wc from coreutils! Using “[^\\s]+” instead of “\\w+” should fix it. The regex splitting was an easy way to avoid explicit branching, but definitely has a performance penalty.

                                          1. 6

                                            I disagree with more things in this list than I agree with, a few things seem particularly pathological:

                                            37: This meme is just outdated. Please stop perpetuating it.

                                            49: This is not that hard to fix, even in small organisations.

                                            32: As edef put it, this is how you get low-frequency, high-impact bugs to hang around forever.

                                            A lot of the technical things in this post are issues that a smaller organisation without the budget to build sophisticated tooling would have (e.g. 10) or that are indicators of larger organisational issues (e.g. 26, 5).

                                            1. 3

                                              37: This meme is just outdated. Please stop perpetuating it.

                                              Nah, I see this a lot both at my work and at the place where I volunteer teaching programming. Most people memorize a few commands that usually work; when they don’t, they turn to StackOverflow or one of the few people who do understand it.

                                              1. 1

                                                49 is still valuable. If your org has solved it, that’s great, but you still need to use the solution the org has come up with, rather than just looking at master.

                                                1. 1

                                                  I’m in an org that is switching to GitLab after a long history with TFS. It’s amazing how many of my “intermediate” and “senior” software engineers can’t get code out of git without using Visual Studio. And God forbid they have to use the git CLI. So I think 37 isn’t a meme for a lot of us.

                                                  And 49 is great till you find out somebody switched a pipeline to use a branch for some reason and never moved it back.

                                                1. 1

                                                  My monorepo and some other minor things, such as my blog and services I wrote for personal coordination.

                                                  Used to self-host email but now my domain is a GSuite account instead.