1. 2

    This is nice. But it’s still more setup than needed. Instead I just .gitignore everything and force-add the things I want to add.

    Starting from zero:

    cd ~/
    git init .
    echo '*' > .gitignore
    

    To add a file, use the “force” option:

    git add -f
    
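    For example, to start tracking a (hypothetical) ~/.vimrc:

    git add -f .vimrc
    git commit -m 'Track vimrc'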

    To set up a new computer:

    cd ~/
    git clone <git-repo-url> foo
    mv foo/.git .
    git reset --hard @   # '@' is shorthand for HEAD
    

    That’s it. No idea why people insist on symlinks, install scripts, “gnu stow” and various other over-engineered things. Well, actually I have an idea of why people insist on over-engineered things ;)

    1. 4

      I did that for a while, but I find it really easy to accidentally mess stuff up badly by thinking you’re in a directory with a git repo but then you’re accidentally running against your ~/.git. That’s not an issue if everything is pushed to a server (unless what you accidentally ran included git push -f), but if there’s been a while since you pushed (or, god forbid, a while since you committed), it’s possible to lose quite a bit of work that way.

      Maybe one solution would be to use something like ~/.dotfiles-git instead of ~/.git and alias dotgit to git --git-dir=~/.dotfiles-git --work-tree=~ ? We’d have the same simplicity benefits, but just running regular git commands in the wrong dir wouldn’t have any effect (and IDEs and text editors which try to scan the git dir to be useful wouldn’t be as sad).
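
      A rough sketch of that idea, using the names suggested above (the showUntrackedFiles line is an extra I’d add so that dotgit status doesn’t list everything in ~):

      git init --bare "$HOME/.dotfiles-git"
      alias dotgit='git --git-dir=$HOME/.dotfiles-git --work-tree=$HOME'
      dotgit config status.showUntrackedFiles no   # keep `dotgit status` quiet in ~
      dotgit add ~/.vimrc
      dotgit commit -m 'Add vimrc'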

      1. 1

        I do pretty much the same, but with Mercurial. No need to run setup to shuffle symlinks, or have the repo outside ~ itself when I’m making changes; and where one might git clone to pull in other projects in a setup script, I do have a script that can add them, but my config degrades gracefully without them. I can pull stuff in if I want to, and function fine without. I don’t think I’ve run into any trouble or surprises doing this, so I’m not sure what the advantages of other schemes might be.

        1. 0

          if you do this manually right now, check out https://github.com/tubbo/homer :)

          1. 1

            Are you joking? That’s an example of the very thing I want to avoid. The “manual” steps I describe above are not more complicated than the steps needed to install and use a “shell plugin” or other third-party tool.

            1. 0

              No, not joking… just trying to help. I did the above for years and got annoyed trying to remember the right commands to run. Homer does a few more things than just the home directory repo thing, but that’s one of its big features.

              1. 4

                That makes sense if dotfiles is the only thing you use git for, or you’re in a situation where the only git commands you really need are git clone/add/commit/push. However, as I’ve gotten used to how git actually works through years of contributing to open source and working professionally with other people, what /u/jmk is suggesting doesn’t look like commands I would have to remember any more than how you have to remember that cd <directory> goes to a directory or tar czf foo.tgz foo makes a compressed tarball of foo.

        1. 5

          AF_UNIX is supported on Windows now too, and has some advantages over named pipes: https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/

          1. 6

            At the rate things are going, in 2024 Windows is just going to be a UI on top of the Linux kernel…

            1. 6

              You use unix or you spend 30 years reinventing unix, they say…

              1. 2

                I’ve speculated about that on lobste.rs before. To summarize a few thousand words of discussion: Windows supports legacy applications primarily via its elaborate kernel. A fully backwards compatible swap is out of the question.

                I think the recent-ish Windows Long Term Service Branch/Channel gives Microsoft a path to switch the consumer Windows kernel to Linux, and put the NT kernel into full on maintenance mode. But perhaps developing WSL on NT will always cost less than a full kernel swap? Hard to say.

                1. 2

                  I think I’d be okay with that.

              1. 3

                The items in the “general” section should be caught by linters. But overall this is a good list because it’s terse, and thus usable. If it grows into a giant FAQ or guide then it will be ignored as usual :)

                Actually, this post should be in :help.

                More generally, I wish that @sjl accepted PRs or at least corrections for http://learnvimscriptthehardway.stevelosh.com/ , so that the community could continue to iterate on it. Instead of continuing the sprawl of random tutorials/guides/vim-101 blogs.

                1. 2

                  If it grows into a giant FAQ or guide then it will be ignored as usual :)

                  This post was inspired by Effective Go, which is exactly the sort of “giant guide” that you mention, but I still think it has value because it’s quite detailed and in-depth (you can almost learn most of Go from just this guide alone). I think both approaches have value; I opted for a more concise post because I’m lazy and wanted to actually finish this in 2019, and not 2023 or whatever because:

                  $ ls -l ~/arp242.net/_drafts/*.markdown | wc -l
                  233
                  
                  1. 1

                    This post was inspired by Effective Go, which is exactly the sort of “giant guide” that you mention

                    For long-form it would make more sense to iterate on http://learnvimscriptthehardway.stevelosh.com .

                    1. 1

                      Learn VimScript The Hard Way is great, and I regularly refer people to it, but it’s also written in a certain format that doesn’t work well for everyone. I don’t think it should be the only resource for all things VimScript related.

                1. 2

                  This all can only happen because the web got used to monolithic all-inclusive, batteries-included browsers in the 90s.

                  They need to be broken up into components, so that one could pick the bookmarks database, a caching solution, a VPN/proxy solution, a web client/downloader (or choose to omit these components for certain use cases). Web and content blocking could well be done in a proxy => no conflict of interest like in the current browser oligopoly. Translation could also be done in a proxy, where Mozilla would have no leverage to interfere.

                  I’ve lately gotten fed up with Mozilla mostly due to the imperative take on DoH.

                  Maybe donate some money to the next browser guys?

                  1. 8

                    What technologies are you envisioning as enabling an alternate universe where such components are composable in a useful way? Keeping in mind that not even operating systems have figured this out after 30 years: we barely have working shells.

                    1. 2

                      Sorry for the late answer, I’m offline in lots of waiting rooms at the moment.

                      I want to ask counter questions:

                      • Why are there only two major browsers left? Because a “browser” is a huge all-inclusive behemoth. It needs to be broken into smaller components (or modules, or parts), making it more comprehensible, hackable, changeable, and right-sizable.
                      • Why is hacking browsers so unattractive to many people? Because of the hard learning curve: even IF you managed to wrap your head around one browser and changed or extended its function, even then you achieved it only in one browser; changing N browsers is N-fold the amount of work.

                      Bookmarks IMO are one thing that does not belong to the application but to the user - it is HIS data. Why should a user have to implant N extensions in N browsers only to be able to export/import/move his bookmarks collection?

                      Many users choose to rent something like del.icio.us or pinboard, just to have everything in one place.

                      If you were to extend a bookmark manager to regularly ping the URLs, cache sites for offline use, go the Zotero route, save HARs or WARCs of bookmarked resources, or just implement a way better search, you would have to do so for each and every browser (and maybe even buy other platforms, because Safari does not run on BSD or Linux, and IE11 does not run on Linux…)

                      Same goes for caching, and cache cleaning. Why does it have to be implemented in each and every browser, and why does the cache have to be deleted in each and every browser? A “local disk cache server” could even store Google Fonts and all major frameworks and CDNs for offline use, blocking off requests to the originals and REFERER: leaks.

                      Same goes for browsing history - it’s the user’s history, not the user’s history in IE, not the user’s history in (Chrom(e|ium)|Opera), not the user’s history in FF. There should be ONE place to look it up and to delete it.

                      Same goes for adblocking, and adblocking rulesets.

                      It is always a massive vendor lock-in of a user’s data in a product.

                      For the interfaces I can only recommend reading Apollo program documentaries. They worked without fixed, predefined interfaces, but were flexible enough to change them if there was a need to. And it worked out for the project.

                      If for example you had your DOM (and React’s shadow DOM) in a headless browser, and your browser tab was only a “copy-of-DOM-renderer”, you could achieve fantastic things:

                      • the DOM in the headless browser could run without an adblocker, all of the dandruff fully loaded => all of the “pleeze switch off adblock for our domain” dialogs and blockings are gone. You would nevertheless not see any of it, because YOUR adblocker would run while copying the DOM to your renderer - you get the “improved” copy.
                      • during the same step you could also implement translation of the text, as well as OCR on the “[EXIT]” button GIFs and other text in images.
                      • if you are really good at AI, your DOM-copying middleware could implement an “almost shopper”: automatically clicking on banners, browsing ads, adding things to shopping carts - and never buying them, but reloading them every 10 min; the best corporate honey pot possible. Placing ads on the web would be prohibitively expensive for THEM if every browser did so.
                      1. 1

                        Why are there only two major browsers left? Because a “browser” is a huge all-inclusive behemoth. It needs to be broken into smaller components

                        Sounds great, but it’s meaningless: everything would be better if everything were just magically better. What you are proposing will add engineering cost to an already complex endeavor, without adequate motivation.

                        My question was: what technologies and methods do you propose. Your “counter questions” only reframe the original premise.

                        Why is hacking browsers so unattractive to many people? Because of the hard learning curve

                        I’m certain that everyone wants their own projects to be well-architected so that humans can reason about them. Anything else is counterproductive, for all parties. There is zero doubt in my mind that everyone working on every software project wants the project to be simpler and better architected.

                        Typically that is defeated by the need to use existing/legacy work. For a radically different approach, see urbit which throws out everything and starts fresh.

                        If you were to extend a bookmark manager …

                        Why would you need a browser for that at all?

                        It is always a massive vendor lock-in of a user’s data in a product.

                        People should not use “lock in” to mean “inconvenient”. There is absolutely no lock-in if you have access to your data. “Lock in” refers to cases where you can’t get your data.

                        If for example you had your DOM (and React’s shadow DOM) in a headless browser, and your browser tab was only a “copy-of-DOM-renderer”, you could achieve fantastic things:

                        Headless support exists and is actively improving, because it’s crucial for testing/automation/accessibility. Example: https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md

                        The ad-blocker idea is interesting, but will be complicated regardless of the current “browser monopoly”.

                        It’s insane that we have pervasive open source in 2019, yet people find a way to call it a “monopoly” or “lock in” or other hyperbole. This is a better future than we could ever have hoped for in the 90s, when the fate of OSS was not so obvious.

                    2. 1

                      What do you envision the benefits being of having a separate bookmarks database (for example) as opposed to just making the browser’s built-in one more configurable somehow? I share @jmk’s concern that that kind of architecture would make everything a little bit slower, with 95% of users not getting any kind of benefit to offset that slowness.

                      I’m also not sure how you could do content blocking as well in a proxy as you can with something like uBlock Origin. That extension lets me block certain elements on certain pages, and it lets me whitelist content from cdn.com when it’s requested from example1.com but not when it’s requested from example2.com. Even if you were able to get all this kind of logic in a proxy, what would the UI look like for the user to configure it? I’m guessing you would want that UI to be in the browser, right?

                      1. 1

                        Sorry for the late answer, I’m offline in lots of waiting rooms at the moment. I gave my answers in the reply to jmk, please read there (and forgive spelling errors, I’m not used to writing in bed).

                    1. 3

                      designed to be translated to any regex dialect

                      This reminds me of SRE regex; did you take any inspiration from there? It seems to me that structured REs of some kind would remove a lot of unnecessary pain from writing & parsing regexes.

                      Eggex:

                      / ~[ a-z A-Z ] /
                      

                      SRE:

                      (w/nocase (~ (/ "az")))
                      
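                      (If I’m reading both notations right, each of these corresponds to the POSIX ERE class [^a-zA-Z].)
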
                      1. 3

                        I think I had heard of it, but I didn’t take inspiration from it. Someone else brought it up, and here is a partial list of the differences:

                        https://www.reddit.com/r/ProgrammingLanguages/comments/d82zjq/egg_expressions_oil_regexes/f18ld8c/

                        Probably the more direct influence was Russ Cox’s regex articles and RE2’s multi-engine design and front end. There was this pretty exhaustive survey of regex syntax across engines:

                        https://github.com/google/re2/blob/master/doc/syntax.txt

                        I didn’t talk about this, but 5 years ago, before Oil existed, I tried to do the same thing with Python’s syntax:

                        http://www.oilshell.org/annex/cre-syntax.html

                        That page may go down at any moment, but the syntax of that older version of Eggex was somewhat based on the RE2 docs.


                        In short, the relation is that RE2 aims to support a union of features in a lot of different regex engines, and so does eggex. Although of course eggex is vastly simpler since it’s a language designed to be translated, rather than an engine.

                        1. 2

                          Interesting, thanks. Amazing how much work Alex Shinn has done and it basically goes unused. I wonder how many of us are piddling along the same path. :) Ubiquity always wins.

                          With OilShell you are doing a great job of communicating your ideas, which the Emacs/Lisp crowd generally fails to do.

                      1. 18
                        • More software = more entropy.
                        • Package managers are hard.
                        • NPM has some anti-features, so just because it has a feature doesn’t mean it’s a good idea.

                        Lua needs more packages and Luarocks needs help. Hisham does most of the work, it sure would be nice if others helped him. Starting from scratch is fun, but just adds entropy.

                        1. 12

                          Seriously echoing this: luarocks has its issues but “it isn’t npm-like enough” is not one of them. Please consider working together instead of fragmenting effort.

                          1. 1

                            Hisham has been openly antagonistic (or forcefully oblivious) to security concerns. The late addition of https to the rocks server was something. Having some competition in the package system is a good thing.

                            1. 1

                              Please don’t spread fear/uncertainty/doubt. Hisham is doing a lot of work, and I’m sure a well-wrought PR would be welcome. Drive-by claims of insecurity are less likely to be immediately embraced.

                              1. 2

                                Luarocks was run for years over bare http, with no signatures on the packages. I talked with Hisham about this at one of the yearly Lua meetings. He basically said, “we have https now, what is the big deal?” I told him I could have rooted everyone at the LuaConf with a wall of sheep attack on luarocks. He shrugged. I hope he has come around, because I love Lua and LuaRocks is pretty damn awesome (enough) in every other way. Well, except not having per-project environments and native code and …

                                1. 2

                                  What is a “wall of sheep attack”? Googling it refers to a group(?) at Defcon…

                                  1. 1

                                    Luarocks was run for years over bare http, with no signatures on the packages.

                                    And no checksums? I agree that’s concerning.

                          1. 2

                            Paul Phillips is always entertaining. And the talk is insightful, for example Unification can burn:

                            By definition, you are eliminating a distinction … if your cover is not airtight, breakage will ensue wherever the distinction appears.

                            Similar to a “leaky abstraction”, but typically an abstraction is additive whereas “unification” is reductive: it takes away. Eliminating needless distinctions is very desirable, but it must be total.

                            Other takeaway: interop is hard, particularly for type-obsessed languages.

                            1. 8

                              Really cool. Funny that “vim-as-a-library” would need ncurses but I assume that’s just to move past some build issue.

                              Inspecting (render-buffer) reveals that Paravim is not using any of Vim’s actual screen state. Instead it just gets the buffer text and displays it. That explains why the video doesn’t show Vim’s statusline, etc. And syntax-highlighting is done by tokenizing the buffer contents in clojure, then assigning colors.

                              That’s why in Neovim we’ve done lots of work exposing Vim’s internal state as a UI protocol: so that statusline, tabline, syntax, visual states, messages, etc., are available to embedders.

                              From the video:

                              There’s all kinds of cool things you can do when everything is a Vim buffer

                              Indeed. Maybe we can finally stop reinventing “Vim modes” in every IDE/editor/etc. That’s one of the primary goals of Neovim.

                              1. 3

                                I’m curious what you think about the developer’s reasoning on why he went with Vim over Neovim. https://www.reddit.com/r/Clojure/comments/d1tz3p/paravim_a_parasitic_editor_for_clojure/ezs6veu/ And it seems he is saying this isn’t a vim mode, it is full on Vim, if that makes a difference.

                                I’m not trying to stir anything up, just genuinely curious. I use and am a big fan of all you folks and your respective projects.

                                1. 2

                                  And it seems he is saying this isn’t a vim mode, it is full on Vim,

                                  ? I’m in favor of libvim for that reason, and want projects like it to succeed. I want “Vim modes” in IDEs, shells, web browsers, etc., to stop re-implementing vim and start embedding it.

                                  1. 1

                                    Ahh, ok. I think I misunderstood. My knowledge is thin in all this but I find it all very interesting. This libvim does sound cool. And this paravim project is quite fun. There have been some really cool developments happening in the Clojure and vim worlds. Fireplace is still so solid, and there are many new ones like Conjure, Iced, and Acid. Liquid. It’s a fun community.

                              1. 5

                                By running inside your project, it has complete access to the running state of your program. The ambition is to create something that follows the Lisp and Smalltalk tradition, blurring the line between your tools and your code to provide unprecedented interactivity.

                                I see Vim is asymptotically nearing its final apotheosis as Emacs…

                                1. 1

                                  What does that mean, precisely? Both Vim and Emacs have 300k lines of C source code, and 1M+ lines of Vimscript and Elisp respectively. So I’m really curious why people in the last 15 years keep implying that Vim is meaningfully leaner (or … what, exactly?) than Emacs.

                                  blurring the line between your tools and your code

                                  This is called “composition”. That was supposed to be “the unix way”, or so I thought.

                                  1. 2

                                    You can run code in your Emacs, and now apparently you can run Vim in your code.

                                1. 10

                                  Given the title, I was hoping a little more for lessons learned, and some reflection of benefits/costs versus alternatives after one year, rather than largely being a description of what ES/CQRS is.

                                  1. 6

                                    Same here. The few folks I’ve talked to first-hand that tried ES/CQRS systems (a very small sample size for sure!) ended up running screaming for the hills after a while (and/or doing a rewrite). Maybe they did it wrong, or maybe doing it right is hard? Unsure.

                                    So, I’d sure be interested in hearing more anecdotes/stories/tales about how ES/CQRS went right, or didn’t.

                                    1. 11

                                      The few folks I’ve talked to first-hand that tried ES/CQRS systems (a very small set size for sure!), ended up running screaming for the hills after a while (and/or doing a rewrite). Maybe they did it wrong, or maybe doing it right is hard?

                                      ES is a quagmire like ORM (and really, OOP in general): never-ending domain-modeling churn, with the hope that a useful system is “just around the corner”.

                                      This stuff is catnip for…

                                      The context of the project was related to the Air Traffic Management (ATM) domain

                                      …government/defense contractors. The drones are obsessed with building and rebuilding the One True Hierarchy of interconnected objects. But taxonomy is the lowest form of work.

                                      According to Martin Fowler, Event Sourcing:

                                      Ensures that all changes to application state are stored as a sequence of events

                                      Indeed, that sounds awesome. And in order to do that you need actual computer science (e.g. datomic), not endless domain-modeling.

                                      Domain Driven Design (DDD) is an approach to tackle …

                                      Like clockwork :)

                                      1. 2

                                        Some great links in there, and some good reading. Thanks!

                                      2. 10

                                        I have been working on an ES/CQRS system for about 4 years and enjoy it, but it’s a smaller system than the one the article describes. It’s a payment-processing service.

                                        Because it’s a much smaller service, I haven’t really gotten into a lot of the DDD side of things. The system has exactly one bounded context which eliminates a lot of the complexities and pain points.

                                        There was definitely a learning curve; this was my first exposure to that architecture. I made some decisions early on in my ignorance that ended up being somewhat costly to change later. However, I’ve been in this game a pretty long time, and I could say the exact same thing about plenty of non-ES/CQRS systems too. I’m not sure this architecture makes it any more or less painful to revisit early decisions.

                                        What have the costs been?

                                        • The message-based model is kind of viral. If you have a component that you could almost implement as a straight system service that’s just regular code called in the regular way, but there’s one case where an event would interact with what it does (example: a customer cancels a payment that hasn’t completed yet) the path of least resistance is to make the whole component message-driven. This sometimes ends up making the code more complicated.
                                        • Ramping up new engineers takes longer, because many of them also have never seen this kind of system before.
                                        • In some ways, debugging gets harder because you can no longer look at a simple stack trace to see what chain of logic led you to a certain point.
                                        • We’ve spent an appreciable amount of time mucking around with the ES/CQRS framework to make it suit our needs. Probably still less time than we would have spent to implement the equivalent feature set from scratch, but I’ve had to explain why I’m spending time hacking on the framework rather than working on the business logic.
                                        • If you make a significant change to the semantics of the system, you may need to deal with old events that no longer have any useful meaning. In many cases you can just ignore them, but sometimes you have to figure out how to translate them to whatever new conceptual model you’re moving to.

                                        What have the benefits been?

                                        • The fact that the inputs and outputs are constrained makes it phenomenally easier to write meaningful, non-brittle black-box unit tests. Like, night-and-day difference. Tests nearly all become, “Given this initial set of events, when this command/event happens, expect these commands/events.”
                                        • Having the ability to replay the event log makes it easy to construct new views for efficient querying. On multiple occasions we’ve added a new database table for reporting or analysis that would have been difficult or flat-out impossible to construct using the data in existing tables. With ES, the code to keep the view model up to date is the same as the code to backfill it with existing data. For a couple of our engineers, this was the specific thing that lit the light bulb for them: “Wait, you mean I’m done already? I don’t have to write a nasty migration?”
                                        • In some ways, debugging gets easier because you have an audit trail of events and you can often suck the relevant events into a development environment and replay them without having to sift through system logs trying to manually reconstruct what must have happened.
                                        • The “dealing with old events” thing I listed under costs is also a benefit in some ways because it forces you to address the business-level, product-design question up front: how should we represent this aspect of history in our new way of thinking about the world? That is extra work compared to just sweeping it under the rug, but it means you’re never in a situation where you have to scramble when some customer or regulator asks for history that spans a change in product design.
                                        • Almost nothing had to change in the application code when we went from a single-node-with-hot-standby configuration to a multiple-active-nodes configuration. It was already an asynchronous message-passing architecture, but now the messages sometimes get delivered remotely.
                                        • And finally the main reason we went with ES/CQRS in the first place: The audit trail is the source of truth, not a tacked-on extra thing that might be wrong or incomplete. For a financial application that has to answer to regulators, this is a significant win, and we have had meaningful benefit from being able to prove that there’s no way for information to show up in a customer-facing report without an audit trail to back it up.

                                        The main conclusion I’ve reached after working on the system is that ES/CQRS is a tool like any other. It isn’t a panacea and like any engineering decision, it involves tradeoffs. But I’m happy to have it in my toolbox to pull out in cases where its tradeoffs are the right ones for a project.

                                        1. 1

                                          Thanks for the comprehensive answer! <3

                                        2. 8

                                          Like with all Design Patterns, ES/CQRS is a means masquerading as an end, and good design will be found to have naturally cherry-picked parts of it without needing to name it as such.

                                          Anecdotally, I’m dealing with a system that is ⅔ ES/CQRS and ⅓ bandaging together expanding requirements, new relationships between entities, increasing latency due to scale – basically everything that wasn’t accounted for at the start. It works, but I wouldn’t choose it over a regular database and a transaction log.

                                          1. 6

                                            anecdotes/stories/tales

                                            As with outcomes, sadly so elusive.

                                            It occurs to me that our industry would be well served with a “Glassdoor” for IT projects. One where those involved could anonymously report on progress, issues and lessons learned. One which could be correlated with supplier[1], architecture type, technologies, project size etc.

                                            [1] supplier, e.g. internal or specified outsourced supplier i.e. Accenture, Wipro, IBM etc.

                                        1. 5

                                          Why should it? It’s another boil-the-ocean solution at the language-level, for people who are obsessed with syntax.

                                          Language designers should aim for less, not more. Go and Clojure embody this: find/build features in libraries, quit looking for party tricks. Lua 5.1 is wonderful because it’s finished. Urbit versions its language by decrementing towards zero, where change will stop.

                                          Libraries, not languages.

                                          projects such as Scaladex, Spores, Scastie

                                          Sounds unpleasant. I guess “Scabs” was taken.

                                          1. 4

                                            It’s another boil-the-ocean solution at the language-level, for people who are obsessed with syntax.

                                            I always describe it as a compiler experiment that escaped the lab, like the rage virus from 28 Days Later.

                                          1. 10

                                            Eating the world considered harmful.

                                            Mercurial seems to have a lot of sentimental support — being the saner and more intuitive DVCS

                                            This is one of those things that’s just repeated over and over and over and assumed to be true. Kind of like “Vim HEAD runs on Windows 95 (and OS/2)!” was assumed to be true in 2013 (spoiler: no one actually tried).

                                            1. 7

                                              This is one of those things that’s just repeated over and over and over and assumed to be true.

                                              For the most part it is definitely true. The git command line is notoriously inconsistent, whereas hg is not. It’s also harder to shoot yourself in the foot with hg.

                                              The one thing that (IMO) is a mess with hg is branching, which day to day has far more impact than remembering an inconsistent command interface. At a previous employer (who I’d convinced to move to hg from svn) we ended up with a horribly complex bookmarking strategy to deal with short-lived feature branches.

                                              I’m marginally sad that hg is losing, because it got some things very right. But git on the whole gets more of the things that matter right, despite the warts.

                                              1. 6

                                                The command line is only one part of it, though.

                                                Conceptually, I find git much easier to understand. I “get it” that it’s a DAG and I can create branches, name them, push them, switch between them, etc. Maybe the command line is obtuse, but nowadays I just use magit anyway.

                                                I spent almost 6 years using Mercurial at a previous job, and never really understood stuff like branching. To be fair, Kiln’s weird forking/branching didn’t help, but it was only part of a bigger problem.

                                                1. 4

                                                  It’s also harder to shoot yourself in the foot with hg.

                                                  My intention is not to put words in your mouth, but IME the Git footguns are somewhat exaggerated.

                                                  Yes, you can squash into a merge commit and mess up the parentage vis-a-vis the remote, and make other messes, but Git also comes with great recovery tools.

                                                  If you manage to make a mess, you can clean it up by cherry-picking the commits you want onto a temp branch and then resetting your messed-up branch --hard to that. In other instances the reflog can be useful.
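
                                                  A sketch of that clean-up, with all names/SHAs hypothetical:

                                                  git checkout -b rescue <last-good-sha>   # temp branch from a known-good commit
                                                  git cherry-pick <sha1> <sha2>            # replay only the commits worth keeping
                                                  git checkout my-branch
                                                  git reset --hard rescue                  # point the messed-up branch at the rescue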

                                                  I’m also not saying that a rebase pulling in commits from the reflog is Git 101; many people may just rebuild the commits from their editor’s undo buffer, or from memory.

                                                  I am saying that every argument wrt footguns should come with the note that you should tag backups of your state before a risky endeavor, and that recovery is possible.
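
                                                  E.g., before a risky rebase (tag name hypothetical):

                                                  git tag backup-pre-rebase                # cheap save point
                                                  git rebase -i origin/master
                                                  git reset --hard backup-pre-rebase       # if it goes sideways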

                                                  Personally I’ve had only a fistful of real messes (mainly accidents made while so tired I should not have been working) during my maybe 10-12 years of Git usage, and remembering the recovery strategies, I can’t recall once having to rebuild the commits.

                                                  1. 2

                                                    Just because git gives you the tools to fix the issue, doesn’t mean that the initial mistake is less likely.

                                                    It’s been a while since I used hg in anger, but from memory there’s no real way to accidentally merge the wrong remote branch into your local branch (because in general you push/pull the whole repo, so the concept of ‘remote branch’ doesn’t really exist). I’ve done this a ton of times with git, simply through muscle memory.

                                                    Sure, it’s not a catastrophic mistake, but it’s still one that git allows you to make.

                                                    As I said, I’m in the git camp. Recently I had to merge multiple repos into one, taking only one branch from each, and retaining the history. It was surprisingly painless in git (thanks to merge --allow-unrelated-histories). I’m not sure how it would have gone with hg, but I assume it would have been a lot more painful.
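
                                                    Roughly this shape, with hypothetical paths and branch names:

                                                    git remote add other /path/to/other-repo
                                                    git fetch other
                                                    git merge --allow-unrelated-histories other/master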

                                                    1. 1

                                                      I’ve had one moment of panic with git. That was when I did git reset --hard on a file after editing it and then remembering I needed those changes. But then I remembered that I had added the changes to the index, and was able to recover them by looking at the dangling objects there.
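
                                                      For anyone else in that spot: if I remember right, git fsck can dig those out:

                                                      git fsck --lost-found        # copies dangling objects into .git/lost-found/
                                                      git show <blob-sha>          # inspect a recovered blob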

                                                  2. 4

                                                    Author here: this is the main argument I got from Mercurial users (scrolling through the comments). I use Git myself (and only glanced at Mercurial and have no opinion on it). So maybe my sentence (wrongly) implies that I think this, but I mean to say that Mercurial users tend to give this argument.

                                                    1. 4

                                                      I’ve used a variety of VCS over the years, including both git & hg, the latter of which has always felt easier to use than the former - not so much because hg is particularly easy but because git is particularly confusing. It’s kinda like how Linus’ other major project was for many years until Ubuntu decided to make it easier to use.

                                                  1. 6

                                                    The number of times I’d wished I had this feature on Travis.

                                                    Instead you just end up blindly pushing changes to the branch in the hope that it works :P

                                                    1. 4
                                                      1. 3

                                                        Only on Travis-ci.com (the paid version), and not Travis-ci.org (the free version).

                                                        1. 4

                                                          sr.ht is also a paid service, right?

                                                          1. 4

                                                            It’s up to you whether to pay or run the exact same free software on your own infra.

                                                            1. 2

                                                              Is it easy to run on your own? That’s kind of cool. I may pay them anyway but still run it myself.

                                                              1. 9

                                                                https://man.sr.ht/installation.md

                                                                Reach out to the mailing list if you run into trouble :)

                                                                1. 1

                                                                  Wow, cool! Thanks :)

                                                              2. 1

                                                                You can also run travis-ci.org on your own infra (I currently do this) but there isn’t a lot of info about it.

                                                            2. 3

                                                              The trick is that for public repos, you have to email support: https://docs.travis-ci.com/user/running-build-in-debug-mode/#enabling-debug-mode
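
                                                              Going from those docs (job id and token are placeholders), the trigger is an API call along these lines:

                                                              curl -s -X POST \
                                                                -H 'Travis-API-Version: 3' \
                                                                -H "Authorization: token $TRAVIS_TOKEN" \
                                                                https://api.travis-ci.org/job/$JOB_ID/debug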

                                                              1. 1

                                                                Weird… I guess that they’re trying to prevent wasted containers by adding manual process in the middle?

                                                                1. 2

                                                                  It’s a security risk, especially for public repos.

                                                                  1. 2

                                                                    Eeeek, that’s rough. builds.sr.ht’s SSH access uses the SSH keys we already have on your account for git authentication et al.

                                                                    1. 1

                                                                      You get that from GitHub, too. But I also think it doesn’t help, because GH projects are liberal with adding people to orgs/repos, and while they can be grouped, there’s no way to assign structured roles. GH as an identity provider is mediocre at best.

                                                                    2. 1

                                                                      Like, in terms of things which they may do in the shells, DDoSing by creating too many, etc? They use your SSH key from GitHub to prevent others from SSHing in, right?

                                                                      1. 4

                                                                        They use your SSH key from GitHub to prevent others from SSHing in, right?

                                                                        Not AFAIR. It gives a temporary login/password in the build log (which is public). And anyone who logs in can see the unencrypted secrets (e.g. API keys used for pushing to GitHub).

                                                                        1. 1

                                                                          oooooooh… yipes. Super dangerous. CircleCI uses SSH keys to improve on this.

                                                                2. 1

                                                                  Aren’t they doing some funky reorganization to eliminate the split? I haven’t looked closely so I might be wrong.

                                                                3. 2

                                                                  I guess I’ve just been too cheap to pay then ;)

                                                                4. 1

                                                                  This feature is on Travis, but their new configuration API is so excruciatingly painful and so lacking in reasonable documentation that it fails to help when it’s really needed.

                                                                  1. 1

                                                                    With Gitlab you can debug CI stages on your machine by starting a local Gitlab Runner.
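
                                                                     Something like this, if memory serves (job name hypothetical):

                                                                     gitlab-runner exec docker my-job   # run one job from .gitlab-ci.yml locally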

                                                                  1. 15

                                                                    I am really pleased with sourcehut. Neovim project needed CI for OpenBSD and FreeBSD. I was dreading the yak-shaving, but spent ~20 minutes and now we are set up: https://builds.sr.ht/~jmk/neovim

                                                                    1. 2

                                                                      Agreed. It was very simple :)

                                                                    1. 6

                                                                       On the topic of vi historical clones, you should probably mention vip (from 1986) or vi (from at least ’87) as vi emulators for Emacs instead of viper. I find it far more fascinating that they already made these things just a few years after the project started.

                                                                      Also, what’s your opinion on modern vi relatives like vis?

                                                                      1. 4

                                                                         Like so many things in technology, I like the idea of vis as a second system, an improvement over what we have learned in vi and vim. The issue is that the first system is so entrenched: not only code written (as with languages) but, in the case of Vim, habits and automatic actions ingrained in my brain.

                                                                        1. 2

                                                                          vis as a second system, an improvement over what we have learned in vi and vim. The issue is that the first system is so entrenched

                                                                          vis is really neat. But people forget that legacy (“entrenched”) is valuable. So when people say “X is better except for lack of ubiquity”, that’s like saying “X is better except it’s not”.

                                                                          1. 1

                                                                            I agree with you. Comparing individual aspects of software it’s possible to say one is better, but including other, non-technical aspects, could allow for a different outcome. It reminds me that software is more than just software, it’s people too.

                                                                      1. 13

                                                                        Vim.org has had a plugin registry … However it wasn’t until about 2008 that the notion of a plugin manager really came into vogue.

                                                                         Indeed. Pathogen is the central enabler of Vim’s sustained popularity in the last 10 years, yet Bram had never even heard of it until someone mentioned it during the “packages” work. Without a runtimepath solution, the Vim plugin community would be a small fraction of what it is today. Vimball was atrocious.

                                                                        Dependencies (plugins) are a feature. Emacs discovered this with ELPA/MELPA. Go, Rust, NPM, all are examples of treating dependencies as a primary feature.

                                                                        Version 8 included some async job support due to peer pressure from NeoVim, whose developers wanted to run debuggers and REPLs for their web scripting languages inside the editor.

                                                                        Not sure what “web scripting” has to do with it, but anyways the URL links to a discussion involving zero Neovim developers.

                                                                        Note that you’ll need to mkdir ~/.vim/{swap,undodir,backup} or else Vim will fall back to the next available folder in the preference list.

                                                                        Not needed in Neovim, which does that for you automatically.

                                                                        1. 3

                                                                           A rambling screed with a lofty title. “Economics” is not a synonym for “money”, but this “internet veteran” prefers the one with more syllables.

                                                                          npm does not love you. npm cannot love you. npm Inc is a Delaware corporation founded as a financial instrument intended to turn money into more money for a handful of men.

                                                                          Neither do volunteer organizations love you. Or rather their love is empty, because they’re subject to resource constraints like everyone else.

                                                                          Federation spreads out costs.

                                                                          Could have said that in fewer words. If the “mirror” model popular with linux distros can be made turnkey, then yes, you can spread out the costs and pretend that you’re distributed. But you still haven’t solved the incentive problem.

                                                                          1. 2

                                                                             The story is about how constraints and financial requirements shape incentives around NPM Inc, so I think “economics” is an apt term… but I suppose that’s a minor side track.

                                                                            their love is empty, because they’re subject to resource constraints like everyone else.

                                                                            Saying that because both organizational structures are subject to resource constraints means they are both incapable of “love” is equivalent to saying that because parents are subject to resource constraints they are incapable of love.

                                                                            We are all subject to resource constraints. The love shows in what actions you take in response to them.

                                                                            1. 1

                                                                              … is equivalent to saying that because parents are subject to resource constraints they are incapable of love.

                                                                               If the parents claim to love thousands of people they’ve never met, then yes, the love will prove as generous as the resources.

                                                                          1. 4

                                                                            It turns out that developers were creating more and more bugs, only to fix them afterward and get the prize.

                                                                            I’m skeptical this ever really happened. Certainly not nearly as many times as I’ve heard the story.

                                                                            Anyway, people are not immediately praised for not creating bugs, but that doesn’t mean there’s no recognition long term. Write a program in a space that has a common class of bugs. Wait for competitions to have those bugs. Not have those bugs. Receive praise.

                                                                            1. 2

                                                                               I think you are right; OpenBSD is based around security, which is really long-term bug prevention - and they get praise for it.

                                                                              1. 4

                                                                                The vast majority of unix users are using Linux. Praise of BSDs is rather like a pat on the head.

                                                                                and they get praise for it.

                                                                                “praise” in that sense is quite cheap. If users are praising BSD while using Linux, it means nothing. If praise is your standard for success then it’s easy to DDOS your capacity.

                                                                                1. 0

                                                                                   Well, the OpenBSD foundation also seems to get a decent amount of donations.

                                                                                  1. 3

                                                                                    Compared to what? SUSE, one of the non-Red Hat companies, was pulling in tens of millions for Linux. Red Hat and IBM were putting in even more. Then there’s whatever the Linux Foundation gets. Then that core infrastructure fund or whatever it was probably invests in lots of Linux projects. OpenBSD gets pennies in comparison.

                                                                                    It’s also funny you use OpenBSD as an example given this.

                                                                                    1. 0

                                                                                       You just listed a 2006 article; the year now is 2019 - fairly certain things have changed. Obviously it isn’t in the tens of millions, but the OpenBSD foundation doesn’t offer commercial services either, so it seems like apples and oranges.

                                                                                      1. 2

                                                                                        In 2008, they got a total of $500,000 in funding per the website. That’s five developers in Silicon Valley. Maybe ten in an area with better cost of living. That’s not much money or praise for a large, high-quality platform quite a few companies depend on and more should probably depend on if it’s security-critical software. Meanwhile, this toy picked at random just got $6 million in funding pledges. The Pebble thing got $20+ million.

                                                                                        “but OpenBSD foundation don’t offer commercial services either”

                                                                                        I’ve always thought that was a mistake. It’s definitely good to not be beholden to companies that might demand cruft to be in there. That could be addressed with a layered or parallel offering. At the least, they could be charging for support, some enterprise features, and so on to generate funding for the project. Maybe some vacation money for the developers, too. :)

                                                                                         The counterexample is OpenVMS. There are cultural similarities between the OpenVMS and OpenBSD teams despite BSD folks’ disdain for it. It was a system built by engineers for engineers with a focus on quality and security. They had dedicated weeks to finding bugs instead of adding features. Instead of giving it away for free, they actually sold it. The result was a marketable, ultra-reliable system built by engineers who actually got paid. If the developers don’t, some company should try to build something marketable to governments and big companies on OpenBSD, doing something similar. They can donate some percentage of their revenues back to the project.

                                                                            1. 8

                                                                              Worth watching. Key ideas:

                                                                              • OOP model is “interacting agents” managing mutable state.
                                                                              • FP model is an input->output pipeline.
                                                                              • To avoid mutable state and other side-effects:
                                                                                1. minimize: avoid/eliminate where possible
                                                                                2. concentrate: keep it in a central place
                                                                                3. defer: queue operations to the last step or an external system