Threads for sarna

    1. 8

      .. with zero days off …

      I’m not sure if this was a brag or not—and if it was, that’s unhealthy. It reminds me of previous generations bragging about ignoring their mental health, and claiming that the youth these days aren’t as tough as them.

      That is to say, I’ve come to believe the era typified by the enthusiast programmer—autodidactic, obsessive, and antisocial—is drawing to a close.

      It seems silly and wrong to typify that generation as all (at least those of note, a.k.a. 10x developers) having negative qualities like being antisocial. Perhaps they’re the loudest and most visible bunch, but that doesn’t mean they’re the majority, let alone the majority by a huge margin.

      Someone thinking this, though, isn’t surprising. When will there not be someone who thinks this about the relevant field? What about photographers from decades ago vs those using the latest mirrorless cameras?

      Overall, I’m not a fan of this post, and that’s as an enthusiast programmer from a similar time, but one who doesn’t share those negative qualities. That’s not to say I don’t have any negative qualities (!), just not the ones the author uses to typify themselves. ;)

      Update: I will add that I have thought similarly before about how different things are now vs when I first learnt to program, and there are definitely pros and cons to that. E.g. I share the sentiment that I’m happy I was able to learn when I did, for many reasons! I can also appreciate the sentiment that there’s less of the hacking-and-discovery fun now that programming has become a commercial commodity. That fun is still there though, and always will be, including for younger programmers.

      1. 15

        I’m not sure if this was a brag or not—and if it was, that’s unhealthy. It reminds me of previous generations bragging about ignoring their mental health, and claiming that the youth these days aren’t as tough as them.

        I think that’s completely true. The article also bemoans the fact that ‘passionate’ is no longer considered automatically positive and the author doesn’t seem to understand why. It’s great to have people who are passionate about their work. I, personally, would far rather work with people who care about what they’re doing than people who don’t. The problem is that a lot of companies used this as an excuse to pay people badly and discourage a healthy work-life balance.

        When people join my team, I always have a talk with them where I point out that productivity studies have consistently shown that net productivity for people doing ‘knowledge worker’ tasks peaks at 20 hours a week, plateaus until 40, and then drops off. If you’re working 60-hour weeks, you can very easily make a mistake that takes the whole of the next week to fix. I want the most productive 20 hours from people each week and I don’t care what they do the rest of the time (I also don’t really care where they are for those 20 hours, as long as no one is blocked because they can’t reach them).

        This is over a sustained period: it’s fine for a lot of people to work 60 hours a week occasionally to get something finished, but they need to make sure that they then take some time off to recover afterwards. My job as a manager is to ensure that you get those 20 hours when you can make good progress without anything interfering.

        1. 2

          When people join my team, I always have a talk with them where I point out that productivity studies have consistently shown that net productivity for people doing ‘knowledge worker’ tasks peaks at 20 hours a week, plateaus until 40, and then drops off.

          Would you mind posting some links to the studies you mentioned? Based on my experience it’s absolutely true, but I haven’t been able to find any papers myself.

    2. 2

      I don’t find that the responses match the quality of the question. I do think that the value of Lisp pre-90s was that it was an established dynamic language with little to no alternative (see Ron Garrett’s Corecursive interview). As the author mentioned, that’s no longer the case. I remember talking to people who were professionally working with Lisp/Scheme in Montreal in the mid-2000s, being amazed that there was a business (voice processing and automata) built using Scheme (an unusual choice; there was no Clojure at the time). These guys loved the language but were in the process of rewriting to Java. I asked why, and they said that over time, the Scheme program developed too many abstractions/vocabularies, which made it very hard to maintain and for new people to get on board.

      I think if you look at Guix vs Nix, you’ll see a similar pattern: Nix has its own weird little language, but it’s easy-ish to pick up. Guix uses a heavily customized Scheme and is quite hard to pick up (I tried both and know Scheme). I think this maintainability gap is probably a key contributor to Lisp/Scheme being largely academic today (except for Clojure and SBCL, which were both born out of practical needs).

      1. 2

        You can build up too many abstractions in any sufficiently powerful language (I would put Java 8+ in this camp).

        I don’t agree that DSLs make stuff harder to maintain. Rails is a good example - many wildly successful companies still use it. Companies that switched mostly did it because they needed unusually high perf (supposedly), not because it was hard to maintain. Meanwhile, Shopify has close to 3 million lines of code in their monolith and their Black Friday stats are insane - more than 40 petabytes of traffic in a day. If only Lisps had cuter syntax…

        1. 2

          Ruby smells a lot like a Lisp when you start digging around its internals.

          1. 3

            Actually, Matz himself has admitted this:

            Ruby is a language designed in the following steps:

            • take a simple lisp language (like one prior to CL).
            • remove macros, s-expression.
            • add simple object system (much simpler than CLOS).
            • add blocks, inspired by higher order functions.
            • add methods found in Smalltalk.
            • add functionality found in Perl (in OO way).

            So, Ruby was a Lisp originally, in theory. Let’s call it MatzLisp from now on. ;-)


            Ruby’s kind of a Lisp without S-expressions, with macros replaced by run-time OO metaprogramming. It’s a good language.
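            As a concrete illustration of that point (my own sketch in plain standard Ruby, not something from the thread): the class body is ordinary executable code, so run-time OO metaprogramming can generate methods from a data table while the class is being defined, much like a Lisp macro would generate code at expansion time.

            ```ruby
            # Methods generated at class-definition time via define_method --
            # run-time metaprogramming standing in where a Lisp would use a macro.
            class Temperature
              CONVERSIONS = {
                celsius:    ->(k) { k - 273.15 },
                fahrenheit: ->(k) { (k - 273.15) * 9.0 / 5 + 32 },
              }

              # The class body is just code that runs, so we can loop over
              # the table and define one method per entry.
              CONVERSIONS.each do |unit, conv|
                define_method("to_#{unit}") { conv.call(@kelvin) }
              end

              def initialize(kelvin)
                @kelvin = kelvin
              end
            end

            t = Temperature.new(300.0)
            puts t.to_celsius.round(2)     # 26.85
            puts t.to_fahrenheit.round(2)  # 80.33
            ```

            The difference from a macro is that this happens when the class body executes, not at compile time - which is exactly why it fits the image-less, run-time-everything style the comment describes.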

            1. 3

              Makes me wonder why Dylan never caught on - that was Apple’s take on Lisp with “normal” syntax. Perhaps just wrong place, wrong time, and no killer app like Rails?

            2. 1

              I worked with Ruby for many years and it is nice in the sense that it is the closest to Lisp that most people can get at a mainstream web dev job. JavaScript is still better when it comes to functional programming, though, since ‘function’ is much closer to lambda than anything in Ruby. There’s something in Ruby’s design that leads to the Proc, Block, Lambda mess. I wish Matz had truly embraced higher-order functions, but hey, he made the thing he wanted and it’s pretty OK.
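              For anyone who hasn’t run into that mess, this sketch (plain standard Ruby, nothing specific to the thread) shows the three callable flavors and how they differ on arity checking, return semantics, and whether they’re objects at all:

              ```ruby
              # Three callable flavors with subtly different semantics.

              add_l = lambda { |a, b| a + b }  # strict arity, local `return`
              add_p = proc   { |a, b| a + b }  # lenient arity, non-local `return`

              add_l.call(1, 2)     # => 3
              add_p.call(1, 2, 9)  # extra argument silently dropped => 3
              # add_l.call(1, 2, 9) would raise ArgumentError

              # `return` inside a lambda exits just the lambda; inside a proc it
              # exits the enclosing method.
              def with_lambda
                lambda { return :inner }.call
                :after
              end

              def with_proc
                proc { return :inner }.call
                :after  # never reached
              end

              with_lambda  # => :after
              with_proc    # => :inner

              # Blocks are a third thing: anonymous, passed implicitly, invoked
              # with `yield`; they only become Proc objects if captured with `&`.
              def apply_twice(x)
                yield(yield(x))
              end

              apply_twice(3) { |n| n * 2 }  # => 12
              ```

              A JavaScript `function` has none of these splits - one callable kind, always a first-class value - which is what the comparison above is getting at.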

    3. 2

      In particular, image-based development is a rarity nowadays, a Galápagos island feature that is undesirable in many contexts, but it’s the thing that makes it possible to have Turing-complete macros that are defined in the same place as the code, without needing to involve a build system.

      Racket has “Turing-complete macros that are defined in the same place as the code”, and it doesn’t use the image model. I’m pretty sure most (if not all) Schemes let you do that too. Or am I missing something?

    4. 7

      I tried using Fossil for my personal projects but discovered that the following two Git features have become essential for my workflow: (1) partial file commits and (2) rewriting commit history. Fossil doesn’t have (1) and is specifically designed to disallow (2). As much as I liked the built-in bug tracker and wiki, I couldn’t use Fossil and went back to Git.

      1. 5

        We have Fossil repos at work, and aside from the fact that Fossil integrates with almost nothing, these two things are huge pain points, especially the lack of history rewrites.

        Most repos have been moved away from Fossil at this point.

      2. 4

        For (1), Fossil docs recommend stashing the changes and splitting them into patches:

        fossil stash save -m 'my big ball-o-hackage'
        fossil stash diff > my-changes.patch

        For each git add -p you’d call fossil stash diff instead. I agree it’s a bit less convenient; an interactive version would be nice.

        For (2), what’s your use-case? For me, Fossil removes a lot of the cases for which I would use history rewriting in Git. Note that Fossil also has amend, which changes history non-destructively - you can still peek at the previous state. As for the lack of rebases, there’s a good write-up here.

        EDIT: you can also delete content when really needed.

    5. 10

      Software use is a social phenomenon, not a technical one.

      1. 3

        It is! Fossil was made for projects with a small number of contributors that trust each other, where drive-by contributions are rare and changes are shared often and openly (more info on that here).

        For me, that’s the model for the vast majority of the repositories I contribute to.

    6. 11

      Fossil is like a git+Gitea all-in-one self-contained executable, and I run it to manage my personal notes at home. It’s pretty wild in that it stores tickets, wiki, user management and such in the repository, which is just an SQLite database. This makes backups super easy.

      If you want an example of what the interface looks like, see Pikchr.

      It’s easy to self-host at home, and you can run the webserver from your repo locally to update tickets, wiki, markdown docs, and so on and then sync back. It “just works” – but handles syncing a bit differently than git. I don’t need to push when committing; it syncs back automatically if I’m on my network. There’s a bit more to it than this, and it suffers from the same DVCS issues for making games as git, such as dealing poorly with large binaries – I know about git-lfs, but Perforce really is as close to the gold standard as you can get.

      I might be doing more with Fossil, since the scraping of GitHub for AI training makes me not want to post code publicly anymore, and self-hosting on Linode or something would give me better control if I wanted contributors.

      1. 4

        I think it’s worth adding that the entire binary is 4 megabytes.

      2. 3

        Thank you for mentioning these features. I’m easily nerd-sniped, but I’ve gotten better at not running after the nearest shiny thing. git works well enough for me (and we use a type of hg at work), and my experience is that I adapt to whatever other people use and just use it. I think most widely used versioning systems work about the same for most cases.

    7. 6

      I have very much the same point of view. After 10+ years of using only Apple’s ecosystem, their current CEO managed to “break” me and force me to move to FreeBSD+Linux workstations. There are just too many hostile UI/UX changes shipped with security patches, and for some years now Apple hasn’t felt user-friendly (at least not power-user-friendly), only investor-friendly.

      Their support’s responses to feedback and bug reports, phrased like “the feature works as intended”, show a lack of concern for users’ experience.

      1. 8

        Sneaking changes in the user/vendor power relationship into security patches is the ultimate in cynical perversion.

      2. 6

        As I get older, I do find that it’s more likely that any change to software is a change in a direction I don’t like, because by now I’ve set up a system that works for me.

        1. 2

          I agree with this, and for me at least it goes even a bit further. I can often recognize the value in something new, but I still may not want to start using it. As an example, neovim has a ton of IDE-style abilities (powered by LSP) that part of me thinks are probably objectively good and helpful, but that I never use. Why not? Because it feels like I would have to effectively relearn how to use my editor, and I do not want that. I guess my real point is that “a direction I don’t like” can often involve something I don’t like (overall) even though I recognize its value and (sometimes) even regret that I’m not more flexible.

          1. 3

            There’s just too much new stuff coming out; this industry moves too fast. At least when you’re a bit older you learn to dismiss silly stuff pretty effectively. And even then, occasionally something cool comes out but you don’t have the time and energy to learn it. So you file it away, knowing it may one day be a useful tool (hopefully not forgetting that “there was this thing” when the time comes that you need it).

            Of course, once in a while you still get blindsided, and the “silly thing” was actually useful and takes over the world (e.g. I dismissed React based on one failed experiment, and dismissed git for having a shitty UI). And other times you do invest a lot of time into something promising and clearly superior to other options, only to see it get ignored and dismissed by others and it fades into obscurity (like in my case Git’s contemporary DVCS competitors, NetBSD, Scheme).

            But you have to manage your energy in some way by investing time strategically, or you’ll burn out. For me, I’m fortunate that I spent so much of my younger years “believing in” Open Source and Linux over proprietary software and Windows/.NET, and later PostgreSQL over MySQL. Those choices paid dividends (although being too dogmatic probably also cost me a lot of opportunities at the start of my career - in some ways it’s better to be a bit more pragmatic and invest time once something you already deem good appears to be growing more popular).

      3. 1

        I’m in a bit of a middle ground at the moment, using macOS/iOS/iPadOS on my laptop and mobile devices, with a reasonably powerful PC running Gentoo (and some scripts to spin up short-lived spot instances on AWS) that I can remote into when I need Linux.

        I like the hardware, and the level of integration when you have multiple Apple products (yes, yes, “you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem” /s). Fortunately, I’ve been lucky enough that recent UI changes haven’t impacted my workflow much.

      4. 1

        What hostile UX changes do you have in mind? I tried looking it up, but couldn’t find anything.

        1. 1

          Off the top of my head:

          • The OS X installation process used to ask whether the user wants to share data with Apple; macOS only informs the user what will be shared. These invasive changes were introduced alongside “dark mode”, and tech media focused on the color-scheme feature.
          • I kept installation files for multiple OS X versions in /Applications and considered them safe, but Apple modified them during one of the system updates.

          I switched from macOS some time after the 2017 MBP keyboard fiasco and don’t remember workflow-disrupting UI changes in macOS, but as I still use an iPhone, I can mention: the “rich” text mode in Mail, Notes and several other apps, which unnecessarily occupies screen space; the “search” button on the home screen, even though swiping down still has the same functionality; unnaturally over-tweaked pictures (a 3rd-party app can be used to access the “raw” versions); and the inability to switch off the “volume too loud” limit in many world regions (Apple’s headphones have a good upper volume level, but JBL ones, for example, are quieter). And this really is just the “top of my head” list.

          The M1 has great performance and battery life though, and I’m considering buying an M1/M2/M3(?) once Asahi Linux supports all of its hardware well.