1. 5

    Regular expressions are a great example of where Lisp/Scheme’s Code = Data approach shines. Take the Scheme Regular Expression SRFI, or Elisp’s rx. The same could be done in other languages, but writing something like

    RE urlMatcher = new RE.Sequence(       // start matching a URL
        new RE.Optional("http://"),
        new RE.OneOrMore(
            new RE.Group("domain",
                new RE.Sequence(new RE.OneOrMore(RE.Word), ".")
            )
        ),
        // etc. ...
    );
    

    is far more cumbersome, even if it would allow you to insert comments, structure the expression, and statically ensure that the syntax is valid.

    1. 4

      There are libraries that do this: https://github.com/VerbalExpressions

      tester = (verbal_expression.
                  start_of_line().
                  find('http').
                  maybe('s').
                  find('://').
                  maybe('www.').
                  anything_but(' ').
                  end_of_line()
      )
      
      1. 3

        I still find

        (rx bol "http" (? "s") "://" (? "www.") (* (not (any " "))) eol)
        

        nicer, plus it evaluates at compile-time. But good to know that other languages are thinking about these ideas too.

        1. 6

          So the regexp this represents is this one I think?

          ^https?://www\.[^ ]+$
          

          I don’t know … I find the “bare” regular expression easier. The Scheme/Lisp variant in particular essentially uses the same characters, just with more syntax (e.g. (? "s") instead of s?). Maybe the advantages are clearer with larger examples, although I find the commenting solution mentioned in this post much better, as it allows you to clearly describe what it’s matching and/or why.

          1. 3

            This example is rather simple, but in Elisp I’d still use it, because it’s easier to maintain. I can break the line wherever I want and insert real comments. Usually it’s more verbose, but in one case I even managed to write a (slightly) shorter expression using rx than with a string literal, because of escape symbols:

            (rx (* ?\\ ?\\) (or ?\\ (group "%")))
            

            vs

            "\\(?:\\\\\\\\\\)*\\(?:\\\\\\|\\(%\\)\\)"
            

            but with more syntax (e.g. (? “s”) instead of s?).

            If that’s the issue, these macros usually allow the flexibility to choose more verbose keywords. The example from above would then become (combined with the previous points)

            (rx line-start
                "http" (zero-or-one "s") "://"	;http or https
                (zero-or-one "www.")		;don't require "www."
                (zero-or-more (not (any " ")))	;just no spaces
                line-end)
            

            Edit: With Emacs 27 you can even extend the macro yourself, to add your own operators and variables.

            1. 2

              Right; that example shows the advantages much more clearly. It still looks kinda unnatural to me, but that’s probably just lack of familiarity (both with this method and Scheme in general; it’s been years since I did any Scheme programming, and never did much in the first place: my entire experience is going through The Little Schemer and writing two small programs). But I’m kinda warming to the idea of it.

              One way you can do this in other languages is by adding a sort of sexpr_regex.compile() which transforms it to a normal regexp object for the language:

              regex = sexpr_regex.compile('''
              	(rx line-start
              		"http" (zero-or-one "s") "://"	;http or https
              		(zero-or-one "www.")		    ;don't require "www."
              		(zero-or-more (not (any " ")))	;just no spaces
              		line-end)
              ''')
              

              And then regex.match(), regex.find(), what-have-you.
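
              A very rough sketch of how such a translator could look (nothing here is a real library; the operator names just mimic rx, and it only covers the handful of forms used above):

              import re

              # Hypothetical sketch only: parse a tiny rx-like s-expression and
              # translate it into an ordinary regex string. No error handling.
              def parse_sexpr(text):
                  tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()]+', text)
                  def read(pos):
                      if tokens[pos] == "(":
                          node, pos = [], pos + 1
                          while tokens[pos] != ")":
                              child, pos = read(pos)
                              node.append(child)
                          return node, pos + 1
                      return tokens[pos], pos + 1
                  return read(0)[0]

              def to_regex(node):
                  if isinstance(node, str):
                      if node.startswith('"'):
                          return re.escape(node[1:-1])      # string literal
                      return {"line-start": "^", "line-end": "$"}[node]
                  op, *args = node
                  if op == "rx":
                      return "".join(to_regex(a) for a in args)
                  if op == "zero-or-one":
                      return "(?:" + "".join(to_regex(a) for a in args) + ")?"
                  if op == "zero-or-more":
                      return "(?:" + "".join(to_regex(a) for a in args) + ")*"
                  if op == "any":                           # (any " ") -> [ ]
                      return "[" + "".join(a[1:-1] for a in args) + "]"
                  if op == "not":                           # (not (any ...)) -> negated set
                      return "[^" + to_regex(args[0])[1:]
                  raise ValueError("unsupported operator: " + op)

              pattern = to_regex(parse_sexpr(
                  '(rx line-start "http" (zero-or-one "s") "://"'
                  ' (zero-or-one "www.") (zero-or-more (not (any " "))) line-end)'))
              url = re.compile(pattern)   # roughly ^http(?:s)?://(?:www\.)?(?:[^ ])*$
              assert url.match("https://www.example.org")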

              Dunno if that’s worth it …

              1. 1

                It would be possible, but you’d lose the fact that rx and similar macros can be expanded and checked at compile-time.

                1. 2

                  You already don’t have that in those languages anyway, so you’re not really losing much. And you can declare it as a package-level global (which isn’t too bad if you’re consistent about it) and throw an exception if there’s an error, so you’ll get an error on startup, which is the next best thing after a compile-time check. You can also integrate it in your test suite.
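
                  To make that concrete with plain Python re (the module layout and names are just for illustration): a pattern compiled at module level raises re.error the moment the module is imported, so a broken pattern fails at startup, and any test that imports the module catches it too.

                  # patterns.py -- compiled once at import time; a malformed pattern
                  # raises re.error on startup rather than at first use.
                  import re

                  URL_RE = re.compile(r"^https?://(?:www\.)?[^ ]*$")

                  # test_patterns.py -- importing the module in the test suite is
                  # already enough to catch a pattern that no longer compiles.
                  def test_url_re_compiles():
                      import patterns
                      assert patterns.URL_RE.match("https://www.example.org")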

                  1. 2

                    Well, Elisp does (byte compilation), and some Schemes do too (but SRFI 115 couldn’t make use of it in most cases anyway).

          2. 2

            I agree – the problem with the chained-builder approach is that it’s shoehorning the abstract idea of a regex into the syntax of a host programming language. Designed from first principles, a regex syntax is much more likely to look like the s-expression syntax.

            1. 2

              That is really pretty. Is this elisp?

              1. 1

                Yes.

              2. 1

                An aside about compile-time: There exists a C++ compile-time regex parser, surely one of the most terrifying examples of template metaprogramming ever. It’s also quite feasible to compile regexes at compile time in Rust or Nim thanks to their macro facilities.

                That LISP syntax is nice, though.

                1. 1

                  That sounds very appealing to me!

                  1. 1

                    Do you have a link to some site that describes the C++ parser?

                    1. 1

                      No, sorry, or I’d have given it. I just remember a video of a conference presentation, by a woman with a Russian name…

                      Edit: a quick search turned up this, which looks familiar: https://youtu.be/3WGsN_Hp9QY

              3. 1

                I program in Lua, and there, I use LPEG. There’s a submodule of LPEG that allows one to use a BNF-like syntax. Here’s the one I constructed from RFC-3986.

              1. 2

                I think the end result was much worse. But I’m not sure if I just disagree with trying to make regexes more legible with those techniques, or if the specific example is just too simple to illustrate the advantages (the original example regex was perfectly clear to me).

                1. 12

                  To say “YAML has its oddities” is like saying “the ocean is somewhat damp”.

                  I can only assume that every adoption of YAML has been by people who heard about it being used, but had never actually used it in any depth themselves. To believe otherwise is to believe that those people willingly opted in to the ridiculous semantics, syntax and parsers that surround YAML.
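
                  For anyone who hasn’t been bitten yet, one small, frequently cited example of those semantics, shown here with PyYAML (which resolves plain scalars using YAML 1.1’s boolean rules):

                  import yaml  # PyYAML

                  # An unquoted `no` is a boolean under YAML 1.1 rules, so a country
                  # code silently turns into False instead of the string "NO".
                  print(yaml.safe_load("country: no"))    # {'country': False}
                  print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}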

                  1. 3

                    Agreed. I’d rather use XML than YAML.

                    1. 3

                      I’m not sure.

                      I certainly miss the ability to check whether a document is both syntactically valid and semantically conformant to a schema.

                      But OTOH yaml is so quick and easy to write and read… I think it’s good that it’s being used in stuff like ansible/kubernetes.

                      1. 5

                        But OTOH yaml is so quick and easy to write and read… I think it’s good that it’s being used in stuff like ansible/kubernetes.

                        I don’t think this is true – how can something be “quick and easy to write and read” when even the YAML parsers themselves disagree with each other on what’s valid YAML?

                        I think it’s good that it’s being used in stuff like ansible/kubernetes.

                        It certainly fits the quality standards of Go software. :-)

                        1. 4

                          I don’t think this is true – how can something be “quick and easy to write and read” when even the YAML parsers themselves disagree with each other on what’s valid YAML?

                          Because the problematic bits are <1% of the spec and everyone mostly uses the other 99%.

                          1. 2

                            Just imagine how people would absolutely lose their minds if the same were true of XML.

                            1. 2

                              I’m not saying that 1% isn’t a problem. But I think the syntax conveniences of YAML over XML explain its popularity despite the spec issue.

                          2. -1

                            Regarding Go software and Kubernetes in particular… Can you name a comparably feature-rich and production-ready open source alternative?

                            I agree that Kubernetes introduces its own complexity and everything… Yet it’s probably one of the best alternatives we have right now.

                            1. 3

                              I think the alternative is not using Kubernetes (and perhaps seriously reflecting on all the wrong life choices made that resulted in thinking one needs Kubernetes in the first place).

                              1. -2

                                In other words: you can’t. Case closed.

                          3. 1

                            The use of significant white space instantly discounts any “easy to write” claim as bullshit.

                      1. 5

                        Never heard of Synology, but the article linked to a NAS company and didn’t mention whether their software is commercial or FOSS.

                        1. 4

                          I’m fairly sure it’s predominantly closed source.

                          1. 1

                            Synology are proprietary, but I think the underlying OS is based on OpenBSD.

                            1. 3

                              It is Linux-based.

                              1. 1

                                Perhaps you’re thinking of TrueNAS which is based on FreeBSD. (A Debian-based version is also in the works.)

                            1. 10

                              My feeling is Nextcloud is compromising the quality of its core features by expanding out to try to do everything else (the shit-quality apps Kev talks about). Although, for the same reason, I don’t mind it lacking a backup app: I’d rather use a first-class backup tool outside Nextcloud than rely on them to get that right.

                              1. 1

                                That’s a great point actually - I’d rather use a first-class backup solution than a half-baked one.

                                1. 1

                                  Nextcloud the company only focuses on some of those apps, and the “Official” label in the app store doesn’t necessarily mean that the company is involved in developing that app. It can be confusing to figure this out though, especially because a lot of people from the company still support and help out with the community-developed apps even though they aren’t necessarily the company’s priority.

                                1. 98

                                  I don’t write tests to avoid bugs. I write tests to avoid regressions.

                                  1. 13

                                    Exactly. I dislike writing tests, but I dislike fixing regressions even more.

                                    1. 6

                                      And I’d go even further:

                                      I write tests and use typed languages to avoid regressions, especially when refactoring.

                                      A test that just fails when I refactor the internal workings of some subcomponents is not a helpful test – it just slows me down. 99% of my tests are on the level of treating a service or part of a service as a black box. For a web service this is:

                                      test input (request) -> [black box] -> mocked database/services
                                      

                                      Where black box is my main code.

                                      For NodeJS the express/supertest combo is awesome for the front bit. I wish more web frameworks in Rust etc. also had this, i.e. ways to “fake run” requests through without having to faff around with servers/sockets (and still be confident it does what it should).
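
                                      For comparison, a rough Python equivalent of that pattern using Flask’s built-in test client (purely illustrative; the endpoint is made up):

                                      from flask import Flask, jsonify

                                      app = Flask(__name__)

                                      @app.route("/health")
                                      def health():
                                          return jsonify(status="ok")

                                      def test_health():
                                          client = app.test_client()    # runs the request in-process,
                                          resp = client.get("/health")  # no server or socket involved
                                          assert resp.status_code == 200
                                          assert resp.get_json() == {"status": "ok"}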

                                      1. 5

                                        Now the impish question: what is the correct decision if the test is more annoying to write than the regression is to observe and fix?

                                        1. 3

                                          Indeed!

                                          (I research ways[1] to avoid that. But of course they don’t apply when you’ve already chosen a stack and framework for development. In my day job we just make hard decisions about priority and ROI and fall back sometimes to code comments, documents or oral story-telling.)

                                          [1] https://github.com/akkartik/mu1#readme (first section)

                                          1. 2

                                            Every project is different, but ideally you can invest time in the testing infrastructure such that writing a new test is no longer annoying. E.g., maybe you can write reusable helper functions and get to the point where a new test means adding an assertion, or copy/pasting an existing test and modifying it a bit. The tools used (test harness, mocking library, etc.) also play a huge role in whether tests are annoying or not; spending time ensuring you’re using the right ones (and learning how to properly use them) is another way to invest in testing.

                                            The level of effort you should spend on testing infrastructure depends on the scope, scale and longevity of your project. There are definitely domains that will be a pain to test pretty much no matter what.

                                            1. 2

                                              In my experience such testing frameworks tend to add to the problem, rather than solve it. Most testing frameworks I’ve seen are complex and can be tricky to work with and get things right. Especially when a test is broken it can be a pain to deal with.

                                              Tests are hard because you essentially need to keep two functions in your head: the actual code, and the testing code. If you come back to a test after 3 years you don’t really know if the test is broken or the code is broken. It can be a real PITA if you’re using some super-clever DSL testing framework.

                                              People trying to be “too clever” in code can lead to hard to maintain code, people trying to be “too clever” in tests often leads to hard to maintain tests.

                                              Especially in tests I try to avoid needless abstractions and be as “dumb” as possible. I would rather copy/paste the same code 4 times (possibly with some slight modifications) than write a helper function for it. It’s just such a pain to backtrack when things inevitably break.
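
                                              A tiny sketch of that trade-off, pytest-style (the function and tests are invented just for illustration):

                                              # Function under test, only here to make the sketch self-contained.
                                              def apply_discount(order):
                                                  rate = {"regular": 0.0, "gold": 0.10}[order["customer"]]
                                                  return order["total"] * (1 - rate)

                                              # The "dumb" tests: some duplication, but each one reads top to
                                              # bottom and can be debugged in isolation three years from now.
                                              def test_discount_regular_customer():
                                                  assert apply_discount({"total": 100, "customer": "regular"}) == 100

                                              def test_discount_gold_customer():
                                                  assert apply_discount({"total": 100, "customer": "gold"}) == 90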

                                              It really doesn’t need to be this hard IMHO; you can fix much of it by letting go of the True Unit Tests™ fixation.

                                              1. 2

                                                I don’t disagree, and I wasn’t trying to suggest using a “clever” testing framework will somehow make your tests less painful. Fwiw I even suggested the copy / paste method in my OP and use it all the time myself :p. My main point was using the right tool / methods for the job.

                                                I will say that the right tool for the job is often the one that is the most well known for the language and domain you’re working in. Inventing a bespoke test harness and trying to force it on the 20 other developers who are already intimately familiar with the “clever” framework isn’t going to help.

                                                1. 2

                                                  Fair enough :-)

                                                  I will say that the right tool for the job is often the one that is the most well known for the language and domain you’re working in. Inventing a bespoke test harness and trying to force it on the 20 other developers who are already intimately familiar with the “clever” framework isn’t going to help.

                                                  I kind of agree because there’s good value in standard tooling, but on the other hand I’ve seen rspec (the “standard tool” for Ruby/Rails testing) create more problems than it solves, IMHO.

                                          2. 4

                                            When fixing testable bugs you often need that “simplest possible test case” anyway, so you can identify the bug and satisfy yourself that you fixed it. A testing framework should be so effortless that you’d want to use it as the scaffold for executing that test case as you craft the fix. From there you should only be an assert() or two away from a shippable test case.

                                            (While the sort of code I write rarely lends itself to traditional test cases, when I do, the challenge I find is avoiding my habit of writing code defensively. I have to remind myself that I should write the most brittle test case I can, and decide how robust it needs to be if and when it ever triggers a false positive.)

                                            1. 3

                                              +1

                                              This here, at the start of the second paragraph, is the greatest misconception about tests:

                                              In order to be effective, a test needs to exist for some condition not handled by the code.

                                              A lot of folks from the static typing and formal methods crowd treat tests as a poor man’s way of proving correctness or something… This is totally not what they’re for.

                                              1. 1

                                                umm…..aren’t regressions bugs?

                                                1. 9

                                                  Yes, regressions are a class of bug. The unwritten inference akkartik made when saying “I don’t write tests to avoid bugs” is that it refers specifically to writing tests to pre-empt new bugs before they can be shipped.

                                                  Such defensive use of tests is great if you’re writing code for aircraft engines or financial transactions; whereas if you’re writing a Christmas tree light controller as a hobby it might be seen as somewhat obsessive-compulsive.

                                                  1. 0

                                                    I-I don’t understand. Tests are there to catch bugs. Why does it matter particularly at what specific point in time the bugs are caught?

                                                    1. 8

                                                      Why does it matter particularly at what specific point in time the bugs are caught?

                                                      Because human nature.

                                                      Oftentimes a client experiencing a bug for the first time is quite lenient and forgiving of the situation. When it’s fixed and then the exact same thing later happens again, the political and financial consequences of that are often much, much worse. People are intensely frustrated by regressions.

                                                      Sure, if we exhaustively tested everything up front, they might never have experienced the bug in the first place, but given the very limited time and budgets on which many business and enterprise projects operate, prioritizing letting the odd new bug slip through in favor of avoiding regressions often makes a hell of a lot of sense.

                                                      1. 5

                                                        Not sure if you are trolling …

                                                        Out of 1000 bugs a codebase may have, users will never see or experience 950 of them.

                                                        The 50 bugs the user hits though – you really want to make sure to write tests for them, because – based on the fact that the user hit the bug – if it breaks again, the user will immediately know.

                                                        That’s why regression tests give you a really good cost/benefit ratio.

                                                        1. 3

                                                          A bug caught by a test before the bad code even lands is much easier to deal with than a bug that is caught after it has already been shipped to millions of users. In general the further along in the CI pipeline it gets caught, the more of a hassle it becomes.

                                                          1. 3

                                                            The specific point in time matters because the risk-reward payoff calculus is wildly different. Avoiding coding errors (“new bugs”) by writing tests takes a lot of effort and generally only ever catches the bugs which you can predict, which can often be a small minority of actual bugs shipped. Whereas avoiding regressions (“old bugs”) by writing tests takes little to no incremental effort.

                                                            People’s opinion of test writing is usually determined by the kind of code they write. Some types of programming are not suited to any kind of automated tests. Some types of programming are all but impossible to do if you’re not writing comprehensive tests for absolutely everything.

                                                            1. 2

                                                              The whole class of regression tests was omitted from the original article which is why it’s relevant to bring them up here.

                                                              1. 2

                                                                The article says “look back after a bug is found”. That sounds like they mean bugs caught in later stages (like beta testing, or in production).

                                                                If you define bugs as faults that made it to production, then faults caught by automated tests can’t be bugs, because they wouldn’t have made it to production. It’s just semantics, automated tests catch certain problems early no matter how you call them.

                                                                1. 1

                                                                  I’m of the same opinion. It means that the reason why we’re writing tests is not to catch bugs in general, but specifically to catch regression bugs. With this mindset, all other catching of bugs is incidental.

                                                          1. 11

                                                            This is one of the best articles I’ve read on the internet. No ifs, no buts.


                                                            Regarding

                                                            deoptimization can happen from the DFG to the Baseline JIT (this is likely also the case for Hotspot C2->C1)

                                                            and taking

                                                            I will often describe an optimization behaviour and claim that it probably exists in some other compiler.

                                                            into account, I think HotSpot only supports deopt-to-interpreter (that is C2→interpreter, C1→interpreter), not deopt-to-another-compile-tier (C2→C1).

                                                            Maybe someone else can chime in?


                                                            Edit: Having checked that person’s CV, I’m now considering a career change, because I obviously suck at computers.

                                                            1. 5

                                                              (I’m the author.) I did a deeper search on this and forgot to update, but I believe Hotspot does not deoptimise C2 -> C1. There’s no official statement on that, but general research into how Hotspot optimizes between tiers and their glossary page indicates that fairly clearly. Also, an O’Reilly book seems to state that too (https://www.oreilly.com/library/view/java-performance-the/9781449363512/ch04.html#Deoptimization)

                                                              The process of converting an compiled (or more optimized) stack frame into an interpreted (or less optimized) stack frame. https://openjdk.java.net/groups/hotspot/docs/HotSpotGlossary.html

                                                              I should probably just ask Cliff but guessing is fun (and I think this is pretty conclusive)

                                                              1. 3

                                                                I’ve enjoyed your blog’s random colours!

                                                            1. 13

                                                              Dynamic linking is crucial for a proper separation between platform and applications, especially when one or both are proprietary. Would Win32 applications that were written in the 90s still run (more or less) on Windows 10 if they had all statically linked all of the system libraries? I doubt it. And even if they did, would we really want to require applications to be rebuilt, on their developers’ release cycles, before their users could take advantage of improvements in system libraries? I think this concern also applies to complex free-software platforms like GNOME. (And platforms that target a broad user base do need to be complex, because the real world is complex.)

                                                              1. 16

                                                                I don’t think it’s a binary choice; in the case of Windows, most applications use the system’s kernel32.dll, user32.dll, and whatnot, but include other libraries like libwhatnot.dll in the application itself. It’s still “dynamically linked”, but ships its own libraries.

                                                                This is also something that the Linux version of Unreal Tournament does for example: it uses my system’s libc, but ships with (now-antiquated) versions of sdl.so and such, which is how I’m still able to run a game from 1999 on a modern Linux machine.

                                                                I think this kind of “hybrid approach” makes sense, and tries to get the best of both. I think it even makes sense for open source programs that distribute binary releases, especially for programs where it doesn’t really matter if you’re running the latest version (e.g. something like OpenTTD). I think this is also what systems like flatpak and such are doing (although I could be wrong, as I haven’t looked at it much).

                                                                1. 8

                                                                  My understanding was that the OP was arguing for a binary choice. I think @ddevault’s reply reinforces that. I actually agree with you about the benefits of a hybrid approach: dynamic linking for platform libraries, static linking for non-platform libraries.

                                                                  1. 1

                                                                    especially for programs where it doesn’t really matter if you’re running the latest version (e.g. something like OpenTTD)

                                                                    Looks like you never played the OpenTTD multiplayer, right? :)

                                                                    1. 1

                                                                      I didn’t even know there is a multiplayer, haha; I actually haven’t played it in years. It was just the first fairly well-known project that came to mind 😅

                                                                      1. 1

                                                                        So, to clarify: OpenTTD requires the same version on the client and the multiplayer server to participate in a game, and it’s pretty strict about that; you can’t even patch the game while retaining the same version number. The same goes for the list of installed NewGRFs (gameplay extensions/content), but at least those can be semi-automatically downloaded client-side before joining.

                                                                        1. 1

                                                                          Yeah, I assumed as much. I think the same applies to most online games. Still, I can keep using the same old version with my friends for 20 years if it’s distributed as described above, because I want to play it on Windows XP for example, or just because I like that version more (and there are many other applications of course; George R. R. Martin using WordStar 4.0 is a famous example).

                                                                  2. 3

                                                                  In the case where an ABI boundary exists between usermode libraries, a lot of the arguments Drew is making here go away. When that occurs, 100% of programs are going to need those dynamically linked libraries, so the benefits of code sharing start to become apparent. (It is true though that dynamically resolving functions is going to slow down program loading on that system compared to one where programs invoke syscalls by index and don’t need a dynamic loader.)

                                                                    That said, I think statically linking on Windows is going to offer higher compatibility than you’re suggesting. The syscall interface basically is stable, because any Win32 program can invoke it, so when it changes things break. The reason I’m maintaining my own statically linked C library is because doing so allows my code to run anywhere, and allows the code to behave identically regardless of which compiler is used to generate that code. I’m using static linking to improve compatibility.

                                                                  One other thing to note about Win32 is the commit usage of processes running across different versions of the OS. The result is huge disparities, where newer OSes use more memory within the process context. Just write a simple program that calls Sleep(INFINITE) and look at its memory usage. The program itself only needs memory for a stack, but it’s common enough to see multiple megabytes added by system DLLs. Those DLLs are initializing state in preparation for function calls that the program will never make, and the amount of that initialization is growing over time.

                                                                    1. 0

                                                                      especially when one or both are proprietary

                                                                      Proprietary software is bullshit and can be safely disregarded.

                                                                      Would Win32 applications that were written in the 90s still run (more or less) on Windows 10 if they had all statically linked all of the system libraries?

                                                                      If Win32 had a stable syscall ABI, then yes. Linux has this and ancient Linux binaries still run - but only if they were statically linked.

                                                                      would we really want to require applications to be rebuilt, on their developers’ release cycles

                                                                      Reminder that the only programs that matter are the ones for which we have access to the source code and can trivially rebuild them ourselves.

                                                                      And in any case, this can be turned around to work against you: do we really want applications to stop working because they dynamically linked to library v1, then library v2 ships, and the program breaks because the dev wasn’t around to patch their software? Software which works today, works tomorrow, and works the day after tomorrow is better than software which works today, is more efficient tomorrow, and breaks the day after tomorrow.

                                                                      1. 29

                                                                        Reminder that the only programs that matter are the ones for which we have access to the source code and can trivially rebuild them ourselves.

                                                                        I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

                                                                        If you want to argue “The only programs that respect your freedoms and don’t ultimately lead to the enslavement of their users are the ones for which we have access to the source code”, that’s totally reasonable and correct. By picking hyperbolic statements that are so easily seen to be so, you make yourself a lot more incendiary (and honestly sloppy-looking) than you need to be.

                                                                        And maybe coming off as a crank wins you customers, since there’s no such thing as bad press, but don’t be surprised when people point out that you’re being silly.

                                                                        1. 1

                                                                          I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

                                                                          And this is supposed to be evidence that proprietary programs matter and shouldn’t be disregarded? The context in discussion sites like this is that we can decide to change our programming practices for the programs that we have control over. The defining characteristic of proprietary software is that programmers do not have control, so discussion is irrelevant. Bring the production of baseband code into the public sphere and we can debate whether it should be using dynamic linking (I doubt it even does now).

                                                                          1. 1

                                                                            whoops I only meant to post one version of this comment…. my b

                                                                          2. 1

                                                                            I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

                                                                            So you would like to be able to dynamically link a binary with the microcontroller code in your microwave? Come on. If anything these examples reinforce the point that proprietary programs can be disregarded in discussions like this. I don’t think it’s hyperbolic or silly to say so.

                                                                          3. 19

                                                                            If Win32 had a stable syscall ABI, then yes. Linux has this and ancient Linux binaries still run - but only if they were statically linked.

                                                                    Except Windows and everyone else solved this at the dynamic linking level, and this goes far beyond just syscall staples like open/read to the entire ecosystem. Complex applications like games that use APIs for graphics and sound are far likelier to work on Windows and other platforms with stable whole-ecosystem ABIs. The reality is that real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

                                                                    That, and Linux (and Plan 9) are the aberration here, not the rule. Everyone else stopped doing this in the 90s if not earlier (SunOS added dynamic linking in the 80s and then, as Solaris, banned static libc in the early 2000s because of the compat issues it caused). FreeBSD and Mac OS technically allow it, but you’re on your own - when Mac OS changed a syscall or FreeBSD added inode64, the only broken applications were static Go binaries, not things linked against libc.

                                                                    That, and some OSes go to more extreme lengths to keep syscalls from being a stable ABI. Windows scrambles syscall numbers every release, OpenBSD forbids non-libc pages from making syscalls, and AIX makes you dynamically link to the kernel (because modules can add new syscalls at runtime and get renumbered).

                                                                            1. 4

                                                                              The reality is real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

                                                                              Or the 2000s. Getting Loki games like Alpha Centauri to run now is very hard.

                                                                              1. 1

                                                                                Complex applications like games that APis for graphics and sound are far likelier to work on Windows and other platforms with stable whole-ecosystem ABIs. The reality is real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

                                                                                There are half a dozen articles about WINE running and supporting old Windows programs better than Windows 10.

                                                                                Examples:

                                                                                “I have a few really old Windows programs from the Windows 95 era that I never ended up replacing. Nowadays, these are really hard to run on Windows 10.”.

                                                                                “Windows 10 does not include a Windows XP mode, but you can still use a virtual machine to do it yourself.”

                                                                        I specifically remember there being a shitshow when Windows 10 came out because many applications that run under Wine straight up didn’t work anymore on Windows.

                                                                                Try again.

                                                                                1. 8

                                                                          Sure, we can play this game of hearsay, but it’s hard to deny that if you have an application binary from 1993, Windows 10 will almost certainly be likelier to run it than almost any other OS would be - and it does so with dynamic linking.

                                                                                  Not to discredit Wine, they do a lot of great, thankless work. I’m more shocked the claim I’m replying to was made, since it seems like it was ignorant of the actual situation with Windows backwards compatibility to score a few points for their pet theory.

                                                                                  1. 2

                                                                                    I’m more shocked the claim I’m replying to was made, since it seems like it was ignorant of the actual situation with Windows backwards compatibility to score a few points for their pet theory.

                                                                            I’ve never been too interested in Windows as a platform; what I do know is that a whole pile of people in my social group and the social groups I listen to, who use old Windows programs frequently, were ridiculously frustrated that their programs no longer worked. And it became a case of “Windows programs I want to run are more likely to work on WINE than they are on Windows”.

                                                                                    Sure, that has since been mitigated, but that doesn’t change the fact that for a time, WINE did run programs better than Windows. I’m deeply hurt by the idea that you think it was made to score points.

                                                                                    1. 1

                                                                                      I’m deeply hurt by the idea that you think it was made to score points.

                                                                                      No, I referred to the parent of my initial comment.

                                                                                  2. 5

                                                                                    Would Wine have ever worked if all Windows programs were statically linked?

                                                                                    1. 5

                                                                                      Wine does take advantage of dynamic linking a lot (from subbing in Microsoft versions of a library to being able to sub in a Wine version in the first place)

                                                                                      1. 1

                                                                                        I think, yes. The more interesting question is, would Wine be easier to write if Windows programs were statically linked. My initial guess is yes, because you can ignore a lot of the system and just sub out the foundations. However, I do know that the Windows team did a lot of really, really abysmal things for the purpose of backwards compatibility, so who knows what kind of monstrosity wouldn’t run on static-Windows Wine simply because of that?

                                                                                        We’ll never know.

                                                                                        1. 1

                                                                                  How would you even write Wine if Windows programs were statically linked? As far as I know, Wine essentially implements the system DLLs and dynamically links them to each exe. Without that, Wine would have to implement the kernel ABI and somehow intercept syscalls from the exe. It can be done - that’s how gVisor works - but that sounds harder to me.

                                                                                          1. 1

                                                                                            I am very likely wrong (since they didn’t decide to go this route in the first place) but I feel that it might be easier to do that. The Kernel ABI is likely a much smaller surface to cover and you have much, much more data about usage and opportunities to figure out the behaviour of the call. As opposed to a function that’s only called a handful of times, kernel calls are likely called hundreds of times.

                                                                                    Of course, this doesn’t account for any programs that {do/rely on} some memory / process / etc. weirdness. Which I gather is probably a lot, given what Chen has put down in The Old New Thing.

                                                                                  3. 4

                                                                                    ancient Linux binaries still run - but only if they were statically linked

                                                                                    Or if you have a copy of the whole environment they ran in.

                                                                                    I guess that’s more common in the BSD world — people running FreeBSD 4.x jails on modern kernels.

                                                                                    1. -9

                                                                                      Proprietary software is bullshit and can be safely disregarded.

                                                                                      Ah yes, the words of someone who doesn’t use computers to do anything anyone would consider useful.

                                                                                      1. 12

                                                                                        I disagree with @ddevault’s position, but can we please not let the discussion degenerate this way? I do think the work he’s doing is useful, even if I don’t agree with his extreme stances.

                                                                                        1. -3

                                                                                          I don’t give leeway to people who are abusive.

                                                                                          1. 21

                                                                                            But responding with an obvious falsehood, in such a snarky tone, just causes tensions to rise. Or do you truly believe that nothing @ddevault does with computers is useful?

                                                                                            I think a more constructive response would be to point out that @ddevault is very lucky to be in a position where he can do useful work with computers without having to use proprietary software. Most people, and probably even most programmers (looking at the big picture), don’t have that privilege. And even some of us who could work full-time on free software choose not to, because we don’t all believe proprietary software is inherently bad. I count myself in the latter category; I even went looking for a job where I could work exclusively on free software, got an offer, and eventually turned it down because I decided I’m doing more good where I’m at (on the Windows accessibility team at Microsoft). So, I’m happy that @ddevault is able to do the work he loves while using and developing exclusively free software, but I wish he wouldn’t be so black-and-white about it. At the same time, I believe hyperbolic snark isn’t an appropriate response.

                                                                                            1. 12

                                                                                              Much of my career was spent writing “bullshit” software which can, apparently, be “disregarded”. This probably applies to most of us here. Being so disrespectful and dismissive of people’s entire careers and work is more than just “incorrect” IMHO.

                                                                                              I like the word “toxic” for this as it brings down the quality of the entire conversation; it’s toxic in the sense that it spreads. I don’t want to jump to mdszy’s defence here or anything, and I agree with your response, but OTOH … you know, maybe not phrase things in such a toxic way?

                                                                                            2. 3

                                                                                              If I had to add a tag to those comments I’d use ‘idealist’ and that’s not necessarily bad. What do you find abusive in his comments?

                                                                                              1. 2

                                                                                                Abuse is about a mixture of effect and intent, and it depends on the scenario and the types of harm that are caused to determine which of those are important.

                                                                                                I don’t think ddevault’s comment was abusive, because of the meaning behind it, and because no harm has been caused. I think the meaning of “Proprietary software is bullshit and can be safely disregarded” was, “I can’t interact or talk about proprietary software in a useful way, so I’ll disregard it”. The fact that it was said in an insulting form doesn’t make it a form of abuse, especially in context.

                                                                                                In context, software made proprietary, is itself harming people who are unable to pay for it, and in a deep way. It’s also harming the way we interact with computers, and stifling innovation and development severely. I don’t think insulting proprietary software, which is by far the most dominant form of software, and the method of software creation that is supported by the deeply, inherently abusive system known as “capitalism”, that constantly exploits and undermines free software efforts, can be meaningfully called abuse when you understand that context. And I think people who are so attached to working on proprietary software that they get deeply hurt by someone insulting it, should take a good long introspective period, and rethink their attachment to it and why they feel they need to protect that practice.

                                                                                                1. 1

                                                                                                  Proprietary does not mean that it costs money.

                                                                                                  1. 2

                                                                                                    Of course not, but monetarily free software that does not provide the source code is worse, because there’s literally no excuse for them not to provide it. They do not gain anything from not providing the source code, but still they choose to lock users into their program, they do not allow for inspection to ensure that there is no personal data being read from the system, or that the system is not altered in harmful ways. They do not allow people to learn from their efforts, or fix bugs in what will soon be an unmaintained trash heap. And they harm historical archival and recovery efforts immensely.

                                                                                      Every example of “monetarily free but proprietary software” that I can think of either does very, very dubious things (like I-Orbit’s software, which is now on most malware scanners’ lists), or is old and unmaintained, and the only reason people use it is because either they’re locked into it from their prior use, or because it is the only thing that does that task. Those people will experience the rug being pulled from under them after a year or two as it slowly stops working, and might never be able to access those files again. That is a form of abuse.

                                                                                                    1. 0

                                                                                                      This is absolutely not as much of a massive societal issue as you make it seem. Perhaps spend your time thinking about more important things.

                                                                                                      1. 1

                                                                                                        That’s a nice redirect you have there. Flawlessly executed too, I literally would not have noticed it if I did not have intimate experience with the way abusers get you off topic and redirect questions about their own actions towards other people.

                                                                                                        Anyway, I’ll bite.

                                                                                          I live with two grown adults, neither of whom touches computers except when they absolutely have to, and I have observed the mental strain they go through because programs they spent decades using, and had a very efficient workflow with, have stopped working. I also know dozens of other people who experience the same thing.

                                                                                          One of them literally starts crying when they have to do graphics work, which is part of their job as an artist, because there’s not enough time in the day for them to learn newer image editors, and because all of the newer options that actually do what they need are ridiculously intimidating, badly laid out, and work in unexpected ways, with no obvious remedy and conflicting advice from common help sources. True, this could (and should) be solved by therapy, but it’s foolish to disregard the part that proprietary software has to play in this. Maybe you just don’t live around people whose main job is not “using a computer”?

                                                                                                        I do not see what you have invested in proprietary software, such that you feel the need to call someone’s offhand insult against it, “abusive”.

                                                                                                        1. 1

                                                                                                          Kindly tell me more about how anyone who isn’t neurotypical has been welcomed with open arms into FOSS communities. I’ll wait.

                                                                                                          1. 2

                                                                                                            I myself am a neuro-atypical and queer software developer. Do you want to talk down to me some more?

                                                                                                            Again you are redirecting the question towards a different topic. The topic we were originally talking about is “Is insulting proprietary software abusive”, and now you want to talk about “Queer and Neuro-atypical acceptance in Free Software communities”.

                                                                                                            You still haven’t told me how insulting proprietary software is abusive. I’m still very interested in reading your justification for that.

                                                                                                            Just because the culture that’s grown around free software (and, to be honest, that free software has grown around) is very, very shitty doesn’t mean that non-free software is good, or something worthy of protection. The culture around free software is fundamentally one of sharing; that’s literally the core tenet. The culture around proprietary software is worse, since it’s literally only about gate-keeping; that’s the only foundation it has. Free software can be improved by changing the culture. There is nothing to change about proprietary software.

                                                                                                            It’s a real shame that many of the more prolific founders of free software were libertarians, but that is still a mistake that we can correct through social, cultural changes and awareness.

                                                                                                            Proprietary software is fundamentally an offshoot of Capitalism, and wouldn’t exist without that. It literally only exists under an abusive system, and supports it. The contributions of free software members are preyed upon by capitalist companies for gain, so that they can profit off the backs of those people without giving back.

                                                                                                            1. 1

                                                                                                              Fun fact: not once did I say that ddevault is abusive by saying proprietary software is shit. He’s just abusive. I’ve witnessed him be abusive to my friends by saying they’re awful people for using non-free software.

                                                                                                              Fuck capitalism, fuck ddevault.

                                                                                                              1. 1

                                                                                                                Fun fact: not once did I say that ddevault is abusive by saying proprietary software is shit. He’s just abusive. I’ve witnessed him be abusive to my friends by saying they’re awful people for using non-free software.

                                                                                                                Fuck capitalism, fuck ddevault.

                                                                                                                Ah! I didn’t pick up on that, sorry!

                                                                                                                1. 1

                                                                                                                  I apologize as well.

                                                                                                2. -1

                                                                                                    Labeling ddevault’s position as abusive is itself abusive, even if you think his position is wrong.

                                                                                                  1. 1

                                                                                                      I don’t think someone who genuinely believes that someone was being abusive, and calling that out, can themselves be called “abusive”. Abuse is about a mixture of effect and intent, and which of those matters more depends on the scenario and the types of harm that are caused.

                                                                                                    I don’t think ddevault’s comment was abusive, because of the meaning behind it, and because no harm has been caused. I think the meaning of “Proprietary software is bullshit and can be safely disregarded” was, “I can’t interact or talk about proprietary software in a useful way, so I’ll disregard it”. The fact that it was said in an insulting form doesn’t make it a form of abuse, especially in context.

                                                                                                      In context, software that is made proprietary is itself harming people who are unable to pay for it, and in a deep way. It’s also harming the way we interact with computers, and stifling innovation and development severely. I don’t think insulting proprietary software, which is supported by the deeply, inherently abusive system known as “capitalism”, which is by far the most dominant form of software, and which is the method of software creation that exploits and undermines free software efforts, can be meaningfully called abuse when you understand that context. And I think people who are so attached to working on proprietary software that they get deeply hurt by someone insulting it should take a good long introspective period and rethink their attachment to it and why they feel they need to protect that practice.

                                                                                                  2. -8

                                                                                                    Seems like you don’t deserve any leeway yourself, then.

                                                                                                    1. 0

                                                                                                      Okay.

                                                                                            3. 1

                                                                                              How many binaries from Windows 95 are useful today? I’m not sure that’s a strong argument.

                                                                                              Software that is useful will be maintained.

                                                                                              1. 3

                                                                                                This is a short-sighted argument. Obscure historic software has its merit, even if the majority of people won’t ever use it.

                                                                                            1. 23

                                                                                              You download a single binary executable. You run it. There’s no step three.

                                                                                                Step three is pairing your devices. Step four is to set up autostart.

                                                                                              SyncThing is great. No need to oversell it.

                                                                                              1. 13

                                                                                                Step 5: build and maintain a server for syncthing so that the risk of offline devices not catching all the changes is lowered. (Optional I guess).

                                                                                                1. 3

                                                                                                  And then there’s punching holes through networks where you don’t have a peer but want to sync from an Internet server

                                                                                                  1. 5

                                                                                                    It’s not a problem in Syncthing, there’s always a pool of public relays

                                                                                                    1. 2

                                                                                                      A-ha.

                                                                                                        So, how does that affect possible unintended 3rd-party storage of your backups?

                                                                                                      1. 3

                                                                                                        It should still be impossible, the relays don’t actually store any data, just act as introducers for your data nodes

                                                                                                        1. 2

                                                                                                          I may be misunderstanding, but don’t they still need to be active to relay the actual information?

                                                                                                          1. 2

                                                                                                              The information they relay is a TLS ciphertext stream, authenticated by the device IDs, which are actually SHA-256 fingerprints of the TLS certificates
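
                                                                                                              If it helps, here’s a toy Python illustration of what a device ID is derived from; the certificate path is made up, and if I remember right the real IDs also base32-encode the digest and add check digits, so treat this as a sketch rather than Syncthing’s actual format:

                                                                                                                  import hashlib

                                                                                                                  # Rough illustration only: a device-ID-like value is just a SHA-256
                                                                                                                  # fingerprint of the device's TLS certificate (path is hypothetical).
                                                                                                                  with open("device-cert.der", "rb") as f:
                                                                                                                      der_bytes = f.read()

                                                                                                                  fingerprint = hashlib.sha256(der_bytes).hexdigest()
                                                                                                                  print(fingerprint)  # the relay only ever sees ciphertext between devices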

                                                                                                            1. 2

                                                                                                              Ah, that’s much more reassuring. Thank you! :)

                                                                                                      2. 1

                                                                                                        That must have changed since I last used it. Thanks!

                                                                                                    2. 2

                                                                                                      which is only two commands away if you have Arch lying around:

                                                                                                      pacman -S syncthing-relaysrv
                                                                                                      systemctl enable --now syncthing-relaysrv
                                                                                                      
                                                                                                  1. 11

                                                                                                    £9/m for an SEO plug-in is a real eyebrow raiser

                                                                                                    1. 4

                                                                                                        I can’t help but feel that everything SEO nowadays is a whole bunch of smoke.

                                                                                                      Search engines are incredibly smart in detecting and ignoring specific SEO techniques. Genuine, relevant organic content is still the best SEO you can do.

                                                                                                      1. 3

                                                                                                        I don’t do any SEO, and there’s no requirement to do it for a personal blog. Still, if the added name recognition gets him an additional .1% in salary or lets him shorten a job search by one day, it will pay for itself.

                                                                                                        1. 3

                                                                                                            Yeah, it’s not cheap, that’s for sure. But it’s the plugin I get the most use from: not only for SEO, but for writing assessments too.

                                                                                                        1. 4

                                                                                                              I’ve never used E, although I’m rediscovering my childhood Amiga, so it’s on my todo list. But E’s author was a friend in the 90s from the Doom community, and I maintain one of his other projects: wadc, a functional programming environment for building DOOM maps.

                                                                                                          1. 13

                                                                                                            I think we should also consider the new user experience here. If you join a small instance:

                                                                                                            • there is very little activity
                                                                                                            • searches for hashtags return almost 0 results
                                                                                                            • it’s difficult to find interesting people via the federated timeline (because of its low volume)

                                                                                                                Relays can fix this issue, but they still require an instance with a high volume of traffic in order to be useful. Given this, it’s no surprise that users flock to mastodon.social; people are joining Mastodon to talk to people, not necessarily to further any FLOSS ideologies.

                                                                                                            1. 9

                                                                                                                  Given this, it’s no surprise that users flock to mastodon.social; people are joining Mastodon to talk to people, not necessarily to further any FLOSS ideologies.

                                                                                                                  Moreover, most non-techies would ask why one would have to choose an instance at all. While it is ideologically good to have federation to avoid centralization, and it makes it easier to find a stream of toots from like-minded people, for most folks having users spread across different instances will just be utterly confusing.

                                                                                                              (I don’t know the solution to this usability problem.)

                                                                                                              1. 10

                                                                                                                    Even as a techy I’ve struggled with this same problem. I’ve considered trying out Mastodon a few times, and whenever I get to the “choose a server” step they lose me. The line “One server will be hosting your account and part of your identity.” is scary. Can I migrate an identity or am I stuck? Why are an Italian server and a French server featured? Does that matter? Which server has people I’d be interested in? When you join Twitter you don’t get forced to decide your interests; you figure that out later. This is a huge barrier to entry.

                                                                                                                1. 5

                                                                                                                      I think you’re misunderstanding something, in that it’s not meant to be just a straight-up Twitter replacement that happens to have servers.

                                                                                                                      It’s like joining a forum: you’re picking a community to join, not just a server.

                                                                                                                  There’s more effort that goes into it, but that’s the point.

                                                                                                                  1. 10

                                                                                                                    But what if you want to change communities? What if you have no idea what you’re interested in and just want to see what’s out there? I’ve had the same problem with these distributed networks, I never feel I have enough information to effectively choose an instance, so I just choose the instance hosted by the project itself. If there isn’t one, I usually just don’t bother.

                                                                                                                    1. 4

                                                                                                                      Mastodon has built in tools for account migration. When you migrate, your follows and followers come with and your old account shows a link to your new one saying you migrated.

                                                                                                                      It’s really easy. Folks do it quite often.

                                                                                                                      1. 2

                                                                                                                        This needs to be emphasized better I think. Being able to exit your current mastodon host if you don’t like them for whatever reason is a big advantage of the fediverse over twitter, if only this was made clear to people.

                                                                                                                        1. 2

                                                                                                                          That’s fine unless your reason for changing communities is that the old one got sold to someone who is trying to extract maximum profit out of it (in which case they are going to turn that feature off early).

                                                                                                                          I’ve raised this with the mastodon folks (supporting multiple domains pointed at the same host would entirely mitigate this problem) - they agree it’s a problem, but not a priority.

                                                                                                                          I’d be happy to use another host as long as I owned the domain name.

                                                                                                                          1. 1

                                                                                                                            That’s fine unless your reason for changing communities is that the old one got sold to someone who is trying to extract maximum profit out of it

                                                                                                                            There is no precedent for this ever happening. If you’re so worried about that being the case, just be sure to join a small community in the first place with admins you trust not to do something like that.

                                                                                                                            1. 6

                                                                                                                              no precedent

                                                                                                                              That’s fair. It’s previously only happened to email providers, blog hosts, wikis, forums, subreddits and newsgroups. I’m sure mastodon hosts will never be affected.

                                                                                                                              1. 1

                                                                                                                                You say this, but you clearly have never interacted with a large majority of the Mastodon userbase, who would sooner chop off their own right leg than sell out.

                                                                                                                                It’s crazy what happens when folks are committed to their values, as much of the Mastodon community certainly is.

                                                                                                                                1. 3

                                                                                                                                  It doesn’t matter what the userbase wants; what matters is what the owners of the domain names want.

                                                                                                                                  Perhaps these ones have better principles than those who came before, perhaps not. It’s ultimately irrelevant because people and principles change given time.

                                                                                                                                  Plenty of gmail accounts are 15 years old. “Don’t be evil” was coined at google 20 years ago. Lots of things that make sense on a month-to-month scale break down when you think about decades.

                                                                                                                                  1. 1

                                                                                                                                    Yes it does? If the users of a community instance (which, again, ideally won’t be too huge) all leave then what does it matter what the admin wants?

                                                                                                                                    This sort of thing has happened quite a bit. Users of an instance get angry at the decisions of an admin so they ditch.

                                                                                                                                    1. 3

                                                                                                                                      If the users of a community instance (which, again, ideally won’t be too huge) all leave then what does it matter what the admin wants?

                                                                                                                                      Mutually assured destruction is an option, but it’s not a desirable outcome. Much that is good gets destroyed in the process - starting with the social fabric which is the reason for the whole thing existing.

                                                                                                                      2. 8

                                                                                                                        It’s like joining a forum: you’re picking a community to join, not just a server.

                                                                                                                        I disagree, and I think that this has been a big mistake. Most people don’t have special identities for communities, and even if they do, there’s no point for these people to all be on one instance. After all, if federated, there should be no practical difference (except maybe for technical details such as speed) in which server anyone is on.

                                                                                                                        Instances should be more transparent than they are now. The only relevant things are how well the administration team can manage the server, and how much you trust them.

                                                                                                                    2. 13

                                                                                                                      I tried to join Mastodon.social a few weeks ago; someone linked one of their messages (or “toot”, if you will) here and I felt I had something useful to add.

                                                                                                                      I failed. I signed up on Mastodon.social (I think? the entire sign-up process was confusing) and could kind-of-but-not-quite log in, but I couldn’t really reply to the message. I tried to figure it out for 15 minutes and eventually just gave up, as I don’t really care enough to try more.

                                                                                                                      If you really care about decentralisation, I think you need to think very long and hard not so much about the technology of it, but about how to pull it off while remaining usable. I think some sense of pragmatism instead of a “perfectly decentral” solution would help as well.

                                                                                                                      1. -2

                                                                                                                        I’ve never had these kinds of issues, nor has anyone ever expressed to me that they’ve had them, and I’ve run a couple of instances.

                                                                                                                        I think this rests solely as a “you problem” if I’m entirely honest.

                                                                                                                        1. 3

                                                                                                                          To build on mdszy’s reply, some think of “a random person can’t just join the network to post an off-the-cuff reply”[0] as a feature rather than a bug, since it limits drive-by disruptions and eases the moderation burden.

                                                                                                                          Anyone who wants to build a community instead of selling growth numbers to VCs needs sign-up friction if they’re going to filter for other people invested in the community.

                                                                                                                          [0] I assume that arp242 had a well-reasoned and thought-out reply but that would be the exception.

                                                                                                                          1. 4

                                                                                                                            I guess it depends on what your goals are; if you want to create a relatively small high-quality community then having a higher barrier to signups is probably a good thing – this is what Lobsters does with the invite system.

                                                                                                                            But if you want to make inroads in creating a more decentralised internet, and I believe this is what many want with Mastodon, then it’s important – vital even – to make sure the experience is as frictionless as possible.

                                                                                                                            Perhaps the nice thing about Mastodon is that as I understand it you can do both, at least in principle.

                                                                                                                            1. 2

                                                                                                                              Agreed. Decentralization may depend more on social/interpersonal solutions than technical/UX ones, though.

                                                                                                                              Early adopters can handle a few hurdles to joining. We may need an easy invite system to reach the early majority, one where the early adopters make it super-simple to get started by pre-filling information for them. Early Gmail invites had you choose the username for the invitee, for example, and goodness knows Discord has made that easy.

                                                                                                                              Maybe we also need some research into ideal userbase numbers for instances with 1-3 moderators, so that we can figure out when and where proactive new server creation would help (instead of waiting until server splits are forced by discord/burnout).

                                                                                                                            2. 1

                                                                                                                              some think of “a random person can’t just join the network to post an off-the-cuff reply”[0] as a feature rather than a bug, since it limits drive-by disruptions and eases the moderation burden.

                                                                                                                              I most certainly agree. Thanks for this.

                                                                                                                              Also to add, many instances use signups by approval only, where once you sign up, a moderator has to approve your application. This is basically the best solution we have to the spam problem - since I often get applications from literal spam accounts, it’s clear that it’s working.

                                                                                                                              But yes, part of the benefit of the friction is kind of requiring folks to actually care a little bit rather than just see something, make an account, comment on it and never look again.

                                                                                                                      2. 2

                                                                                                                        I have exactly this problem. I’m considering server hopping yet again.

                                                                                                                      1. 1

                                                                                                                        Strangely all hyperlinks on that article are underlined for me in mobile Safari

                                                                                                                        1. 1

                                                                                                                          I still wanted a way of highlighting links within my posts while injecting some colour. So I decided to add a fun animation that swipes a background colour behind my links when they are hovered over.

                                                                                                                          Technically, my links are still underlined, but I really like this design so I’m going to stick with it. 🙂

                                                                                                                          1. 2

                                                                                                                            This reminds me that I only recently learned of the styling restrictions on the CSS :visited pseudo-class. My blog used forbidden styles (italic visited links) from before the restriction was imposed; they were broken for a long time and I only noticed in the last month or so.

                                                                                                                        1. 3

                                                                                                                          I’ll be honest, I get the idea, but personally I am way more conservative (is that the right word for this?) in building my blog. I do not get why this needs to be so convoluted, with dependencies on node, etc., but then again, I am a UNIX madman. Anything can be easy on the eyes with your own choice of 10 or so lines of CSS.

                                                                                                                          On my blog everything is done by a shell script, again static, and also I manage to have a gopher site and a gopher atom feed.

                                                                                                                          I would hate on your page if it did not render in glinks, but as it does… whatever, it’s a blog - if it works and is readable on ereaders then it’s fine by me.

                                                                                                                          1. 3

                                                                                                                            On my blog everything is done by a shell script, again static, and also I manage to have a gopher site and a gopher atom feed.

                                                                                                                            I had a look at the script – it seems awfully like it would drive on and potentially upload a partial or corrupt version of the blog in the face of the failure of any of the commands that it runs?

                                                                                                                            1. 2

                                                                                                                              That could apply to any code, and that is something you accept and assume. That is why you have backups.

                                                                                                                              1. 2

                                                                                                                                I guess? You might want to investigate the errexit and pipefail shell options, is all.

                                                                                                                                1. 2

                                                                                                                                  Pipefail does not exist in dash(1). I take responsibility for my choices, and all the problems I’ve had with this setup resulted from my own laziness in not reading the specifications of the standards. Other than that, over the past 2 years it has been a trusty and reliable setup that works both on http and on gopher.

                                                                                                                                  Anyways, I’ll set -e as that does not hurt :).

                                                                                                                                  1. 4

                                                                                                                                    Sure! I would venture, though, that this is one possible answer to why people might prefer a more complex body of software; e.g., if that software checks for and handles errors in all of the operations it performs which can fail, is structured to ensure the system only moves from one well-defined state to another, etc.

                                                                                                                                    1. 2

                                                                                                                                      So why hamstring yourself with dash? To guarantee compatibility with a random /bin/sh should you ever port to a long-dead platform? (Except Solaris, of course, where /bin/sh wasn’t POSIX-compatible anyway.) Bash has been a de facto standard and available almost everywhere for decades.

                                                                                                                                      1. 1

                                                                                                                                        It allows for the most compatibility in terms of code: it would run on ksh (and I use OpenBSD a lot in my setups), on zsh, and on bash.

                                                                                                                                        1. 1

                                                                                                                                          It sounds like your answer is “because I like the challenge” which is perfectly fine, just be aware to yourself that it’s why you’re doing it.

                                                                                                                              2. 2

                                                                                                                                It’s like we are from different planets. I run a little static blog myself, and I understood barely half of this. I’m from a completely different culture.

                                                                                                                                1. 4

                                                                                                                                  The original post gave me the same feeling. To me, that site looks like it’s written in very simple, plain HTML, without any interactivity or complex styling that would make it difficult or even onerous to write by hand.

                                                                                                                                  It is, in fact, an artifact of an alien culture, produced by intricate machinery that I’ve never heard of.

                                                                                                                                  1. 2

                                                                                                                                    What do you mean, in reference to what?

                                                                                                                                  2. 2

                                                                                                                                    I would do the same if it wasn’t for the math I want rendered statically. I use KaTeX for this (and remark to parse the markdown). I’m not a fan of node.js and javascript in general, but it’s fine for this purpose, I guess.

                                                                                                                                    1. 1

                                                                                                                                      Wouldn’t it be better to use HLatex, which would then have some sign character, and just connect the output of two preprocessors into one nice single solution? (I have no experience with HLatex though; I was just looking at how it would be done.)

                                                                                                                                      1. 2

                                                                                                                                        I’m not familiar with HLatex and the only things I can find are packages to generate LaTeX files in Haskell, and to use Hangul (the Korean alphabet) in LaTeX. Neither are what I want.

                                                                                                                                        What you’re describing (using a sign character and merge the output of two tools) is basically what I’m doing: https://github.com/rubenvannieuwpoort/static-site-generator/blob/master/scripts/format-blogpost.js#L71

                                                                                                                                        (Not the most easy-to-read code, but the idea is that I use a regex to extract inline and display math into a separate array, and replace them with a signal character in the text. Then I process the array and the text separately and replace the signal character with the entries of the array.)
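
                                                                                                                                        If it’s easier to read than the JS, here’s the same placeholder idea sketched in Python; the sentinel and the regex are chosen arbitrarily for the sketch, it’s not a translation of my actual code:

                                                                                                                                            import re

                                                                                                                                            # Pull out $...$ and $$...$$ spans, replace each with a sentinel, then put
                                                                                                                                            # the rendered math back in the same order later.
                                                                                                                                            SENTINEL = "\x00MATH\x00"  # assumed to never occur in the source text
                                                                                                                                            MATH_RE = re.compile(r"\$\$.*?\$\$|\$[^$\n]+\$", re.DOTALL)

                                                                                                                                            def split_math(text):
                                                                                                                                                """Return (text_with_sentinels, list_of_math_spans)."""
                                                                                                                                                spans = []
                                                                                                                                                def stash(match):
                                                                                                                                                    spans.append(match.group(0))
                                                                                                                                                    return SENTINEL
                                                                                                                                                return MATH_RE.sub(stash, text), spans

                                                                                                                                            def join_math(text, rendered_spans):
                                                                                                                                                """Replace each sentinel, in order, with the matching rendered span."""
                                                                                                                                                parts = text.split(SENTINEL)
                                                                                                                                                out = [parts[0]]
                                                                                                                                                for rendered, part in zip(rendered_spans, parts[1:]):
                                                                                                                                                    out.extend([rendered, part])
                                                                                                                                                return "".join(out)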

                                                                                                                                    2. 1

                                                                                                                                          Yeah, I understand; nothing wrong with a shell script if it does its job. I did think about doing something similar, not that barebones though, a generator like Zola maybe. That would produce an even smaller site, but I’d still need to figure out where and how to deploy it, CI/CD, etc. This was basically the easiest thing I could do considering I do React for a living.

                                                                                                                                    1. 0

                                                                                                                                      It’d be nice to have a little more clue what Hermes is before clicking.

                                                                                                                                      1. 1

                                                                                                                                        I will make sure to include more info next time I post something.

                                                                                                                                      1. 2

                                                                                                                                          That package caching trick is super fragile. It will break on any package that has postinst scripts, causes dpkg triggers to fire, and so on. It would be safer to cache the .debs and install them every time, although slower.

                                                                                                                                        1. 1

                                                                                                                                            It’s a shame GitHub Actions / Azure Pipelines (they’re the same, with marginally different YAML configs) does not provide a way of snapshotting the CI VM. I have a small project where installing the dependencies is around 75% of total CI time. I’d love to snapshot the VM (or, ideally, just the filesystem used for builds) at that point and then start from the snapshot on the next run. Or test incremental builds by snapshotting after every successful build, then applying the patch to a copy of that snapshot and building from there.

                                                                                                                                          I’m aware of one company that built an in-house FreeBSD CI system using ZFS snapshots like this. Roughly, each successful build was snapshotted with the git hash as the ZFS snapshot name. The trigger for the new build walked up the history until it found a snapshot that existed in the history of the PR, then cloned that snapshot and updated the source tree to the current commit. They had a clean build that took a couple of hours but normally hit an incremental build time of under a minute. The CI machines could then spend most of their time running tests, not rebuilding the same files. I believe they also had a nightly job that did a clean build and fixed occasional issues that came out of this, but didn’t require a complete rebuild on every PR.
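
                                                                                                                                            Roughly how I imagine the lookup part working, as a loose Python sketch; the dataset name, the clone naming, and the exact zfs/git invocations here are my guesses, not their actual tooling:

                                                                                                                                                import subprocess

                                                                                                                                                POOL = "tank/ci/worktree"  # hypothetical ZFS dataset holding build trees

                                                                                                                                                def run(*args):
                                                                                                                                                    # Run a command (inside a checkout of the repo) and return stdout.
                                                                                                                                                    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

                                                                                                                                                def existing_snapshots():
                                                                                                                                                    out = run("zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-d", "1", POOL)
                                                                                                                                                    return {line.split("@", 1)[1] for line in out.splitlines()}

                                                                                                                                                def base_snapshot(pr_head):
                                                                                                                                                    # Walk the PR's history, newest first, until we hit a commit we built.
                                                                                                                                                    snaps = existing_snapshots()
                                                                                                                                                    for commit in run("git", "rev-list", pr_head).splitlines():
                                                                                                                                                        if commit in snaps:
                                                                                                                                                            return commit
                                                                                                                                                    return None

                                                                                                                                                def prepare_build(pr_head, clone_dataset):
                                                                                                                                                    base = base_snapshot(pr_head)
                                                                                                                                                    if base is not None:
                                                                                                                                                        # Start from the cached build tree instead of a clean checkout.
                                                                                                                                                        run("zfs", "clone", f"{POOL}@{base}", clone_dataset)
                                                                                                                                                    # ...check out pr_head in the (cloned or fresh) tree, build, and on
                                                                                                                                                    # success snapshot the result as POOL@<pr_head> for future runs.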

                                                                                                                                          For accelerating the build, a few big companies have in-house build systems that use cloud storage to cache all of the intermediate build results. I’d love to see something like this integrated into CMake, automatically caching every build result and skipping rebuilds of everything whose sources hadn’t changed from the last generated result.

                                                                                                                                        1. 32

                                                                                                                                          This is a good read and a good point.

                                                                                                                                          It’s also the first time I’ve encountered the phrase “start a localhost” and it made me cringe.

                                                                                                                                          1. 2

                                                                                                                                            I use Firefox on my iPhone as my “default” browser although it’s not the Default, and this is less of a problem than it would be on a desktop. Apps normally try to display web pages in an inline browser anyway, with a button to launch it in safari. This is right next to the “share” button which has Firefox behind it, so it’s two short taps instead of one.

                                                                                                                                            1. 3

                                                                                                                                              I wrote a cron job that fetches RSS feeds and pipes new items into a folder in my emails (a rough sketch follows the lists below).

                                                                                                                                              Advantages:

                                                                                                                                              • Most mail clients (well, the ones I use) support basic styling, HTML & images
                                                                                                                                              • Search is already implemented (by the mail host)
                                                                                                                                              • Read / unread tracking is already implemented, and syncs across devices
                                                                                                                                              • Clients can be configured to prefetch attachments, so you can read offline and sync up the read state afterwards.
                                                                                                                                              • The fetch script can work on things that aren’t RSS via chromedriver

                                                                                                                                              Disadvantages:

                                                                                                                                              • Getting attachments to display inline on a variety of clients took too much work.
                                                                                                                                              • It’s kind of a hack
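
                                                                                                                                              A stripped-down sketch of the idea; this is not my actual script: it assumes the third-party feedparser package and a local MTA, and the feed list, addresses, and state file are placeholders just to show the shape:

                                                                                                                                                  #!/usr/bin/env python3
                                                                                                                                                  import json
                                                                                                                                                  import pathlib
                                                                                                                                                  import smtplib
                                                                                                                                                  from email.message import EmailMessage

                                                                                                                                                  import feedparser  # assumed third-party dependency

                                                                                                                                                  FEEDS = ["https://example.com/feed.xml"]  # placeholder feed list
                                                                                                                                                  TO = "rss@example.com"                    # folder routing happens server-side
                                                                                                                                                  SEEN = pathlib.Path("~/.cache/rss-seen.json").expanduser()

                                                                                                                                                  seen = set(json.loads(SEEN.read_text())) if SEEN.exists() else set()

                                                                                                                                                  with smtplib.SMTP("localhost") as smtp:
                                                                                                                                                      for url in FEEDS:
                                                                                                                                                          for entry in feedparser.parse(url).entries:
                                                                                                                                                              uid = entry.get("id", entry.link)
                                                                                                                                                              if uid in seen:
                                                                                                                                                                  continue
                                                                                                                                                              msg = EmailMessage()
                                                                                                                                                              msg["From"] = "rss-bot@example.com"
                                                                                                                                                              msg["To"] = TO
                                                                                                                                                              msg["Subject"] = entry.title
                                                                                                                                                              msg.set_content(entry.link)  # plain-text fallback
                                                                                                                                                              msg.add_alternative(entry.get("summary", entry.link), subtype="html")
                                                                                                                                                              smtp.send_message(msg)
                                                                                                                                                              seen.add(uid)

                                                                                                                                                  SEEN.parent.mkdir(parents=True, exist_ok=True)
                                                                                                                                                  SEEN.write_text(json.dumps(sorted(seen)))
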
                                                                                                                                              1. 3

                                                                                                                                                I use Newsboat as a backend for fetching RSS items.

                                                                                                                                                I wrote Newsboat-Sendmail which taps into the Newsboat cache to send emails to a dedicated email address.

                                                                                                                                                To make sure the IDs of the emails’ subjects are kept whenever the server asks me to wait before sending more emails, I wrote Sendmail-TryQueue. It saves emails that could not be sent on disk (readable EML format, plus shell script for the exact sendmail command that was used).

                                                                                                                                                Finally I use Alot to manage the notifications/items.

                                                                                                                                                1. 2

                                                                                                                                                  …so basically Thunderbird.

                                                                                                                                                  1. 1

                                                                                                                                                    Thunderbird is one client.

                                                                                                                                                    I can also use it via the fastmail web ui, or my phone.

                                                                                                                                                    Lastly, the chromedriver integration means I get full articles with images, instead of snippets.

                                                                                                                                                    1. 1

                                                                                                                                                      Ah, I think I misunderstood its features and your workflow. And now I’m curious. How does the non-RSS bit work? Do you customize & redeploy when adding new sources? In other words, how easy or hard is it to generalize extracting the useful bits, especially in today’s world of “CSS-in-JS” where sane (as in human-friendly) class names go away?

                                                                                                                                                      1. 1

                                                                                                                                                        So, the current incarnation has several builtins, each wrapping a simpler primitive:

                                                                                                                                                        • The simplest is just ‘specify a feed url and it’ll grab the content from the feed and mail it to you’.
                                                                                                                                                        • The next simplest-but-useful is ‘specify a feed url and it’ll grab the link from the feed, fetch the link, parse it as html, extract all content matching a css selector, inline any images, and mail it to you’. This works well for e.g. webcomics (a rough sketch of this level follows the list).
                                                                                                                                                        • The third level replaces ‘fetch the link’ with ‘fire up chrome to fetch the link’ but is otherwise similar.
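
                                                                                                                                                        A rough sketch of what that second level looks like; this is not the real code: requests, feedparser, and bs4 are assumed stand-ins, the addresses and selector are placeholders, and image inlining is left out for brevity:

                                                                                                                                                            import smtplib
                                                                                                                                                            from email.message import EmailMessage

                                                                                                                                                            import bs4          # assumed third-party dependencies
                                                                                                                                                            import feedparser
                                                                                                                                                            import requests

                                                                                                                                                            def mail_selected(feed_url, css_selector, to_addr):
                                                                                                                                                                for entry in feedparser.parse(feed_url).entries:
                                                                                                                                                                    page = requests.get(entry.link, timeout=30)
                                                                                                                                                                    soup = bs4.BeautifulSoup(page.text, "html.parser")
                                                                                                                                                                    # Keep only the parts of the page matching the CSS selector.
                                                                                                                                                                    fragment = "".join(str(node) for node in soup.select(css_selector))

                                                                                                                                                                    msg = EmailMessage()
                                                                                                                                                                    msg["From"] = "feeds@example.com"
                                                                                                                                                                    msg["To"] = to_addr
                                                                                                                                                                    msg["Subject"] = entry.title
                                                                                                                                                                    msg.set_content(entry.link)  # plain-text fallback
                                                                                                                                                                    msg.add_alternative(fragment or page.text, subtype="html")
                                                                                                                                                                    with smtplib.SMTP("localhost") as smtp:
                                                                                                                                                                        smtp.send_message(msg)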

                                                                                                                                                        My planned-future changes:

                                                                                                                                                        • Use chromedriver but specify the window size and content coordinates; this should work around css-in-js issues by looking for boxes of approximately the right size / position in the document. I’m not currently following any feeds that need this, though.
                                                                                                                                                        • Store values and look for changes. I plan to use this to (eg) monitor price changes on shopping sites.
                                                                                                                                                  2. 2

                                                                                                                                                    haha, I like this one. You’ve turned RSS into newsletters!

                                                                                                                                                    1. 1

                                                                                                                                                      Mailchimp sells this as a feature.

                                                                                                                                                    2. 1

                                                                                                                                                      I use rss2email which basically does the same thing.

                                                                                                                                                      1. 1

                                                                                                                                                        I wrote an RSS reader which is meant for cron jobs, and which is, btw, the reader I use.

                                                                                                                                                        https://gitlab.com/dacav/crossbow

                                                                                                                                                        Version 0.9.0 is usable. Soon I plan to release version 1.0.0.

                                                                                                                                                      1. 3

                                                                                                                                                        Great read. I couldn’t adopt this folder structure though. I’m too wedded to having year folders (2020, 2019, etc.) at the root.