1. 2

    Very cool! My editor of choice was WadAuthor, and then Doom Builder:

    https://doomwiki.org/wiki/File:WadAuthor_screenshot.png

    http://www.doombuilder.com/images/db2_screenshot1.png

    1. 14

      A personal favourite is that Linux to this day maintains in-kernel support for AX.25, a completely different mode of networking from TCP/IP, primarily for amateur radio. Once you create an interface with a KISS-enabled TNC (modem) and external radio you can write ordinary C networking software (disclosure: my blog) - except with AF_AX25 and SOCK_SEQPACKET to establish connections.

      You know it’s a real socket too because you can look at it with netstat!

      $ netstat -A ax25
      Active AX.25 sockets
      Dest       Source     Device  State        Vr/Vs    Send-Q  Recv-Q
      *          VK7NTK-1   ax0     LISTENING    000/000  0       0     
      

      Or if you like, you can assign an IP address to the interface and Linux will do TCP/IP over connectionless AX.25 frames. (Note: AX.25 is hardly unique in being an old technology that still exists in Linux, but it’s fun that a handful of people still use it.)
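For the curious, the same connected-mode socket can be sketched from Python, since Linux exposes the address family there too. This is only a sketch: it assumes a Linux kernel with the ax25 module loaded (and a configured interface such as ax0 for any real use); without one, socket creation simply fails.

```python
import socket

# Linux exposes amateur radio AX.25 as an ordinary address family.
# AF_AX25 + SOCK_SEQPACKET is the connected-mode service, the Python
# analogue of C's socket(AF_AX25, SOCK_SEQPACKET, 0). Actually binding
# to a callsign (e.g. VK7NTK-1) would need a configured AX.25
# interface, which this sketch does not assume.
try:
    s = socket.socket(socket.AF_AX25, socket.SOCK_SEQPACKET)
    print("AX.25 connected-mode socket created")
    s.close()
except OSError as exc:
    # EAFNOSUPPORT if the ax25 kernel module isn't available
    print("AX.25 unavailable:", exc)
```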

      1. 3

        A similar situation is the CAN bus - https://www.elinux.org/CAN_Bus. It’s used for a bunch of things, but primarily shows up in cars.

      1. 4

        Doom Emacs (https://github.com/hlissner/doom-emacs) works well for me currently. What I like (compared to other Emacs distros) is that it isolates user configuration to three simple files: config.el (where you typically insert your setqs), packages.el (where you can easily add 3rd party modes that aren’t packaged yet by Doom) and init.el (with Doom-specific configuration). I just need to store those three files in a git repo and I can regenerate my Emacs configuration from scratch if something goes wrong.

        Unfortunately, things often go wrong in these Emacs distros :/ but at least Doom has facilities for rescuing things.

        1. 5

          Another happy Doom-er here (ok doomer?)

        1. 18

          If hosting a NAS at a friend’s house isn’t an option, Backblaze B2 might be a nice alternative ($5/1TB/m *)

          1. 8

            +1 for Backblaze B2. I use it with Restic and it works great.

            1. 7

              I use B2 and have been happy with it. I use rclone to interact with it.

              1. 3

                rclone seems neat. Thanks for the pointer!

              2. 3

                I’d recommend using fiber inside your house for faster backups.

                1. 2

                  The only issue I found with Backblaze is that it requires your phone number upon registration.

                  This is a serious turn-off.

                  1. 3

                    Why is this a problem? Genuine naive question.

                    1. 6

                      because they don’t need it

                      1. 2

                        Not OP but for me:

                        • I don’t give my number out to anyone except people I know
                        • I have a Google Voice number, which a lot of companies flag as “not a real number” for some reason
                        1. 1

                          Spam, data collection, identification

                        2. 1

                          Ouch, didn’t know that. That’s indeed a negative, especially when you have to link payment details anyway.

                        3. 2

                          Backblaze B2 + Duplicati backs up all the non-application files on my laptop for ~$0.10/month. It’s been a few months and I don’t even think they’ve billed me yet since the monthly bill is so small.

                          1. 2

                            @timvisee after reading your and other recommendations, I’m trying out BackBlaze. I have a lot of photos, around 60,000. I’m currently backing them all up individually (as in, I just point to the folder and sync the folder). Should I be creating tars of them and backing those up? Thanks!

                            1. 2

                              Tarring would probably be faster, yes, since it avoids lots of expensive per-file operations.
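A minimal sketch of that idea with Python’s stdlib tarfile module, using a throwaway directory (all paths and filenames here are made up for illustration):

```python
import os
import tarfile
import tempfile

# Bundle a directory of photos into one tar archive before uploading,
# so the backup is a single large object rather than tens of
# thousands of tiny per-file uploads.
def bundle(src_dir: str, archive_path: str) -> None:
    with tarfile.open(archive_path, "w") as tar:
        # arcname keeps archive paths relative to the source directory
        tar.add(src_dir, arcname=os.path.basename(src_dir))

# Tiny self-contained demo with a temporary directory:
with tempfile.TemporaryDirectory() as d:
    photos = os.path.join(d, "photos")
    os.mkdir(photos)
    with open(os.path.join(photos, "img_0001.jpg"), "wb") as fh:
        fh.write(b"\xff\xd8")  # fake JPEG header bytes
    archive = os.path.join(d, "photos.tar")
    bundle(photos, archive)
    with tarfile.open(archive) as tar:
        names = tar.getnames()

print(names)  # ['photos', 'photos/img_0001.jpg']
```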

                          1. 33

                            My position has essentially boiled down to “YAML is the worst config file format, except for all the other ones.”

                            It gets pretty bad if your documents are large or if you need to collaborate (it’s possible to have a pretty good understanding of parts of YAML but that’s not always going to line up with what your collaborators understand).

                            I keep wanting to say something along the lines of “oh, YAML is fine as long as you stick to a reasonable subset of it and avoid confusing constructs,” but I strongly believe that memory-unsafe languages like C/C++ should be abandoned for the same reason.

                            JSON is unusable (no comments, easy to make mistakes) as a config file format. XML is incredibly annoying to read or write. TOML is much more complex than it appears… I wonder if the situation will improve at any point.
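The no-comments point is easy to demonstrate with Python’s stdlib json module; strict JSON parsers reject even a harmless comment line:

```python
import json

config_text = """{
    // database settings (not legal JSON)
    "host": "localhost",
    "port": 5432
}"""

parse_failed = False
try:
    json.loads(config_text)
except json.JSONDecodeError as exc:
    parse_failed = True
    print("rejected:", exc.msg)

# A comment anywhere makes the whole config unparseable.
assert parse_failed
```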

                            1. 23

                              I think TOML is better than YAML. Sure, it has the complex date stuff, but that has never caused big surprises for me (just small annoyances). The article seems to focus mostly on how TOML is not Python, which it indeed is not.

                              1. 14

                                It’s syntactically noisy.

                                Human language is also syntactically noisy. It evolved that way for a reason: you can still recover the meaning even if some of the message was lost to inattention.

                                 I have mixed feelings about TOML’s tables syntax. I would rather have explicit delimiters like curly braces. But, if the goal is to keep INI-like syntax, then it’s probably the best thing to do. The thing I find really annoying is inline tables.

                                 As for user-typed values, I came to the conclusion that everything that isn’t an array or a hash should just be treated as a string. If you take user input, you cannot just assume that the type is correct and need to check or convert it anyway, so why even bother having different types at the format level?

                                Regardless, my experience with TOML has been better than with alternatives, despite its flaws.

                                1. 6

                                  Human language is also syntactically noisy. It evolved that way for a reason: you can still recover the meaning even if some of the message was lost to inattention.

                                  I have mixed feelings about TOML’s tables syntax. I would rather have explicit delimiters like curly braces. But, if the goal is to keep INI-like syntax, then it’s probably the best thing to do. The thing I find really annoying is inline tables.

                                  It’s funny how the exact same ideas made me make the opposite decision. I came to the conclusion that “the pain has to be felt somewhere” and that the config files are not the worst place to feel it.

                                  I have mostly given up on different config formats and just default to one of the following three options:

                                  1. Write .ini or Java properties-file style config-files when I don’t need more.
                                  2. Write a dtd and XML when I need tree or dependency-like structures.
                                  3. Store the configuration in a few tables inside an RDBMS and drop an .ini-style config file with just connection settings and the name of the config tables when things get complex.

                                  As for user-typed values, I came to the conclusion that everything that isn’t an array or a hash should just be treated as a string. If you take user input, you cannot just assume that the type is correct and need to check or convert it anyway, so why even bother having different types at the format level?

                                  I fully agree with this as well.

                                2. 23

                                  Dhall is looking really good! Some highlights from the website:

                                  • Dhall is a programmable configuration language that you can think of as: JSON + functions + types + imports
                                  • You can also automatically remove all indirection in any Dhall code, converting the file to a logic-free normal form for non-programmers to understand.
                                  • We take language security seriously so that your Dhall programs never fail, hang, crash, leak secrets, or compromise your system.
                                  • The language aims to support safely importing and evaluating untrusted Dhall code, even code authored by malicious users.
                                  • You can convert both ways between Dhall and JSON/YAML or read Dhall configuration files directly into a language that supports a native language binding.
                                  1. 8

                                    I don’t think the tooling should be underestimated, too. The dhall executable includes low-level plumbing tools (individual type checking, importing, normalization), a REPL, a code formatter, a code linter to help with language upgrades, and there’s full blown LSP integration. I enjoy writing Dhall so much that for new projects I’m taking a more traditional split between a core “engine”, and then pushing out the logic into Dhall - then compiling it at a load time into something the engine can work with. The last piece of the puzzle to me is probably bidirectional type inference.

                                    1. 2

                                      That looks beautiful! Can’t wait to give it a go on some future projects.

                                      1. 2

                                        Although the feature set is extensive, is it really necessary to have such complex functionality in a configuration language?

                                        1. 4

                                          It’s worth understanding what the complexity is. The abbreviated feature set is:

                                          • Static types
                                          • First class importing
                                          • Function abstraction

                                          Once I view it in this light, I find it easier to convince myself that these are necessary features.

                                          • Static types enforce a schema on configuration files. There is almost always a schema on configuration, as something is ultimately trying to pull information out of it. Having this schema reified into types means that other tooling can make use of the schema - e.g., the VS Code LSP can give me feedback as I edit configuration files to make sure they are valid. I can also do validation in my CI to make sure my config is actually going to be accepted at runtime. This is all a win.

                                          • Importing means that I’m not restricted to a single file. This gives me the advantage of being able to separate a configuration file into smaller files, which can help decompose a problem. It also means I can re-use bits of configuration without duplication - for example, maybe staging and production share a common configuration stanza - I can now factor that out into a separate file.

                                          • Function abstraction gives me a way to keep my configuration DRY. For example, if I’m configuring nginx and multiple virtual hosts all need the same proxy settings, I can write that once, and abstract out my intention with a function that builds a virtual host. This avoids configuration drift, where one part is left stale and the rest of the configuration drifts away.
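The virtual-host example can be sketched in ordinary Python (rather than Dhall) to show the shape of the idea; all the field names below are invented for illustration:

```python
# Function abstraction keeps config DRY: one function builds every
# virtual host, so the shared proxy settings live in exactly one place.
PROXY_DEFAULTS = {
    "proxy_read_timeout": 60,
    "proxy_set_header": "X-Forwarded-For",
}

def virtual_host(name: str, port: int) -> dict:
    # every host gets the same proxy stanza merged in
    return {"server_name": name, "listen": port, **PROXY_DEFAULTS}

hosts = [
    virtual_host("app.example.com", 443),
    virtual_host("api.example.com", 443),
]

# Changing PROXY_DEFAULTS updates every host at once -- no drift.
assert all(h["proxy_read_timeout"] == 60 for h in hosts)
print([h["server_name"] for h in hosts])
```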

                                          1. 1

                                            That’s very interesting, I hadn’t thought of it like that. Do you mostly use Dhall itself as configuration file or do you use it to generate json/yaml configuration files?

                                        2. 1

                                          I finally need to implement Dhall evaluator in Erlang for my projects. I <3 ideas behind Dhall.

                                        3. 5

                                          I am not sure that there aren’t better options. I am probably biased as I work at Google, but I find Protocol Buffer syntax to be perfectly good, and the enforced schema is very handy. I work with Kubernetes as part of my job, and I regularly screw up the YAML, or don’t really know what the YAML means, so I copy-paste from tutorials without actually understanding it.

                                          1. 4

                                            Using protobuf for config files sounds like a really strange idea, but I can’t find any arguments against it.
                                            If it’s considered normal to use a serialisation format as human-readable config (XML, JSON, S-expressions etc), surely protobuf is fair game. (The idea of “compiled vs interpreted config file” is amusing though.)

                                            1. 3

                                              I have experience with using protobuf to communicate configuration-like information between processes, and the schema that specifies the configurations, including (nested) structs/hashes and arrays, ended up really hacky. I forget the details, but protobuf lacks one or more essential ingredients to nicely specify what we wanted it to specify. As soon as you give up and allow more dynamic messages, you’re of course back to having to check everything using custom code on both sides. If you do that, you may as well just go back to YAML. The enforced schema and multi-language support make it very convenient, but it’s no picnic.

                                              1. 2

                                                One issue here is that knowing how to interpret the config file’s bytes depends on having the protobuf definition it corresponds to available. (One could argue the same is true of any config file and what interprets it, but with human-readable formats it’s generally easier to glean the intention than with a packed binary structure.)

                                                1. 2

                                                  At Google, at least 10 years ago, the protobuf text format was widely used as a config format. The binary format less so (but still done in some circumstances when the config file wouldn’t be modified by a person).

                                                  1. 3

                                                    TIL protobuf even had a text format. It sounds like it’s not interoperable between implementations/isn’t “fully portable”, and that proto3 has a JSON format that’s preferable… but then we’re back to JSON.

                                            2. 2

                                              JSON can be validated with a schema (lots of tools support it, including VSCode), and it’s possible to insert comments in unused fields of the object, e.g. comment or $comment.

                                              1. 17

                                                and it’s possible to insert comments in unused fields of the object, e.g. comment or $comment.

                                                I don’t like how this is essentially a hack, and not something designed into the spec.

                                                1. 2

                                                  Those same tools (and often the system on the other end ingesting the configuration) often reject unknown fields, so this comment hack doesn’t really work.

                                                  1. 8

                                                    And not without good reason: if you don’t reject unknown fields it can be pretty difficult to catch misspellings of optional field names.
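A sketch of why strictness helps: a validator that rejects unknown keys can also point straight at the likely typo, using only the stdlib (the field names are hypothetical):

```python
import difflib

KNOWN_FIELDS = {"host", "port", "timeout", "retries"}

def check_config(config: dict) -> list[str]:
    """Reject unknown keys, suggesting near-miss spellings."""
    errors = []
    for key in config:
        if key not in KNOWN_FIELDS:
            hint = difflib.get_close_matches(key, KNOWN_FIELDS, n=1)
            msg = f"unknown field {key!r}"
            if hint:
                msg += f" (did you mean {hint[0]!r}?)"
            errors.append(msg)
    return errors

# "timeuot" would be silently ignored by a lenient parser; a strict
# one can flag it and suggest the intended name:
print(check_config({"host": "db1", "timeuot": 30}))
```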

                                                    1. 2

                                                       I’ve also seen it become harder to add new fields when you don’t reject unknown fields: you don’t know who’s using that field name for their own purposes and sending it to you (intentionally or otherwise).

                                                  2. 1

                                                     Yes, JSON can be validated by schema. But in my experience, JSON schema implementations are widely diverging, and it’s easy to write schemas that only work in your particular parser.

                                                  3. 1

                                                    JSON is unusable (no comments, easy to make mistakes) as a config file format.

                                                    JSON5 fixes this problem without falling prey to the issues in the article: https://json5.org/

                                                    1. 2

                                                      Yeah, and then you lose the main advantage of json, which is how ubiquitous it is.

                                                      1. 1

                                                        In the context of a config format, this isn’t really an advantage, because only one piece of code will ever be parsing it. But this could be true in other contexts.

                                                        I typically find that in the places where YAML has been chosen over JSON, it’s usually for config formats where the ability to comment is crucial.

                                                  1. 4

                                                    It’s always been slightly weird to me that all these patch algorithms seem to operate at the level of individual lines of text, yet programs are not semantically broken up into lines.

                                                    Fortunately for VCS systems, programmers tend to write programs so that semantically separate chunks are on successive lines, so these merge algorithms do the right thing most of the time. But wouldn’t an algorithm that was capable of breaking patches on lexical boundaries for the programming language in question give better results?
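The line-granularity point is easy to see with Python’s stdlib difflib: a change of indentation alone, semantically irrelevant in most languages, shows up as a full delete-plus-insert of the line:

```python
import difflib

before = ["def f(x):\n", "    return x + 1\n"]
after  = ["def f(x):\n", "  return x + 1\n"]  # only re-indented

# unified_diff works purely on lines, so the re-indented line is
# reported as removed and re-added, not as "unchanged code".
diff = list(difflib.unified_diff(before, after))
print("".join(diff))
```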

                                                    1. 2

                                                      AFAIK even semantic merging of XML documents is still a hard problem being researched in academia. Perhaps with some more advanced AI, we’ll eventually get there…

                                                      1. 2

                                                         Yes, but it would no longer be a general difftool; you now need to know which lexer to use. E.g. Git supports choosing a difftool based on file extension.

                                                         Given that different versions of the same language can have different rules, you will soon have a difficult task maintaining a list of known diffing tools.

                                                        1. 2

                                                          Sure, but at the same time we’re currently in a situation where whether a merge is correct can depend on the indentation choice the programmer made. That’s not exactly an ideal situation either!

                                                        2. 1

                                                          If your comment means “why don’t the tools know about the structure of the language”, the problem is - languages change. Syntactic constructs get added and removed, so the parser for code today may not parse that same code a year down the line. Thus baking language specific constructs also requires baking a lot more information in, and potentially dealing with diffs between two different schemas of ASTs, rather than just two different ASTs of the same schema.

                                                        1. 2

                                                          This link requires a subscription. Could someone share the non-subscription version?

                                                          1. 2

                                                            LWN articles are only freely available one week after publication.

                                                            1. 1

                                                              My understanding is that there is a special link that can be used to link a free version of the page. LWN don’t exactly encourage it, but they don’t discourage it. Could be wrong, been a while since I was a member.

                                                              1. 2

                                                                Fixed.

                                                              1. 3

                                                                Did anyone ever manage to come up with a watertight formalization of Darcs’s patch theory?

                                                                1. 3

                                                                  Isn’t that what Pijul is trying to do?

                                                                  1. 1

                                                                     Pijul is coming up with a watertight theory of version control, rather than formalizing Darcs’s patch theory. That said, both use patches, so maybe there will be some cross-pollination.

                                                                    1. 1

                                                                      Pijul is doing patch theory. They base it on pushouts.

                                                                      I honestly don’t see how this helps, but they seem to be trying.

                                                                      1. 1

                                                                         I know they are doing a patch theory, but I was trying to say that there’s nothing there that tries to implement Darcs’s theory of patches. I may be adding more confusion than necessary, though.

                                                                  2. 2

                                                                    Camp has been working on a Coq proof of it.

                                                                    1. 2

                                                                      Sounds like a cool project, but it doesn’t seem to be active anymore. The repo hasn’t been updated in two years and the mailing list is full of spam. Have the changes been merged into darcs already?

                                                                  1. 5

                                                                    It says that users will be informed when they brew update, so that doesn’t seem particularly silent to me? I think the title of this post is throwing more fuel on a fire than necessary.

                                                                    1. 1

                                                                      I used to use fish, but I switched back over to zsh with ohmyzsh. I use konsole as my terminal emulator, as it supports ligatures provided by Pragmata Pro.

                                                                      1. 3

                                                                         Is it really so hard to make a Haskell library callable from Python? If you really need C linkage, you can export a C interface from a Haskell program; couldn’t you just call that from Python?
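The plumbing on the Python side is straightforward with the stdlib ctypes module. A Haskell library built with `foreign export ccall` produces ordinary C symbols in a shared object; in this sketch libc stands in for such a library so the example stays self-contained:

```python
import ctypes

# On Linux, CDLL(None) opens the running process's global symbols,
# which include libc. Loading a Haskell-built shared object would
# instead be ctypes.CDLL("./libmything.so") -- a hypothetical name.
libc = ctypes.CDLL(None)

# Declare the C signature of the function we're calling.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```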

                                                                        1. 3

                                                                          We now have https://github.com/nh2/call-haskell-from-anything, but that didn’t exist in 2013

                                                                          1. 1

                                                                            A serialization library (especially a performance-oriented one) probably shouldn’t be calling a serialization library.

                                                                        1. 2

                                                                          I wonder if starting with the functor and the free monad is actually helping here - it seems like a lot of work with the potential for pitfalls. Would it be better to start by defining a canonical representation for the effect (r->a for reader, [message] for logging) and then defining the injection from that into Free and then defining the inverse such that Free composition does the correct thing (i.e. doing the “normalization by evaluation” in reverse). Or equivalently, just defining what monadic bind does for our type i.e. not using the Free construct at all?

                                                                          The main advantage of the Free construct is that we have composition by Functor composition. But that only punts the problem of composition to the interpreter. I’m not sure whether “‘algebraic” as defined here is equivalent to being distributive/commutative, but effects that commute with / distribute over each other are trivial to handle in any number of ways. The difficult part is when they don’t, e.g. when combining try/catch with logging, it’s not at all clear whether we should skip a log message in an operation that’s skipped due to an exception. In mtl-land the order of stacking the monads expresses the order in which their effects will be applied, and it’s cumbersome because it forces the user to actually express that. The linked paper seems to duck the problem by saying that catch is not an algebraic effect. “the denotation of a blurb of code is the composition of the denotation of its pieces” helps us not at all because the denotation of a try/catch and the denotation of a log don’t compose.

                                                                          Straightforward composition of effects that distribute is pretty trivial (I did so in scalaz-transfigure; I did require the user to use type annotations to express the desired order but that could easily be dropped), and “distributes over any monad” effects can be expressed straightforwardly via a typeclass; I don’t know whether this page represents innovation in terms of doing it efficiently (less interested in that). But I think it’s completely ducking the really relevant question, which is finding good ways to express effects that don’t necessarily distribute.

                                                                          1. 1

                                                                            The problem is how do you know to start with r -> a or [message]? That was not obvious to me, but this gives me a systematic approach to getting there. Which is the linked paper? I’m not seeing a link :)

                                                                            1. 1

                                                                              Sorry, I meant http://gallium.inria.fr/blog/lawvere-theories-and-monads/ .

                                                                               To me it’s more natural to think “I want a reader, this is r -> a” than to think “I want a reader, this is something that obeys the law (get >>= (λ s → get >>= λ s' → k s s' )) ≃ (get >>= λ s → k s s )”. I guess if you have an algebraic definition rather than an extensional one, and you can’t see a type that “happens” to be the quotient you want, then the technique would be useful - I’m just not convinced that would happen in practice.

                                                                          1. 1

                                                                            I’ve gotta say, if there’s one thing the article made me agree with, it’s the original quote:

                                                                            The first rule of C is don’t write C if you can avoid it.

                                                                            This stuff looks tricky! :)

                                                                            1. -1

                                                                              Eh. This guy is just a standards wanker who doesn’t actually offer anything of value. Yes technically a byte is not always 8 bits. But the original title was “How to C in 2016” not “How to C in the Dark Ages.” The only thing I felt was actually worth saying was the bit about ptrdiff_t / intptr_t.

                                                                              1. 6

                                                                                Can we be critical without resorting to unnecessary ad hominem attacks? While the article is certainly towards the more pedantic side, I think it’s important to truly understand what’s going on in any language I work in.

                                                                                1. 1

                                                                                   I agree that it’s good to understand a language. But as a critique of the original post, I don’t find this post all that useful. The original provided a decent practical basis for modern C programming. It has its problems, and the author responded to those problems as they were brought up, including the legitimate criticisms from this critique. But for a critique that states it “is intended to be constructive criticism,” it sure contains a lot of stuff that doesn’t matter at all.

                                                                                  And I stand by my claim of standards wankery, because he writes stuff like this:

                                                                                  By emphasizing “modern” systems, you ignore both old systems (like the old x86 systems that exposed segmented addressing with “near” and “far” pointers) and possible future systems that might be perfectly compatible with the C standard while violating “modern” assumptions.

                                                                                  The old systems are so old that a modern programmer shouldn’t try for code compatibility with them—at all—unless they have a particular reason to. It’s a waste of time.

                                                                                  And while the possible future systems case is interesting to speculate about, they are so far off that planning for them in this way is an even bigger waste of time. Right now size_t equals pointer size which is 32 or 64 bits. No mainstream hardware manufacturer is going to get around the 32 bit address space with segmentation. For 64 bits we won’t need anything like segmented memory again until our address spaces grow larger than 16,777,216 terabytes.

                                                                                   I’ll admit that I don’t have much patience for standards wankery, maybe because I’ve read a lot of it already, but I would have liked the critique a lot more if he hadn’t spent any time reminding all of us that float technically could be 64 bits.

                                                                                2. 1

                                                                                   There are current-day architectures where char isn’t 8 bits. For instance, the SHARC DSP, which has 32-bit chars.

                                                                                1. 3

                                                                                  I’m sure you can figure out ways to break it, but it’s a waste of your time (and you need to get a life).

                                                                                  I’m sure it was meant innocently but this really doesn’t read well. Never attack your readers.

                                                                                  1. 2

                                                                                    Indeed, and I think learning how to break this would be a very rewarding and educational experience. That - to me - is a life.

                                                                                    1. 2

                                                                                      yep that definitely came across too harsh.

                                                                                    1. 6

                                                                                      I made a remark about this on Twitter, but - now that I think about it - it’s probably worth repeating here. The essence of the problem here is that selection is not a homomorphism, and that is leading to confusion. A homomorphism is somewhat of a “structure-preserving” operation. For example, I have

                                                                                      length [a,b] = length ([a] ++ [b]) = 2

                                                                                      length satisfies the homomorphism property, as

                                                                                      length ([a] ++ [b]) = length [a] + length [b] = 2

                                                                                      The main source of API confusion arises because querySelectorAll is not a homomorphism, though intuitively we expect it to be (I would consider that an API bug).

                                                                                      Being aware of this basic mathematical property goes a long way when it comes to designing APIs.
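
                                                                                      A concrete (and purely illustrative) Haskell sketch of the distinction: length respects (++), while a function whose answer can straddle the split point does not - much as a selector can match across the boundary between two separately queried subtrees.

                                                                                      ```haskell
                                                                                      -- length is a homomorphism from (lists, ++) to (Int, +):
                                                                                      -- splitting the input never changes the combined answer.
                                                                                      lengthIsHom :: [a] -> [a] -> Bool
                                                                                      lengthIsHom xs ys = length (xs ++ ys) == length xs + length ys

                                                                                      -- Counting adjacent equal pairs is NOT a homomorphism: a pair can
                                                                                      -- straddle the split point, the way a descendant selector can match
                                                                                      -- across the boundary between two separately queried subtrees.
                                                                                      adjacentEqualPairs :: Eq a => [a] -> Int
                                                                                      adjacentEqualPairs xs = length (filter (uncurry (==)) (zip xs (drop 1 xs)))

                                                                                      main :: IO ()
                                                                                      main = do
                                                                                        print (lengthIsHom [1, 2] [3 :: Int])       -- True, for any two lists
                                                                                        print (adjacentEqualPairs [1, 1, 1 :: Int]) -- 2 on the whole list...
                                                                                        print (adjacentEqualPairs [1, 1 :: Int]
                                                                                                + adjacentEqualPairs [1 :: Int])    -- ...but only 1 on the parts
                                                                                      ```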

                                                                                      1. 7

                                                                                        To understand ScottyT, you’d need to understand ReaderT because that’s basically what it is

                                                                                        I fundamentally disagree with this. To understand scotty you need to understand that there is some context you can work in that gives you the computations you need to build a web application. That context is formed by ScottyT over some other monad. You do not need to understand the intricate details of monad transformers, and you certainly don’t need to understand the monads that Scotty was built on just to use Scotty.

                                                                                        1. 4

                                                                                          You could use Scotty without understanding it, but my point is precisely about understanding.

                                                                                          You’ll thrash around and give up when you need to do something you can’t copy from previous demonstrations.

                                                                                          The point is about learning and equipping people to learn the rest of the ecosystem. I don’t care that somebody could follow along with a prebaked example, type it in, and fire it up and have a web server unless they are somehow learning the language in the process. In my experience, that’s not what results from doing this and it often leads to disappointment.

                                                                                          My co-author’s 10 year old son prefers learning Haskell with our book to the Minecraft modding tutorials he has precisely because he feels like he’s actually learning how things work in our book, whereas the Minecraft stuff is having him type in Java stuff without explaining how anything works. These are commercial Minecraft modding tutorials ostensibly written for children.

                                                                                          I explicitly mention an alternative approach that lets somebody dive into “practical” projects more quickly but it requires things like an in-person teacher to work more than rarely.

                                                                                          1. 2

                                                                                            I don’t care that somebody could follow along with a prebaked example, type it in, and fire it up and have a web server unless they are somehow learning the language in the process.

                                                                                            That is hyperbole, and not what I was suggesting. You can teach someone the ability to learn an API from nothing without having them open up the source code and learn how it works. You can teach them how to develop an intuition for navigating the types in Haddock documentation - for example, what does it mean for a function to be an “entry point” (something akin to runScotty)? What do I have to provide this entry point, and how do I produce it?

                                                                                            Then you can teach them a little more about extension points and type classes, and how they can discover these through documentation. Haddock now generates concrete type signatures for instances, so your liftIO example can be discovered if one were to just expand the MonadIO type class and see liftIO :: IO a -> Scotty a, or whatever scotty’s monad is.
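
                                                                                            To sketch what that discovery looks like in code - using ReaderT from transformers as a stand-in for scotty’s actual monad, since any transformer with a MonadIO instance exposes the same liftIO entry point:

                                                                                            ```haskell
                                                                                            import Control.Monad.IO.Class (liftIO)
                                                                                            import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

                                                                                            -- ReaderT String IO stands in for scotty's monad here; the point is
                                                                                            -- only that its MonadIO instance gives us liftIO :: IO a -> m a.
                                                                                            greet :: ReaderT String IO String
                                                                                            greet = do
                                                                                              name <- ask
                                                                                              liftIO (putStrLn ("hello, " ++ name)) -- an arbitrary IO action, lifted
                                                                                              return ("greeted " ++ name)

                                                                                            main :: IO ()
                                                                                            main = runReaderT greet "world" >>= putStrLn
                                                                                            ```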

                                                                                            I don’t think what I’m suggesting here is giving people a prebaked example, but rather teaching them how to read the maps that are already presented to them, in order to get to their chosen destination as quickly as possible. I see similarities in what we’re suggesting, but my preference is a little more “top down”, whereas you want to approach things from the bottom up.

                                                                                            My co-author’s 10 year old son prefers learning Haskell with our book to the Minecraft modding tutorials he has precisely because he feels like he’s actually learning how things work in our book

                                                                                            Great, and I’m all for this style of learning - I share a common ground with that 10-year-old son in that I also like to learn how things work. That said, many people - myself included now - simply do not have the time to learn how things work under the hood if we’re trying to learn something for a career change. It’s also worth noting that the 10-year-old son does not have many years of experience building programs in other languages (my assumption could be false). When you already have such comfort building software in other languages, I think demanding so much rigour without letting them play openly could be a quick path to having them put the book down. Of course, these people may not be in your target audience, in which case this point can be dismissed.

                                                                                            I think there is a balance to be reached between exhaustive understanding and tangible results.

                                                                                          2. 4

                                                                                            Failing to be effective at Scotty because I didn’t understand monad transformers was why I stopped using Haskell for a project and just went back to Common Lisp; and the project I couldn’t finish after a few weeks in Haskell took a few hours — simply because I’m more fluent in CL. It was definitely a stumbling block for me.

                                                                                            1. 5

                                                                                              To build on what you’re saying, how are you supposed to know how to use an arbitrary IO action unless you know liftIO? You can’t so much as print something without knowing to use liftIO when you’re in ScottyT.

                                                                                              It turns into turtles (turtles := cargoCulting) all the way down if you follow this unconstructive method of teaching the language.

                                                                                              1. 2

                                                                                                I had a really good experience “cargo culting” Scotty using this post as a guide. It said use “liftIO” for one thing, so whenever I ran into something like that, I used liftIO! And after a while of using it (and finding lots of places I couldn’t use it, and figuring out why), I understood what it does. But that’s just how I learn, I know it doesn’t work for everyone.

                                                                                                1. 0

                                                                                                  You were already a programmer to begin with right?

                                                                                                  We’re writing for a larger audience than just experienced programmers and even among those I’ve seen many get frustrated with not knowing what’s going on. I actually just had someone in IRC mention they’re happier with the book than LYAH because they felt like they weren’t being shown what was actually going on.

                                                                                                  Your experience isn’t implausible to us at all, but that’s part of the reason we wanted to write this post: many programmers take their experiences for granted as representing what most learners will experience if they attempt to learn the same concepts.

                                                                                          1. 15

                                                                                            I appreciate the sentiment here, but it seems to forget two points:

                                                                                            1. A lot of the software we write was never intended to really have users. It scratches our itch, and we move on. If a user picks this software up, it feels incomplete, but at that point we’ve moved on.
                                                                                            2. What does it mean to even be complete? Software doesn’t really ever become finished, so finishing software feels like a very difficult goal to reach.
                                                                                            1. 15

                                                                                              libxml2 is complete. Look at the release history: there hasn’t been a major feature added to the library in years, despite it being one of the most popular packages. 98.96% of hosts with popcon installed have libxml2, yet most of the work put into it is security fixes and OS/400 compatibility.

                                                                                              1. 13

                                                                                                I really love software that is complete.

                                                                                              2. 5

                                                                                                Yes, fully agree. Half of the stuff I have on GitHub is there mostly because, why not? I have zero plans to support it, or even finish half of it. Most of the time you get further into the weeds and realize, well, I really don’t need this that badly for the effort it will take to do this right. Or you decide to backburner it and give it a think because you don’t feel you understand the problem well enough yet.

                                                                                                Also, I worry about this idea that software is never finished. I feel like we’re in a weird perpetual loop, or maybe a rut, just spinning our wheels and not going anywhere with all this churn. I want done tools. Or tools that do the thing they need to do well enough that they are effectively done. All this constant wheel reinvention is maddening.

                                                                                                1. 3

                                                                                                  I hear you on that, and I see both sides. I think software is at the intersection of tool and art. One should strive to reach the goals that are reachable, and covering the same ground every decade is beyond frustrating.

                                                                                                  My personal pet peeve is self-describing data formats, which, with the amount of ire they have always drawn, surely would not keep being reinvented if they didn’t fill a real need… The lack of obvious progress is quite demotivating.

                                                                                                  Just to balance that with a less depressing example: if one looks at language design, there have been an enormous number of new ideas, everything’s being explored at once right now, and the sheer variety that exists is kind of wonderful. If languages are ever a “finished” problem, computing will be in a much better state than it is today.

                                                                                                  On the gripping hand, I don’t actually think that stability in software is always a sign of it being good. I get frustrated with tools like bash, which does get its job done, and if it’s had any movement in the past decade I haven’t noticed it… but I absolutely hate using it. I wouldn’t view that as a problem the authors of bash are obligated to address; clearly, they’ve made the artistic statements they have to make, and it’s unreasonable to expect them to have more indefinitely. But that leaves room in that area for somebody else to say something new.

                                                                                                  1. 4

                                                                                                    I will admit, self describing data formats to me really probably should end up being a lisp or maybe a scheme, just evaluate/execute the data and get what you want out of it. But I’m a weird person that way. >.< I’ll admit I haven’t thought this through that much.

                                                                                                    And agree on languages, I’m not saying stop developing new languages to be sure, nor do I think herding all those cats will be possible to ever do. Take the recent Haskell AMP change. Language theory in the past decade has changed how we view things, and for the better in my opinion. Onward and upwards in that regard, even if it is painful. I hope for more pain in getting to dependent types in Haskell. But I don’t consider languages to be in the tool category.

                                                                                                    As for bash, I’ll agree that it sucks (joking here), but I’m a zsh heathen so of course I’d say that. But more to the point, I would consider most shells to be “done” at this point. bash, for example, does keep piling more stuff on, as does zsh, but fundamentally they tend to still serve their original, and by this point ancient, root purposes of a command shell and interpreter. Keeping backwards compatibility here is a required feature.

                                                                                                    As for it not being modern, sure, I don’t disagree, but replacing it might mean a lot of false starts and we might end up back where we started. If whoever decides to create a “better *sh” doesn’t look at the plan9 shell, tcsh, csh, ksh, bash, zsh, and fish (I’m sure I missed some unicorn shell or other), I think we’ll just end up where we are, with 40% solutions that in the end leave a bitter taste.

                                                                                                    I know I feel that way about some new things recently. That or maybe I’m just old now and need kids to get off my lawn. :D

                                                                                                    But the more I look and read papers from the 60’s, 70’s, and so on, the more I realize we reinvent the same crap over and over again, more often than not poorly. Malign APL all you want, but it is a great language and there is a LOT to learn from it that directly relates to our problems today.

                                                                                                    1. 2

                                                                                                      Heh - I’ve thought it through quite a lot. I don’t feel up for giving the elevator pitch of my pet project right now, but the thing about using a procedural rather than a declarative description of data is that you lose the ability to operate on it with tools that know some things about its structure, without knowing everything - for example, “this is an array and here is a comparator for its items; sort it”. Anyway, I think solutions are possible.

                                                                                                      Languages are indeed definitely much more art than tool, but at the same time they need to work, or they’re useless. The thread at first wasn’t distinguishing between types of software - “finish your stuff” doesn’t say what type of stuff - and I’m glad it has now become more nuanced. :)

                                                                                                      I agree with you also that replacing things doesn’t automatically make them better. This is why I feel the art analogy is so strong. There have been periods in art history where everything got very cliquish and, despite there being a lot of people making things, they were all making more or less the same thing, over and over. Somebody has to actually have a new, better idea, at some point. And that almost always has to be informed by what has gone before, or it’ll just restate what’s already been said.

                                                                                                      So, yes, I was specifically not making a call to arms to rewrite bash. Not unless somebody feels they understand the problem space and the solutions we’ve seen so far, at least! I feel like in many ways the Lisp-based shell on Genera was better than anything one can easily run today, but I also don’t think it would be useful today. It’s a hard problem.

                                                                                                      And yeah, I am sure I’ve read different things than you but I make it my business to read papers on historical innovations in computing. I completely agree that a great deal of “new” work isn’t.

                                                                                                    2. 1

                                                                                                      i think a lot of it ties in to the other point the article was making, which is to define the scope of your project so cleanly and narrowly that it is self-evident when it is finished. that project can then be used as a clean, reliable building block in other software. i have to admit that it is a very attractive ideal to aim for, even if i seldom get there in my own stuff. (in large part, because the tooling ecosystem encourages developing projects as monoliths; developing, versioning and testing parts independently can get fairly messy.)

                                                                                                1. 8

                                                                                                  I agree that having to use grep to find packages is tedious, but do note there are alternatives. One is nox, which provides a cache and a nicer search interface. The other is the web-based search page, but that does impose a bit of a context switch.

                                                                                                  1. 3

                                                                                                    Georges Dubus actually did reply on Twitter, pointing out nox. I’ve updated the story to reflect that. Thanks for mentioning these alternatives.