Threads for gnubison

    1. 17

      Eh, when I’m composing longer text like this, I tend to write in Markdown, with a line per sentence. It’s easier to rearrange sentences, and when collaborating, git works much better, and the diffs are much easier to read.

      1. 14

        I do the same; it's called Semantic Line Breaks, which effectively makes the argument in the OP moot.

        1. 8

          It’s a really good idea, but I have to wonder if the time you save rearranging sentences actually makes up for the extra time spent explaining semantic line breaks to contributors.

          In the end I always read these things (Semantic line breaks and the OP) and think like “yeah, I should do that!” and then I end up forgetting a few days later and going back to one space.

          1. 2

            It’s difficult, especially if you don’t have a linter that can point it out to people and enforce it.

            You don’t necessarily need to go through and change all old text. You can just migrate text to use sembr whenever you edit it.

          2. 1

            that’s why having the website’s nice 😄

          3. 1

            I don’t save much time rearranging sentences (though I do save some) but I save a lot of time from not having merge conflicts when two people edit different sentences in the same paragraph. Telling contributors ‘one line per sentence, it will make merging easier’ doesn’t take long.

        2. 5

          SemBr appealed to me at first, but I disliked it when I tried using it. When you have short clauses, the source ends up looking weirdly like poetry (or like it’s trying to be a poem).

          1. 1

            I tend not to be that strict with sembr; I mostly put in a line break when I type a period, or sometimes a comma if I've written a long sentence.

        3. 4

          I only use SemBr with line-based markup formats, which these days means only mandoc. When writing markdown I try to make it look as much as possible like I intended the source to be the version that people read - after all, that was the point of markdown, and it fits my habits from learning usenet netiquette at an impressionable age.

        4. 4

          I use that too when I'm writing manual pages: the roff(7) syntax assumes that intraline periods are for abbreviations, so it won't split the line at them. It's a very convenient convention; I just find it distracting to read my own text with the line breaks.

          Anything else, I two space and get the benefits of greater readability and vim/emacs/fmt/etc compatibility.

        5. 2

          I don’t care if document authors use semantic line breaks “behind the scenes,” but when they escape into the rendered version—i.e., when the user-visible paragraphs have line breaks in the middle of them—they are distractingly unprofessional-looking.

          This tends to be especially common on Stack Exchange sites, for some reason; maybe their Markdown parser renders intra-paragraph line breaks as real line breaks.

          1. 2

            Funnily enough, many markdown generators consider two spaces at the end of a line as a line break. I typed enter once before this sentence, but it’s not in a new paragraph. It obeyed semantic line breaks. But now I’ll type two spaces after this colon:
            This sentence also had only one newline before it, but the two spaces at the end of the line before it added a line break.
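
            (For anyone who wants to see the mechanics, here is a sketch of what such Markdown source looks like, with the invisible trailing spaces marked as ␣:)

            This line ends with two spaces,␣␣
            so a hard line break is rendered here.
            This line ends with a bare newline,
            so it joins the previous line in the same paragraph.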

            1. 3

              Yeah, I think that’s a fairly widely-supported Markdown thing. Trailing whitespace is usually considered sloppy, if not an outright mistake; this is the only context I know of where trailing whitespace is actually semantically meaningful.

          2. 1

            GitHub’s markdown renderer does this too. I seem to remember hacking my markdown exporter to spit out paragraphs as a single line to get around it.

            1. 2

              Hmm? I think that’s an issue on comments in PRs, but not for Markdown files in repos.

              I’ve collaborated on a couple of books in Github repos, and the one with line-per-sentence / semantic line breaks looked as expected, while the one with paragraph-per-line had frequent merge conflicts.

              1. 1

                Oh, maybe. Comments and also the PR summary itself. (Which I tend to go to town with.) Weird that they should use different renderers for Markdown files vs Markdown text in comments etc, huh?

        6. 1

          Glad to know there's several of us weirdos. Though I always broke my lines at 80 characters, a sentence spanning multiple lines if need be. Not all editors will navigate a wrapped line very effectively.

        7. 1

          Is it though? AFAIU SemBr is not just about one line per sentence, it’s that plus adding line breaks where it makes sense.

      2. 6

        Git doesn’t work better or worse with semantic line breaks as such, but its built-in diff and merge algorithms are indeed line-based. The reason I’m making this distinction is that you can use different diff and merge algorithms per file type, which opens up things like using word-based diffing tools (e.g. wdiff or wiggle) for Markdown/org/whatever files while keeping line-based diffs elsewhere.
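
        As a sketch of what that per-file-type configuration can look like (the driver name and wrapper script here are illustrative, not a recommendation):

        # .gitattributes
        *.md diff=prose

        # .git/config or ~/.gitconfig
        [diff "prose"]
            command = wdiff-wrapper.sh  # git hands this script the old and new file paths (among 7 arguments)

        The analogous merge.<name>.driver setting is one place a tool like wiggle can be hooked in.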

        1. 2

          I knew about wdiff, but not wiggle! That’s … potentially useful? It’s hard to get a sense of it from the repo.

          1. 1

            Behind the scenes, wiggle works effectively by exploding all spaces to line breaks, doing a diff on that, and then stitching it back together for editing/approval/etc. This does occasionally break in fascinating ways, but it’s served me well for…what, gotta be near a decade at this point.
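
            Conceptually it’s something like this (an illustration of the idea, not wiggle’s actual pipeline):

            tr ' ' '\n' < old.txt > old.words
            tr ' ' '\n' < new.txt > new.words
            diff old.words new.words   # word-level hunks, later stitched back into whole lines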

      3. 3

        When I learned LaTeX, this was recommended because it made CVS work better. I kept using it with Markdown and AsciiDoc, and Subversion and git. It significantly reduces editing conflicts, means I don’t need to reflow text, and makes moving sentences around easier.

        When I got the draft of my first book back from the copy editor, entirely full of annotations telling me to move text around, I was very grateful to whichever LaTeX guide told me to do this. I’d made my life much easier.

        I’ve since insisted on it in other papers and in docs for my open source projects. Usually people grumble, think it’s weird, then find it’s saved them a pile of work and start doing it elsewhere.

      4. 1

        This works well. Brian Kernighan recommended it on page 10 of UNIX for Beginners in 1974 :)

    2. 22

      The real reason: the name is just an identifier, while a type can be an identifier or many other sorts of things, like *i32 or [u8] or Vec(T), depending on what your language is. It’s easier to make a parser rule unambiguous when it starts with something as specific as possible, which lets you easily make types more complicated without them needing a crazy-pants syntax like C function types. So if your type rule is complicated, and maybe can look a lot like a non-type value in some cases like foo[4], then having

      vardecl ::= IDENT type [EQUALS expr] SEMICOLON

      is a lot easier to handle nicely than

      vardecl ::= type IDENT [EQUALS expr] SEMICOLON

      It’s not a huge difference, you can make the latter work, but it’s enough of a difference that there’s no real reason to do int age. It also makes type inference syntax easier, since you can just omit the type but every var decl still starts the same way, instead of either needing a keyword like auto or trying to read “thing-that-may-be-an-ident followed by an ident but if it’s not followed by an ident you might have to backtrack a bit and try something else”.

      Having the syntax be let ... = ... makes this even easier, but afaict the purpose of that is more to make it possible to not need a return statement every time you want to return a value. Otherwise you might end up with an expression that’s just x; and your parser doesn’t know whether it’s declaring a variable or returning the value of it.
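
      Spelled out in the same notation, a sketch of how a let keyword resolves that:

      stmt ::= LET IDENT [type] [EQUALS expr] SEMICOLON
             | expr SEMICOLON

      One token of lookahead (LET or not) picks the branch, so a bare x; is unambiguously an expression statement.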

      1. 9

        It is really crazy how the generation who made Unix and the IP stack and all that other elegant software and protocols made C’s function pointer and variable modifier syntaxes. There must be some insight that eludes me.

        1. 17

          The insight is that C was a big improvement over what they were using before. This was not a time of 100 languages to pick from, and the hardware determined what you could even pick. They could do PDP assembly, maybe BCPL, and after that they had to invent something better from scratch. C was objectively better.

          Plus they were absolute cream of the crop computer scientists, to whom such a thing as a slightly awkward syntax did not matter in the grand scheme of things. They had much bigger fish to fry, and they did fry them.

          1. 14

            The insight is that C was a big improvement over what they were using before

            Was it? Pascal had been around, and some might argue that it was a better language. C won simply due to UNIX betting hard on it, not out of merit; at least that’s what I have seen some older folks mention in retrospect.

            1. 11

              “Why Pascal is Not My Favorite Programming Language” by Brian Kernighan has some (IMHO valid) points about the Pascal of the day.

          2. 3

            Plus they were absolute cream of the crop computer scientists, to whom such a thing as a slightly awkward syntax did not matter in the grand scheme of things.

            They were humans, not prophets. Still are, some of them. They’re allowed to make mistakes.

            1. 2

              Other things might’ve been a mistake, I just don’t agree that a syntax decision is one. Otherwise entire languages are mistakes; even some modern ones that are just starting out are inventing incredibly awkward syntax.

              The fact that they were humans is precisely what I meant to say in the first paragraph: they did the best with what they had. It is undeniable, however, that their best is better than almost any other “best”, probably evidenced by the huge impact C/UNIX still has 50 years later.

        2. 8

          I think a good part of it is that in the ’70s C had a much simpler type system, and it grew by accretion. Would be interesting to do some archeology and try to find out for real though.

          1. 1

            I’m re-reading SICP and in one footnote there’s a reference to abstract data types being researched, but the first cited papers are quite late, from around 1978 (if I recall correctly). Maybe that’s when it started getting more widespread traction. ML started earlier than that, but my guess would be that by the early 1970s it’d be restricted to provers and more theoretical applications.

            1. 5

              See Barbara Liskov’s Turing lecture. She covers the history of abstract data types in some detail.

              1. 1

                Great tip, will check it out.

        3. 8

          The type syntax is not “type var1, var2” but “simple-type expr1, expr2”, where simple-type is something like an int, double, or struct. For example:

          int c, *p, arr[5];
          

          Here the expressions c, *p, and arr[5] (shhhh, ignoring that the fifth element is out of range) will all have type int. A function pointer declaration like

          int (*myfunction)(int, int);
          

          is saying that (*myfunction)(some_int, some_int) will have the simple type int.
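
          (A common idiom that falls out of this: name the function-pointer type once with a typedef, and later declarations read left to right.)

          typedef int (*binop)(int, int);   /* binop is "pointer to function (int, int) returning int" */
          binop myfunction;                 /* same type as: int (*myfunction)(int, int); */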

        4. 4

          I believe the Chomsky hierarchy and similar theories were not well known to them at the time. That’s why C is infamously not context-free from a grammar standpoint.

          1. 7

            Not really, we got BNF with Algol (the language which I think introduced the Algol-like type name declaration syntax).

            Also, I think C, as originally designed, was LL(1). Remember, you used to need to write struct MyStruct my_var, so every type started with a keyword. It’s only after the introduction of typedefs (a later feature) that the grammar became ambiguous and started requiring the lexer hack.
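
            The classic example of that ambiguity, with T and foo as placeholder names:

            T * x;     /* if T names a typedef: declares x as a pointer to T */
            foo * x;   /* if foo is a variable: multiplies foo by x          */

            Only the symbol table can tell the two apart, which is exactly the lexer hack.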

    3. 19

      This place scratches the itch for me.

      1. 6

        I love how quiet Lobsters is compared to HN or big subreddits. It’s marvelous how good links roll in without the fanfare of other forums’ flamewars.

      2. 4

        Agree, I almost wonder what answers they expected.

        Sure we can mention Fosstodon and infosec.exchange and floss.social and functional.cafe but I don’t actually use any of those directly… I use Lobsters.

      3. 3

        Yep, this and one, maybe two discord servers that I got drawn into by pure chance and are technically not about general computing.

    4. 5

      I love the note on discontinuing Ultrix support: “the last release was in 1995.” 28 years of Perl support for an obsolete platform is quite different from the compatibility guarantees of more “modern” languages…

      1. 3

        Yeah, and this is a large part of why I’m writing my personal Gemini & Web server thing in Perl 5. It’s far from my favourite language, but it lets me be a lot more flexible in choice of development and deployment OS.

      2. 1

        I wonder when someone last actually tried building it on Ultrix. Or did they actually have a CI build for that?

    5. 3

      curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install

      Can we please please stop this

      1. 24

        we’ve had this discussion before, it always comes full circle:

        What is your solution to improve upon this shell piping?

        • Verify Checksum: Coming from the same source, you have to trust them
        • download a .deb: Coming from the same source, you have to trust them and they have to provide a ton of different package formats
        • apt install: it’s new, there is no shipped version
        • docker: this isn’t made for docker - also docker doesn’t mean security
        • VM: neat, but how do I get from the “vm” testing step to my actual installation? I could just pipe this shell command in the VM anyway, so no need to provide a VM image (which format?).
        • snap/flatpak/appimage… which one of them, and are they even useful for this installer? Also you’re still trusting the authors and now the snap servers too

        You can only argue that downloading it to disk and then executing can leave a trace on the disk, so you can inspect the malware later on. We’re still downloading Debian ISOs straight from the host, not that big of a difference. And I bet no meaningful percentage of users checks the signatures or checksums.

        1. 5

          I sometimes see:

          curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix >installer
          less installer
          sh installer install
          rm installer
          

          This way the website can’t detect whether it’s being piped to sh, and you don’t get screwed over if the connection fails halfway through.

          1. 3

            Downloading it first, or putting it on some reliable hosting (where the download can’t be switched out for specific people just through a compromised developer host), is definitely one step better. But it doesn’t change much in terms of the core complaint: running shell scripts from the internet.

            1. 5

              Well, pretty much all free software boils down to “Running [executable code] from the internet”.

              The question is whom you need to trust. The connection is HTTPS, and you need to trust both your TLS setup and determinate.systems anyway to use this installer, so not much is left.

              Getting an external signature/checksum would guard you against a compromised web server, but only if you can get that checksum out of band or from multiple, independent sources.
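
              In practice that looks something like this (the URL and file names are illustrative):

              curl -fsSLO https://example.com/install.sh
              # checksum obtained out of band, e.g. from an announcement or a second mirror
              echo "<expected-sha256>  install.sh" | sha256sum -c -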

      2. 18

        If you need an offline install, or you’d prefer to run the binary directly, head to https://github.com/DeterminateSystems/nix-installer/releases then pick the version and platform most appropriate for your deployment target.

      3. 13

        Agree, it’s missing a sudo!

      4. 7

        What’s a better alternative?

      5. 5

        Sure—just as soon as all software is verifiable, which projects like this are an important step towards :)

    6. 17

      I have become one of those boring people whose first thought is “why not just use Nix” in response to half of the technical blog posts I see. The existence of all those other projects, package managers, runtime managers, container tooling, tools for sharable reproducible development environments, much of Docker, and much more, taken together points to the need for Nix (and the need for Nix to reach a critical point of ease of adoption).

      1. 28

        Well, maybe there’s a reason why nix hasn’t seen significant adoption?

        1. 10

          The Nix community has been aware of the DX pitfalls that prevented developers from being happy with the tooling.

          I’ve made https://devenv.sh to address these and make it easy for newcomers; let me know if you hit any issues.

          1. 3

            +1 for devenv. It’s boss. The only thing I think it’s truly “missing” at the moment is package versioning (correct me if I’m wrong).

          2. 2

            Love it! (As in: I haven’t had a reason to try it yet, but this is definitely the way to go!)

          3. 1

            it doesn’t appear to support using different versions of runtimes—which is the entire point of asdf/rtx in the first place. I’m not sure why I would use devenv over homebrew if I didn’t care about versions.

            1. 5

              I think the idea is a devenv per-project, not globally, like a .tool-versions file; as you say, it’d be a bit of a non sequitur otherwise

            2. 2

              Devenv, just like Nix, supports that OOTB. You simply define a different shell per project.
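
              If I’m reading the devenv docs right, the per-project devenv.nix is something like this sketch (how far back you can pin versions may vary):

              { pkgs, ... }: {
                packages = [ pkgs.git ];
                languages.python.enable = true;
              }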

        2. 6

          Primarily the bad taste that the lacking UX and documentation leave in people’s mouths. Python development is especially crap with Nix, even if you’re using dream2nix or mach-nix or poetry2nix or whatever2nix. Technically, Nix is awesome, and this is the kind of thing the Nix package manager excels at.

          1. 2

            I’ve found mach-nix[1] very usable! I’m not primarily working with Python though.

            [1] https://github.com/DavHau/mach-nix

        3. 5

          Yes, it’s way too hard to learn!

        4. 6

          because the documentation is horrible, the UX is bad, and it doesn’t like it when you try to do something outside of its bounds. It also solves different problems from containers (well, there’s some overlap, but a networking model is not part of Nix).

      2. 12

        I’ll adopt Nix the moment that the cure hurts less than the disease. If someone gave Nix the same UX as Rtx or Asdf, people would flock to it. Instead it has the UX of a tire fire (but with more copy-paste from people’s blogs) and a street team that mostly alienates 3/4 of the nerds who encounter it.

        1. 7

          Curious, did you try https://denvenv.sh yet?

          1. 4

            https://devenv.sh for those clicking…

          2. 3

            No, thanks for the link! This looks like a real usability improvement. I don’t know if I am in the target audience, but I could see this being very useful for reproducing env in QA.

      3. 10

        It’s like using kubernetes. Apparently it’s great if you can figure out how to use it.

        I’ve given up twice trying to use nix personally. I think it’s just for people smarter than me.

        1. 7

          Heh, that’s a good counterpoint. I would say, unlike with k8s I get very immediate benefits from even superficial nix use. (I do use k8s too, but only because I work with people who know it very well.) I assure you (honest) I’m not very smart. I just focus on using nix in the simplest way possible that gives me daily value, and add a little something every few months or so. I still have a long way to go!

          The How it works section of the rtx README sounds very much like nix + direnv! (And of course, I’m not saying there’s no place for tools like rtx, looks like a great project!)

      4. 4

        Nix is another solution that treats the symptoms but not the disease. I used asdf (and now rtx) mainly for Python, because somehow Python devs find it acceptable to break backwards compatibility between minor versions. Therefore, some libraries define min and max supported interpreter versions.

        Still, I’d rather use rtx than nix. Better documentation and UX than anything the Nix community has created since 2003.

        1. 4

          It’s clearly out of scope for Nix (or asdf, rtx…) to fix the practices of the Python community?

          1. 1

            Sure. It’s good that a better alternative for asdf exists, although it would be better that such a program wasn’t needed at all.

      5. 4

        Isn’t it somewhat difficult to pin collections of different versions of software for different directories with Nix?

        1. 7

          Yes it is difficult. Nix is great at “give me Rust” but not as great at “give me Rust 1.64.0”. That said for Rust itself there aren’t third party repos that provide such capability.

          1. 5

            I think you meant s/aren’t/are :)

            1. 2

              Correct. Bad typo. :)

          2. 4

            I think you are pointing out that nixpkgs tends to only ship a single version of the Rust compiler. While nixpkgs is a big component of the Nix ecosystem, Nix itself has no limitations preventing its use to install multiple versions of Rust.

            1. 4

              Obviously nix itself has no limitation; as I mentioned, there are other projects to enable this capability. While you are correct that I was referring to nixpkgs, for all intents and purposes nixpkgs is part of the nix ecosystem. Without nixpkgs, very few people would be using or talking about nix.

          3. 3

            I thought that was the point of Nix, that different packages could use their own versions of dependencies. Was I misunderstanding?

            1. 5

              What Adam means here is that depending on what revision of Nixpkgs you pull in, you will only be able to choose one version of rustc. (We only package one version of rustc, the latest stable, at any given time.)

              Of course, that doesn’t stop you from mixing and matching packages from different Nixpkgs versions, they’re just… not the easiest thing to find if you want to be given a specific package version.

              (Though for Rust specifically, as Adam mentioned, there are two projects that are able to do this easier: rust-overlay and Fenix.)

              1. 3

                This is a great tool to find a revision of Nixpkgs that has a specific version of some package that you need: https://lazamar.co.uk/nix-versions/

                That said, it’s too hard, and flakes provides much nicer DX.
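
                And once that tool gives you a revision, something like this should fetch the package from it (the rev is a placeholder here):

                nix shell github:NixOS/nixpkgs/<rev>#rustc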

              2. 3

                for Rust specifically, […] there are two projects that are able to do this easier: rust-overlay and Fenix

                The original https://github.com/mozilla/nixpkgs-mozilla still works too, as far as I know. I use it, including to have multiple versions of Rust.

              3. 3

                Alright, thanks!

        2. 3

          No I wouldn’t say so, especially using flakes. (It gets trickier if you want to use nix to pin all the libs used by a project. It’s not complicated in theory, but there are different and often multiple solutions per language.)

      6. 2

        Any pointers on how I can accomplish the same functionality of asdf in Nix?

        1. 3

          These are some quick ways to get started:

          Without flakes: https://nix.dev/tutorials/ad-hoc-developer-environments#ad-hoc-envs

          With flakes: https://zero-to-nix.com/start/nix-develop

          And add direnv to get automatic activation of environment-per-directory: https://determinate.systems/posts/nix-direnv

          Or try devenv: https://devenv.sh/

          (Pros: much easier to get started. Cons: very new, doesn’t yet allow you to pick all old versions of a language, for example.)
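
          (As a taste of the non-flakes route from the first link, an ad-hoc environment is a one-liner; the package names are just examples:

          nix-shell -p nodejs python3

          which drops you into a shell with those tools on PATH.)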

      7. 1

        I have become one of those boring people who just downloads an installer and double clicks it.

    7. 3

      I like asciidoc a lot, but the PDF rendering makes me long for LaTeX. Actually, LaTeX is the only FOSS software I’ve found that creates good-looking PDFs. But it can’t really do HTML (I know you can make it work), and it’s super verbose. I can’t help but feel that both

      <ul>
        <li>Foo</li>
        <li>Bar</li>
      </ul>
      

      and

      \begin{itemize}
        \item Foo
        \item Bar
      \end{itemize}
      

      were the things that markdown was created in reaction to.
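
      For comparison, the markdown spelling of the same list is just:

      - Foo
      - Bar

      which is presumably the whole pitch.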

      1. 6

        SILE produces beautiful PDF output (better than TeX) and has pluggable front ends. If an AsciiDoc tool chain can generate XML, then it may already be able to produce something that SILE can consume.

        1. 4

          AsciiDoc is meant as a simplified representation of the DocBook format, so any AsciiDoc document should have a canonical XML representation.

          1. 3

            If I remember correctly, SILE can consume DocBook XML, so that seems like a nice path to generating pretty PDFs. If anyone makes that work, please post a link on lobste.rs: I’d probably end up using that flow for a bunch of things.

            1. 2

              It appears that’s a pretty easy output format. I can’t say I’ve used it, but I’ve always found asciidoc a joy to write in, and its PDF output has generally been good enough for me. https://docs.asciidoctor.org/asciidoctor/latest/docbook-backend/#convert-docbook-to-pdf

      2. 2

        <ul>
          <li>Foo</li>
          <li>Bar</li>
        </ul>

        But what if you did it this way?

        <ul><li>Foo
        </li><li>Bar
        </li></ul>
        

        Can’t un-see 😉

        1. 2

          found the haskell programmer (:

          1. 2

            I’ll take that as a compliment 🤣

            I actually use this ^ form because it’s much easier for cut & paste, re-ordering, and so on.

            1. 2

              This is why I favour Python enforced indentation.

              Because there is always one person, with one weird style, and they’ll have a defence for it…

              Congrats. Today, it’s you.

              1. 3

                I don’t format HTML files (etc.) like that.

                But when I’m dealing with an HTML list in (e.g.) a JavaDoc comment, then sometimes I will use an approach like this, because it dramatically helps the readability of the (e.g.) Java file. This is important because that file’s text is going to be read and used by someone who is not currently in “HTML mode”. Having to parse HTML in one’s head – just to read a comment – is a significant attack on the reader’s senses. It’s also one of the reasons why we made the early decision to use markdown in Ecstasy comments, instead of using raw HTML.

                At any rate: Context matters.

                There’s always one person wanting to make themselves feel better by lobbing insults, instead of just taking the time to think through ideas that are foreign to them and from which they could (occasionally) gain the advantage of a new point of view.

                Congrats. Today, it’s you.

                (Don’t feel too badly; it’s usually me doing the exact same thing 😢 despite being committed to stopping this habit.)

                1. 2

                  It really wasn’t meant to be an insult!

                  You gave a good, legit reason for that style. I accept it. It’s a valid point. However, it is, shall we say, quite unlike more conventional layout styles. I’m sure you’d agree, or else why post it?

                  And that’s the thing. Everyone’s style has good reasons for it for them. But people want different things.

                  At my last job, we had a fixed style for XML and a tool to reformat code like that, but my preferred editor occasionally reformatted it all itself, on a whim. In general it was a massive PITA.

                  So I think the only solution is to either enforce it by making it significant, so everyone must do it the same or break it, or have the tools do it automatically for you. Or both, of course.

                  1. 2

                    Thanks for the thoughtful response. I’m a stickler for keeping things clean, because I know how expensive it is to have random (even competing) standards being arbitrarily enforced by different members of a team.

                    1. 1

                      I get it and I agree.

                      I suppose one could also make an argument in favour of cleaner, simpler formats for that reason.

                      When oXygen reformatted my XML and I didn’t notice and pushed it, the diffs were unreadable. It’s hard to back that out, partly because I’m not a programmer, I’m a writer, and to me, Git is basically black magic.

                      Xkcd applies: https://xkcd.com/1597/

                      Because, in part, XML is complex and thus indenting it is complex and so you need small diffs to be readable at all.

                      Whereas in ADoc, indentation is significant only in places like code blocks, so you avoid it elsewhere, and it’s much less noisy, which means it’s much more readable.

        2. 2
          <ul><li
          >Foo</li><li
          >Bar</li
          ></ul>
          

          I have done this in actual code.

      3. 2

        I am actually curious, what are my choices of layout engines if I want to produce PDF documents?

        That is, my understanding is that PDF as a format comes with layout precomputed (input has elements with absolute positions). What are my choices for something which I can feed into a layout engine and get that to compute absolute positions. I know of the two:

        • browsers, with HTML+CSS as a decent input language, great layout capabilities, and OK-ish typography.
        • TeX, with TeX as a language that mixes source, semantics, presentation, and computation, OK layout capabilities, and beautiful typography. Is there some way to use TeX’s layout capabilities without TeX-the-language?

        Are there other notable things in this space?

        1. 4

          https://www.princexml.com/ is another interesting option in this space.

        2. 2

          I would suggest roff, which commonly compiles well to ps, pdf, ascii, and HTML.

          1. 2

            Note that the only non-hack HTML rendering is mandoc and that it only consumes -man or -mdoc input. The traditional -mm (“memorandum”), -ms, and -me packages will look nice with groff (the Linux default) but groff can’t really do HTML.

            1. 1

              groff can do HTML ok (grohtml(1)). Also don’t forget that man and mdoc are for documentation, and that all roff implementations are good at producing PDFs (the question).

        3. 1

          There are a few options, none of them cheap, because it’s a problem a lot of businesses have, and requires a fair bit of work many programmers aren’t really qualified to do.

        4. 1

          There is of course prawn, which is what asciidoctor-pdf uses under the hood. But it is OK for layout and has mediocre (worse than browsers) typography. On the other hand, you can use it with asciidoc…

          There are also some docbook (and hence asciidoc) stylesheets to convert to PDF, but I was never impressed with the quality.

          1. 2

            On the other hand, you can use it with asciidoc…

            In practice, what I’ve found works best for me is to make AsciiDoctor output html, which I then style and print to pdf via browser.

      4. 2

        If you’re using TeX to produce PDFs for anything except academia (and even then), I’d go with ConTeXt (here are some example documents). It’s about 10 years younger than LaTeX and feels, oh, about half a century more modern, probably because it was and still is developed by someone who is an actual publisher. Some simple things it can do that are hard in LaTeX:

        • If you want a one-off red title, you can write \section[color=red]{My important section}.
        • If your document contains rubric sections that are always red, define \definehead[rubric][section][color=red] and write \rubric{On the naming of cats}.

        It can also handle more advanced stuff like fonts and font features; typesetting on a grid; custom backgrounds or overlays for pages, paragraphs, or words; shaped paragraphs; and writing some or all of your command logic or document with Lua.

      5. 1

        My text editor, KeenWrite, transforms Markdown to XHTML then passes that XML document to ConTeXt for typesetting against various themes. I made a few tutorials that demonstrate many of KeenWrite’s features:

        https://www.youtube.com/watch?v=8dCui_hHK6U&list=PLB-WIt1cZYLm1MMx2FBG9KWzPIoWZMKu_

    8. 11

      I think a better analogy for variable sigils is noun inflections. Indeed there is a direct parallel between @ and plural markers like English’s -s:

      • They can add information. For example if you say “my apples” you’re conveying the information that you have at least 2 apples. In Raku (and Perl) if you declare a variable with @ you’re saying there are multiple items (which could be 0 or 1 item).

      • When they don’t add information they are still required. For example, in “two apples” the -s adds no new information, but it is still required (“two apple” is ungrammatical). In Raku (and Perl) you always have to refer to a variable with the sigil, even when the name itself is unambiguous. (As the article said it’s possible to create sigilless variables, but the “social norm” is such that you’ll use sigils).

      I find the analogy with $day_job and #hashtag a bit flawed, because the sigils in $day_job and #hashtag definitely add information, while Raku sigils don’t always do that.

      Noun inflections are not strictly necessary, and personally, as someone building a language with the $ sigil (shameless plug for Elvish), I think sigils make much more sense in languages that also support unquoted strings (also known as barewords): you need a sigil to disambiguate echo PATH and echo $PATH.

      Perl has unquoted strings (even though long deprecated) so it needs sigils too. AFAIK Raku doesn’t support unquoted strings, but it inherited the sigil system from Perl (albeit a somewhat rationalized version; for example, indexing @a is @a[1], not $a[1]).

      All that said, if you grow up speaking a language with noun inflections, you’ll find it weird not to have them. Indeed, if you look at recent artificial languages (human languages, not programming languages) invented by Europeans, they tend to have very few inflections except for a plural suffix, despite it being more rational to do away with all inflections (and use a word meaning “multiple” to make the distinction when absolutely necessary). I feel the author and many other Raku and Perl users probably have a similar relationship with sigils. I don’t mean this in a bad way: programming languages don’t have to be 100% rational; they can have quirky cultural attributes that appeal to some but not everyone.

      1. 4

        I like this take. I would say, the justification for $a[0] is that the sigil is for the output, not the input — i.e., you get a scalar output value with $a[0], unlike @a[0, 2, 5] which gives you a list. This is analogous to “my first apple”: it’s derived from “my apples” but the noun inflection shows what is “returned”.

      2. 0

        Reminds me of Lingua Romana Perligata - maybe if you have a properly inflected language then you can abandon sigils…

    9. 3

      The examples in “2. Raku with sigils” are @foo[5], %foo<key>, and &foo(), but the reader seems to get the same context from just foo[5], foo<key>, or foo(). Other languages have no problems with interpolation without sigils. I’m not going to avoid a language because it has these, but they don’t seem to add much value.

      1. 1

        I agree that the sigil in @foo[5] doesn’t communicate anything. The point I was attempting to make is that the sigil in @foo communicates that I can write @foo[5] (or other code that expects to be handed an ordered, numerically indexable collection). Does that make a bit more sense/seem more useful?

        1. 1

          As an author, the IDE knows the context, so the sigil is not providing much added value. As a reader, the text after the variable provides the context, and the IDE has it also.

          1. 4

            Regardless of the merits of sigils or not (I am not personally convinced, but can see why others would hold such an opinion), it is important for conversations about languages to be upfront about whether there is a separation between the programming language and the IDE.

            For many programmers, there is no separation: languages that do not have IDE autocomplete are useless. For some programmers, the IDE is fundamentally divorced from the language. C didn’t need an IDE, but Java absolutely does. Unison takes it one step further: it essentially has a coding assistant running at all times that you ask questions of to figure out what you want to do.

            TL;DR I think the OP is not considering IDEs in the conversation and you are, and this leads to different conclusions neither of which is necessarily off-base.

            1. 1

              Good point. In the walled-off context the author sets, they make their point. But considering sigils in the wider context of programming practice includes IDEs, and I don’t see Raku as an academic research-project language.

          2. 2

            Even if your IDE can show you context on a mouse-over, isn’t there a little value in a compact (just a single character), mandatory (at least in Perl) type indicator which you can always see? The nice thing about sigils is that they’re there whether you use ed, print out your code, or just have your hands off the keyboard and mouse.

    10. 4

      I’m surprised there wasn’t a “technical writing” tag or similar.

      I’m looking at mdoc to generate man pages and an html page for sr, my spaced repetition program. I think having a manual page locally accessible is great, and consistent formatting is even better. Makes me wonder why more projects haven’t adopted this or extended it carefully…!

      1. 1

        My impression is that many projects use the -man macro package because … many other projects use the -man macro package. For example, the Linux man-pages project:

        New manual pages should be marked up using the groff an.tmac package described in man(7). This choice is mainly for consistency: the vast majority of existing Linux manual pages are marked up using these macros.

        Which is a shame, because they suck. They’re harder to write, make for uglier source code, and have worse output.

        The BSD folks have seen the light, though. Take a look at their manual pages for role models. No SCREAMING_CASE_FOR_EVERY_ARGUMENT; synopses that help you use the program; typographically correct ‘’ curly quotes instead of straight quotes or, god forbid, ` and '.

        It’s hard to go back.
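
        For anyone curious what the difference looks like, a minimal mdoc skeleton (using sr from the comment above as the name; the flag is made up):

        .Dd January 1, 2024
        .Dt SR 1
        .Os
        .Sh NAME
        .Nm sr
        .Nd spaced repetition program
        .Sh SYNOPSIS
        .Nm
        .Op Fl v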