1. 38
  1. 4

    Reminded me of the Mastering Emacs writeup - something like the other side of the same coin.

    1. 2

      I have a special “.PHONY: phony” rule that allows me to write:

      clean: phony
          rm -rf ./output
      

      Instead of the usual:

      .PHONY: clean
      clean:
          rm -rf ./output
      

      Note that this trick can slow down huge Makefiles.
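
      For reference, here’s a minimal sketch of the whole setup; the empty phony rule is my assumption about how the trick is wired up, since the declaration itself isn’t shown above:

      .PHONY: phony
      phony: ;
      
      # any target that lists 'phony' as a prerequisite is now always re-run,
      # even if a file with the target's name happens to exist
      clean: phony
          rm -rf ./output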

      I didn’t know that phony targets were inherited; how does this work?

      Also, if you’re already using GNU extensions, you might like to replace

      FIGURES = $(shell find . -name '*.svg')
      

      with

      FIGURES != find . -name '*.svg'
      
      1. 3

        My understanding is that PHONY rules are like any other rule; Make just skips the check for whether the target file exists. You can already depend on a file that doesn’t exist and never will, for example:

        clean:
                rm -rf ./output
        

        Now make clean will run the rule as you might expect. The catch is that someone could create a file called “clean”, and then your recipe won’t run. This is what PHONY solves: even if a file “clean” exists, Make will pretend it doesn’t.
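
        A quick way to see the failure mode (a hypothetical session; the exact message depends on your make version):

        $ touch clean
        $ make clean
        make: 'clean' is up to date.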

        From there, you can also depend on a rule that depends on a file that will never exist. For example, a clean-all rule could reuse the clean rule as follows:

        clean:
                rm -rf ./output
        
        clean-all: clean
                rm -rf ./other
        

        This is all that the .PHONY: phony rule is doing. It almost acts as if the phony status were inherited, but that’s just a consequence of how Make handles transitive rules (if a sub-dependency doesn’t exist, Make re-runs the whole chain of rules above it).

        The part I find interesting is that they say it slows down larger Makefiles, which I wouldn’t expect to be the case, at least not significantly.

        Cheers for the != thing! I hadn’t seen that one before but it seems very useful.

      2. 2

        virtualedit is a nice trick! I wish I’d known about that a long time ago. It makes ASCII art and tables much easier.
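
        For anyone who hasn’t met it, it’s a built-in option rather than a plugin; a minimal sketch (enabling it for all modes is just one choice, see :h 'virtualedit'):

        " allow the cursor to sit where there is no actual text, which makes
        " drawing boxes and aligning columns much easier
        set virtualedit=all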

        1. 1

          Sorry if I’m stepping out of the main topic here, but I’m just curious: did you make the ASCII art by hand? If so, then I must say I’m really impressed; it requires a lot of patience and determination. If you have any of it close to hand, I’d like to see it.

          1. 1

            Nothing I’ve done has ever been impressive enough to share outside of its original context, as most of it was for practical purposes.

        2. 1

          If you don’t mind my asking, what solutions do people here have for matching quote marks and other punctuation minutiae in vim? This bit me back when I wrote a sci-fi novel with vim and Pandoc.

          1. 1

            That looks nice, and with Markdown the spellchecking in vim becomes a lot easier than with LaTeX. The last time I wrote a longer text in LaTeX with vim, the spellchecking was not so nice. Things may have improved in the meantime, though.

            1. 1

              What was your problem? Was it spellchecking in general that was the problem, or was it trying to spellcheck LaTeX keywords?

              1. 1

                the latter, but it has been a long time since I tried

                1. 2

                  That sounds like a plugin issue: the filetype syntax should mark prose as contains=@Spell and keywords as contains=@NoSpell; see :h spell-syntax.
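
                  A sketch of what that looks like in a syntax file (the group names and patterns here are made up for illustration; :h spell-syntax has the real rules):

                  " prose inside brace groups gets spell-checked
                  syntax region myProse start=/{/ end=/}/ contains=@Spell
                  " backslash commands are explicitly excluded from spell-checking
                  syntax match myMacro /\\[a-zA-Z]\+/ contains=@NoSpell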

                  1. 2

                    I think I had a similar issue with word count in Emacs counting LaTeX tokens. I write humanities essays, not scientific ones, so I kept the editor but got around the problem by writing in Markdown instead.

                  2. 1

                    The problem is that LaTeX is a Turing-complete language and so you can’t tell if a word in the source will actually appear in the output without compiling the entire document (which Vim’s spellcheck does not do!). For example, consider something like:

                    \figure[caption={My shiny figure}]{somefig.pdf}
                    

                    The spell checker should check the string ‘My shiny figure’, but not ‘somefig.pdf’. I think this example works in vim because there is some hard-coded knowledge of figure, but if you wrap it in another macro, that breaks. For most of my books, I used macros like this:

                    \clisting{someexample.c}{1}{42}{An example of a thing}
                    

                    This would insert lines 1-42 of someexample.c with a caption looking something like this: “Figure 5: An example of a thing [from someexample.c]”. The spell checker just tried to check all of these words, so I’d end up with red highlighting on someexample.c.

                    1. 1

                      Absolutely. LaTeX is a great piece of software, but regardless of the technical reasons, it’s a shame that it doesn’t work the way people want or expect. I never really liked real-time spellchecking anyway, so I tend to stick my finished document through aspell or something and rely on human proofreaders. Obviously this is overkill for a lot of things. I wonder if it would be feasible to have an extension that did go through and compile the file to show spelling corrections in the right place? I presume so, but I doubt it’d be elegant or efficient.
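
                      (For what it’s worth, aspell has a TeX filter mode that skips most command names, which makes the batch approach a bit less painful; the filename here is hypothetical:)

                      aspell --mode=tex check novel.tex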

                      1. 2

                        I don’t really write using LaTeX, I write with a set of semantic macros that happen to be implemented in LaTeX (I also wrote a tool to translate them to XHTML for ePub a while ago). That makes it a bit more tractable for editors.

                        I keep hoping SILE will gain more traction. I met Simon at FOSDEM a few years ago, and he’s done exactly what I wanted to do if I ever had a spare year or so: he’s gone back to all of the Knuth papers about typesetting in TeX and implemented the algorithms that Knuth said he wanted to implement but couldn’t because of CPU constraints. For example, TeX uses a really nice dynamic programming algorithm to lay out words in a paragraph, but uses a simple greedy algorithm for laying out paragraphs on a page, because Knuth calculated that he’d need over 1MiB of RAM to do this in TeX for something the size of one of his books. SILE runs on systems where 1MiB of RAM is considered a tiny amount, so it can happily use that approach. It’s also implemented in Lua, whereas most of TeX is implemented in a horrible environment where Knuth looked at a Turing machine and thought ‘yup, that’s the abstract machine programmers want’, so SILE ends up being a lot faster while still making it possible to hook into any part of the system and replace bits.

                        SILE has many nice features, but the most relevant is that it completely decouples the input markup language from the rest of the pipeline. If you want to handle custom annotations, you write specific hooks for them in Lua, but you can write XML, TeX-style markup, or any other markup language that can encode those annotations. This eliminates a lot of the problem editors have with TeX: the interleaving of program and markup.

                        1. 1

                          I’d forgotten about SILE after reading quite a lot about it, and using it a little, a year or so ago. I do like the project, especially its drive to update TeX to use modern features. It looks like it does a decent job at a lot of things, so I might look into using it more frequently, just to be another person who can testify for it (as I said above, I don’t have particularly complicated typesetting needs).