1. 50

  2. 10

    I would’ve used grep -v, but I think I prefer this.

    1. 4

      If you want to do more complex filtering in a somewhat less-like viewer, and are OK with going back to grep etc. for that, then you might also like my up project. Though I kind of wonder now if I would have written it had I known about the & trick in less… it would certainly have been harder to justify the effort.

      1. 4

        I’ve seen up before and like the idea a lot. It is a general solution to the problem that & fixes. Very much in line with “UNIX philosophy”.

        Edit: Inspired by up, I whipped together a vi-based alternative called vp. It’s not quite as convenient as up, but it does have all the benefits of a full-fledged text editor.

        1. 1

          Lol, haven’t tried it, but quickly skimmed it. Fun idea :) and small enough that I’m starting to wonder if it could be compressed even further, into some vim one-liner to put in aliases :D Also, IIUC, before appending, would it make sense for g to also delete all # lines?

          1. 2

            Yeah, g does delete the # lines, I just forgot to mention it :-)

      2. 2

        I had no idea about &, let alone &! — this is so handy. Thanks!

        1. 3

          A shameless plug if you want more less tips: https://blog.einval.eu/2018/09/less-can-do-more/

        2. 2

          Nice, but it’s a lot less useful if we can’t exclude more than one regexp at a time. (When I tried repeating the command, the new filter replaced the previous one, i.e. it brought the initially filtered-out lines back again.)

          1. 7

            I’ll add it to the text of the article (it is shown in the gif): you can use a regex. So to filter out a and b, you’d type

            &!a|b
            

            (terms separated by a pipe)
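
            For reference, here’s a rough non-interactive equivalent using grep’s extended regexes (the sample file is made up for the example; this is grep, not less itself):

            ```shell
            # Sample input standing in for whatever you'd be paging
            printf 'error: disk full\nwarn: slow query\ninfo: started\n' > /tmp/demo.log

            # Same idea as typing &!error|warn inside less:
            # drop every line matching error OR warn
            grep -vE 'error|warn' /tmp/demo.log   # -> info: started
            ```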

          2. 1

            Small world! I just learned this trick as well.

            To add another bit to it: if you use &regexp<Enter> (or &!regexp<Enter>) to filter things, typing another &<Enter> (or, equivalently, &!<Enter>) will clear the filter.

            The only problem I’ve found with this (and haven’t been able to resolve yet) is that less can get locked up when filtering a really large file. The usual Ctrl-C or qqqqqqqs don’t seem to do anything. So that’s the one caveat, and where grep is still useful: large files.
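
            One workaround sketch for that case (file names made up for the example): do the heavy filtering in grep first, so less only has to page the already-reduced output.

            ```shell
            # Stand-in for a really large log file
            printf 'DEBUG tick\nERROR boom\nDEBUG tock\n' > /tmp/huge.log

            # Do the filtering in grep first...
            grep -v 'DEBUG' /tmp/huge.log   # -> ERROR boom

            # ...then page the (much smaller) result:
            #   grep -v 'DEBUG' /tmp/huge.log | less
            ```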

            1. 1

              Related to this, I’ve always wanted a bit of a “notebook” version of this kind of operation, where I’m previewing stuff, typing exclusions into a box, getting real-time feedback, but also persisting it all in some way for easy editing.

              Also stuff like quick grouping, etc. I know some people’s solution to this is stuffing it into a SQL db, but I bet you could get far with some sort of in-memory thing with a nice CUI.
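
              A crude sketch of just the “persisted, editable filters” part with plain grep (all file names here are invented for the example): keep the exclusions in a file you can edit, and re-apply them whenever you like.

              ```shell
              # Exclusion patterns, one per line; edit this file to tweak the view
              printf 'heartbeat\ncache hit\n' > /tmp/exclusions.txt

              # Stand-in log
              printf 'heartbeat ok\nrequest failed\ncache hit 42\n' > /tmp/app.log

              # Re-apply the saved filters; -f reads the patterns from the file
              grep -v -f /tmp/exclusions.txt /tmp/app.log   # -> request failed
              ```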

              1. 2

                Maybe a combination of script(1) and up (from @akavel) is what you are looking for?

                1. 1

                  It sounds like you might want a full-blown IPython notebook.

                2. 1

                  This is neat! I’ve seen some comments about wanting to be able to do this interactively in a way that permits saving and editing. I think kakoune does a pretty good job there.

                  If you open the text of the log in kakoune, you can type % to select the whole buffer, and then pass it through an external filter like ripgrep: |rg -v <term>. The result is that your selection (the whole file) is replaced by the output of your filter (only the lines that survive it). The great thing is that you can repeat this operation any number of times without your regular expression needing to get any longer, and it’s trivial to save the current contents of the buffer to a new file (:w name) or an in-memory buffer (%y:e -scratch<ret>p) at any point in the process.