Threads for player1537

  1. 2

    This is pretty cool! I just wish it included some photos of it in operation. It’s a shame it has to rely on ngrok but at the same time I understand the trade-off.

    1. 6

      There’s a video in the Tweet link.

      1. 1

        For completeness (since I didn’t see a direct link to the tweet in the article):

    1. 2

      Some of the examples are compressed and end up with something like this shell


      in order to keep the post under 140 characters.

      It would be nice if, as a user, one were presented with both the uncompressed code and the compressed code.

      Apart from that, very interesting; similar to Shadertoy.

      1. 2

        I don’t know if this has changed in the last 20 minutes, but if you go to the new beta UI ( ) then there is a toggle to switch between the compressed and uncompressed versions.

        1. 1

          Thanks, you’re right, it is in the beta; I only checked the default site.

      1. 3

        This was absolutely fascinating to me, from beginning to end. It’s also very neat to see that the author is the one who developed the Newer SMB game(s) that I played with my family years ago.

        I think it’s interesting to see how a successful game company actually structures its code and how much of that code gets reused for later games. Perhaps most surprising is that, while the naming and functional conventions appear misguided or archaic (why still use _c as a suffix for all classes? why have the d prefix at all?), they clearly didn’t get in the developers’ way very much.

        Based on all the details in this post, I wonder if it’d be possible or useful to create a standalone game engine like the one in NSMB.

        To me, one of the conventions I wouldn’t have thought of is the separate lists/queues for things that need to be deleted, created, executed, or drawn; and moreover, that these lists might need to keep around things that still need to be acted on. I would have thought that if something got added to the “to delete” list, it would just be immediately deleted and removed from the list, rather than being asked to delete itself, with deletion postponed until the next tick if necessary. I wonder how much of that decision stems from (I’m assuming) a technical decision to avoid multithreading in favor of a form of cooperative multitasking (which is common for games that need to keep their core game loop within a certain number of milliseconds to be ready to output the next frame).

        1. 4

          These are useful examples, i.e. 2 failed attempts and the right one.

          I have found that quoting and evaluation is sort of a “missing topic” in programming education. I think I got exposed to it through Lisp, but then it took a while for my brain to transfer that knowledge to strings, Python, shell, and C. It’s very important for security, i.e. understanding SQL injection, HTML injection (XSS), and shell injection.

          For example this post has all sorts of quoting/evaluation errors, like manipulating shell code with sed and then piping directly to sh:

          I have used that pattern in the past, but I’ve moved away from it in favor of xargs, and I never put it in a shell script.
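          To make the fragility concrete, here is a hedged sketch (hypothetical filenames, not the post’s actual example) of how generating shell code with sed and piping it to sh goes wrong:

          ```shell
          # Hypothetical illustration: textually building 'rm <name>' commands
          # and evaluating them with sh breaks on any filename with a space.
          cd "$(mktemp -d)"
          touch 'plain.txt' 'has space.txt'
          ls | sed 's/^/rm /' | sh 2>/dev/null
          ls  # 'has space.txt' survives: sh parsed 'rm has space.txt' as two arguments
          ```

          The pipeline looks fine until the data contains a metacharacter, which is exactly the class of data-dependent bug being described.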

          I responded to it here:

          Fun fact: bash ALMOST does its quoting correctly with printf %q or ${x@Q} and the “not quite inverse” printf %b to unquote.

          But it doesn’t work if you have a newline. It sometimes emits the $'\n' strings, but doesn’t understand them. These are the most general type of string (e.g. POSIX single quoted strings can’t contain single quotes).

          So the fact that bash doesn’t do this correctly is more evidence that even the authors of languages are confused about quoting and evaluation.
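          A quick sketch of the %q/%b mismatch (bash assumed):

          ```shell
          # printf %q quotes a newline using $'\n' syntax, but printf %b only
          # expands backslash escapes; it doesn't understand the $'...' wrapper.
          s=$'one\ntwo'
          printf -v quoted '%q' "$s"        # quoted is now: $'one\ntwo'
          roundtrip=$(printf '%b' "$quoted")
          [ "$roundtrip" = "$s" ] || echo 'not an inverse'
          # eval-ing the quoted form does recover the original, so the quoting
          # direction works; only the %b "unquote" direction falls short:
          eval "orig=$quoted"
          [ "$orig" = "$s" ] && echo 'eval recovers it'
          ```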

          Oil has QSN instead:

          I think that some people wondered why Oil even has QSN at all! It is so you can quote and unquote correctly 100% of the time. You don’t have to worry about data-dependent bugs, like when your strings contain spaces, newlines, single quotes, double quotes, or backslashes.

          It’s just Rust string literals, which are a cleaned up version of C string literals. Most people understand 'foo\n' but not necessarily


          (the way to write a newline in POSIX shell)

          That is, there is a trick to concatenate \' in POSIX shell, but it doesn’t have the property of fitting on a single line.

          I think what would be useful is to have a post on the relationship between “quoting and evaluation” and say JSON serialization and deserialization. They are kind of the same thing, except the former is for code, and the latter is for data. It’s not an accident that JSON was derived from the syntax of JavaScript, etc.

          1. 5

            Actually a really practical example of where this comes up is SSH quoting:



            So the problem here is to serialize an argv array as a string, and SSH does it naively, by concatenating the arguments separated by spaces! This leads to the problem where your arguments with special characters get mangled!


            Top comment:

            I think the fact that SSH’s wire protocol can only take a single string as a command is a huge, unrecognized design flaw.
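            A local sketch of the flaw, simulating the remote shell with sh -c (the function name is made up):

            ```shell
            # ssh flattens argv to one string, roughly by joining with spaces,
            # and the remote shell re-splits it. Simulate that join locally:
            remote_cmd() { sh -c "$*"; }   # "$*" joins the arguments with spaces
            cd "$(mktemp -d)"
            remote_cmd touch 'my file'
            ls  # 'file' and 'my': the space-containing argument was split in two
            ```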

            1. 2

              Indeed, I’ve also noted this design flaw of SSH (and that it’s concatenating the wrong way) in my how to do things safely in Bash guide. So not entirely unrecognised, for what it’s worth.

            2. 2

              After reading your xargs post, I think I’m of the exact opposite opinion: I only use find -exec and haven’t touched xargs in years. I understand xargs enough to know that it needs to be treated carefully depending on what data is being passed around, and it doesn’t work well with my idea of iteratively building up commands. Consider starting from find . -type f, where it simply prints out the names; moving on to xargs requires changes to both the find command and to xargs: find . -type f -print0 | xargs -0 rm. Of course, in this trivial example, it would just be better as find . -type f -exec rm {} + (for symmetry with \; I usually write it as \+).

              Instead, I’ve taken to using a strategy where I go straight from find back into the shell. The pattern is kind of obtuse, admittedly, but there’s never a case where the filenames get passed around through a pipe and where delimiters have to be considered. The simple example above would be:

              $ find . -type f -exec bash -c 'rm "$@"' '<bash -c>' {} \+

              It’s a bit of a mouthful, but it then lets you use any shell features inside of the bash -c command, which I prefer because I already think in terms of shell expansions and commands. I use this a lot when I need to rename files with a weird convention. For example, I’ve used this before to convert a folder full of world.2017-01-01.converted.bin files into world/converted/2017-01-01.bin files:

              $ mkdir -p world/converted
              $ find . -type f -name 'world.*.converted.bin' \
              >     -exec bash -c 'for src; do dst=${src#./}; dst=${dst%.converted.bin}.bin; dst=${dst#world.}; dst=world/converted/$dst; mv "$src" "$dst"; done' '<bash -c rename>' {} \+

              The '<bash -c>' or '<bash -c rename>' argument is needed because it sets Bash’s argv[0] which is shown in process listings. Without it, the first argument gets lost.

              At this level of effort, I think it could be better to just use shell completely and make use of shopt -s globstar. I think it would look like this:

              $ shopt -s globstar
              $ for src in **/world.*.converted.bin; do dst=$src; dst=${dst%.converted.bin}.bin; dst=${dst#world.}; dst=world/converted/$dst; mv "$src" "$dst"; done

              But then you lose both: all of the extra features within find and being able to more easily build up the command iteratively. Plus, my brain prefers to go straight to find when I need to recursively go through directories, and globstar is more of an afterthought.

              Side note: I realized that it could be somewhat straightforward to write a “find to bash” (or “find to posix sh”) converter to remove any find dependency altogether, something like the following:

              $ find2bash . -type f -name 'world.*.converted.bin' -mmin -10 -exec echo Removing {} now... \;
              #!/usr/bin/env bash
              tempdir=$(mktemp -d)
              printf -v escaped 'rm -r %q' "${tempdir:?}"
              trap "${escaped:?}" EXIT
              # find -mmin -10
              # I think this has to be GNU's touch
              touch --date="-10 minutes" "${tempdir:?}/-mmin -10"
              shopt -s globstar
              for arg in ./**; do  # find .
                  # find -type f
                  [ -f "${arg:?}" ] || continue
                  # find -name world.*.converted.bin
                  case "${arg:?}" in
                  (*/world.*.converted.bin) ;;
                  (*) continue;;
                  esac
                  # find -mmin -10
                  [ "${arg:?}" -nt "${tempdir:?}/-mmin -10" ] || continue
                  # find -exec echo Removing {} now... ;
                  echo Removing "$arg" now...
              done


              1. 1

                That’s definitely a valid way of doing it and I will concede that find -exec \+ doesn’t have the gotcha of newlines in filenames, which xargs -d $'\n' does.

                However my responses:

                1. xargs -d $'\n' composes with other tools like grep and shuf:
                2. It’s nice to preview and separate the two issues: what to iterate on, and what to do. It’s basically like Ruby / Rust iteration vs. Python.
                3. xargs -P is huge; can’t do this with find
                4. find is its own language which I think is annoying. It has globs, regexes, and printf. I’d rather just use shell globs, regex, and printf.
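                A sketch of points 1 and 3 together (GNU xargs assumed; throwaway files):

                ```shell
                # Filter the iteration list with grep before acting on it, then
                # fan the work out to parallel processes with -P.
                cd "$(mktemp -d)"
                touch a.log b.log c.txt
                find . -type f |
                    grep '\.log$' |
                    xargs -d $'\n' -n 1 -P 4 gzip
                ls  # a.log.gz, b.log.gz, c.txt
                ```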

                Although your -exec bash idiom is very similar to the $0 dispatch pattern I mention. I use xargs to “shell back in”, and you are using find to “shell back in”.

                Discussed here btw:


            1. 1

              Small world! I just learned this trick as well.

              To add another bit to it: if you use &regexp<Enter> (or &!regexp<Enter>) to filter things, another &<Enter> (or equivalently &!<Enter>) will clear out the filter.

              The only problem I’ve found with this (that I haven’t been able to resolve yet) is that less can get locked up in a really large file when filtering. The usual Ctrl-C or mashing of q doesn’t seem to do anything. So that’s the only caveat, and where grep is still useful: large files.

              1. 16

                vipe is pretty useful. Can do things like:

                ls | vipe | wc -l

                To manipulate intermediate results with your $EDITOR. I’ve re-implemented this tool in Haskell:


                1. 1

                  That’s funny, I thought it was based upon the moreutils author. I guess vipe predates Joey’s desire to write everything in Haskell?

                1. 1

                  (last week:

                  $WORK: Managed to get a lot done on the paper during the week and then almost none during the weekend. I think I needed a break from working on it because I kept finding myself getting distracted on things that aren’t important (e.g. I tried to write my own mapping library to work around problems I found in Leaflet, but now I’m going to throw that code out because it’s probably got more problems of its own).

                  I start my new internship today and have a lot of on-boarding stuff to do (training, paperwork, etc). I’m hoping that I can start working on some of the work I’ll be doing, if I can get onto all of their systems I’ll need. So this week I’m just adjusting to all the new things and trying not to look too bad.

                  $HOME: Re-made that salad dressing from last week and it was even better. Sometime this week, we’ll make it again, this time writing down the measurements we use. I think it’s also possible we’ll finish watching Bones and can switch to another series. We’re missing the final/most recent seasons of 4 shows now. Some friends and I just finished our playthrough of Divinity: Original Sin 2 and we loved it. We’re looking to find another game like it to play.

                  1. 1

                    I’ve spent a long time finalizing my home directory. I use a variation of the FHS for all my local files, though it took a while to really appreciate and make the most out of it.

                    Every time I reinstall my OS, I go through the same process of removing all the Documents, Pictures, Downloads, etc directories and create the ones I prefer. Most of them stay gone, but Desktop and Downloads often come back. The biggest perpetrators are Gnome and Chrome, respectively, but this is solved with a few settings that I always have to look up.

                    ~/bin/ - for any scripts I write and use, and especially for wrapper scripts to fix things I don't like
                    ~/mnt/ - for `sudo mount /dev/sdb ~/mnt/`
                    ~/opt/ - for software and tools that might need to be compiled (like ~/opt/dwm, ~/opt/Write, ~/opt/yEd)
                    ~/share/ - for writings or drawings that I create (like ~/share/documents/*.svg, produced by Write)
                    ~/src/ - for any project I work on (every directory here is a git repo)
                    ~/src/*_paper/ - for research papers I'm working on
                    ~/src/tmp/ - for any projects that I don't intend to commit to git (like re-clones of my work projects for trying new things)
                    ~/tmp/ - for any files I download
                    ~/tmp/*/ - for any completely throwaway projects (e.g. ffmpeg scripts for making hour long sleep videos)
                    ~/etc/ - for any configuration files for tools I use

                    I want to use the $XDG_CONFIG_HOME and $XDG_DATA_HOME directories but I never spend enough time making sure it all works, so I don’t really trust them to work right. I also just don’t like the idea of hiding config files in a hidden directory.

                    One thing I used to do that I no longer do is to keep a very organized ~/src/ directory. I had a script to clone every project into an exact path and then update symlinks to be easier to use. The trouble I ran into is that it was a little too regimented and didn’t make it easy to write throwaway code because I always had to create a GitHub repository when I wanted to work on a project.

                    ~/src/$owner/$repo - for any GitHub projects cloned
                    ~/src/$owner/$repo - for any BitBucket projects cloned
                    ~/src/$repo -> ~/src/$owner/$repo - symlinks for easier access to the full path

                    I also try to have a small home directory as most of my work is on limited drives, so I need a solution to managing large datasets. The trick I use there is to make heavy use of symlinks to a data drive. For example, a project ~/src/foo would have ~/src/foo/data pointing to /mnt/mylargedrive/foo/data and ~/src/foo/gen would point to /mnt/mylargedrive/foo/gen. Then I can document the data source and formats in a /mnt/mylargedrive/foo/README and everything stays nice and clean.
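                    A sketch of that layout with stand-in paths (a temp directory plays the role of both $HOME and the data drive here):

                    ```shell
                    home=$(mktemp -d)                  # stand-in for $HOME
                    big=$home/mnt/mylargedrive/foo     # stand-in for the data drive
                    mkdir -p "$big/data" "$big/gen" "$home/src/foo"
                    ln -s "$big/data" "$home/src/foo/data"
                    ln -s "$big/gen"  "$home/src/foo/gen"
                    ls -l "$home/src/foo"  # data and gen are symlinks onto the large drive
                    ```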

                    1. 2

                      (last week:

                      $WORK: I got a ton of work done this past weekend on the code for a paper we’re writing. I got to learn and practice a lot of OpenGL stuff and made a ton of progress.

                      I’m hoping to get a couple more features done and continue the writing process so it can be done before I start my internship next week.

                      I’m also very excited about the direction our work is going and think that it’s going to be a great foundation for my thesis. For the past few years, I haven’t really known what I wanted to do for my thesis, so this is the first time I’ve really felt like I get it and that has been nice.

                      $HOME: I made more of those coffee bean style cookies and this batch turned out more like I was wanting. They’re a little dryer and crunchier which is what I was trying for. My wife and I also made some Asian salad dressing like we used to get from one of our favorite restaurants. We had been trying to make it for a while, and this batch turned out the best so far, so I’d like to write the recipe down. I didn’t start the #100DaysToOffload writing like I wanted, but that’s okay, maybe this week.

                      1. 1

                        (last week:

                        $WORK: Finished the paper! I’m very pleased with how it turned out, especially that it marks the end of an 18 month dry spell of writing and working. A lot of my moodiness with being unmotivated was resolved by having consistent work to do.

                        Now onto the next one, and then taking a writing break to focus on code by working with something entirely different. Today is about collecting some baseline results to compare against and writing a short form of the paper for a class. The rest of the week, I’ll be continuing to get enough data together to write the full paper.

                        $HOME: Now that that paper is done, I’m feeling really motivated to start Kev’s #100DaysToOffload challenge. I think I’ll be writing some about software and cooking mostly. This weekend, I made little coffee bean shaped cookies but I had to tweak the recipe a lot, so I want to consolidate my changes and also start making large batches to share with family/friends.

                        1. 2

                          (8 weeks ago… a lot longer than I remembered

                          $WORK: It’s paper deadline time again! We’re submitting one paper on Thursday, so this week is all about wrapping that up. Then the next day, I have a final project presentation for a class before doing the last sprint on another paper due early May. After that, I start my virtual summer internship.

                          Quarantine has been getting to me more lately, so I’ve been resolving to try to have more of a schedule and get more time outside in the sun. I’m finding it pretty hard to stay motivated sometimes, but having this work schedule with the papers helps a lot, surprisingly.

                          $HOME: I had a lovely anniversary weekend with my wife, and I really enjoyed having time completely away from work, so I think I’ll keep that up. That said, I’ve been reading a professor’s blog and am feeling very motivated to do more writing on my blog and I’d love to start that up again. I felt very accomplished after finishing the last post I wrote up.

                          Other than that, it’s just going to be more crime drama bingeathons and relaxing.

                          1. 1

                            I try to write a few blog posts here and there at . I tend to follow the style of “let the code do the talking” without much exposition. For example, I just finished a post on the creation of compute-heavy scientific microservices, which summarizes a lot of the things I’ve learned about embedding C code into Python servers, mostly in terms of how the code is written. My old blog includes a few more posts in that style.

                            1. 2

                              I have a special “.PHONY: phony” rule, that allows me to write:

                              clean: phony
                                  rm -rf ./output

                              Instead of the usual:

                              .PHONY: clean
                              clean:
                                  rm -rf ./output

                              Note that this trick can slow down huge Makefiles.

                              I didn’t know that phony status is inherited. Or how does this work?

                              Also, if you’re already using GNU extensions, you might like to replace

                              FIGURES = $(shell find . -name '*.svg')

                              with

                              FIGURES != find . -name '*.svg'
                              1. 3

                                My understanding is that PHONY rules are like any other rule, it just skips the check for whether the file exists. You can already depend on a file that doesn’t and will never exist, for example:

                                clean: phony
                                        rm -rf ./output

                                Now make clean will run the rule like you might expect. The catch is that someone could create a file called “clean” and now your script won’t run. This is what PHONY solves: even if a file “clean” exists, it’ll pretend like it doesn’t.

                                From there, you can also depend on a rule that depends on a file that will never exist. For example, a clean-all rule could reuse the clean rule as follows:

                                clean: phony
                                        rm -rf ./output
                                clean-all: clean
                                        rm -rf ./other

                                This is all that .PHONY: phony rule is doing. It almost acts as if it’s inheriting the phony status, but that’s just a consequence of how Make handles transitive rules (if a sub-dependency doesn’t exist, it’ll re-run the whole chain of rules after that one).
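                                Putting it together, a minimal sketch of the whole trick (GNU Make assumed) might look like:

                                ```make
                                phony:          # no file named 'phony' will ever exist
                                .PHONY: phony

                                clean: phony
                                	rm -rf ./output

                                clean-all: clean
                                	rm -rf ./other
                                ```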

                                The part I find interesting is that they say it slows down larger Makefiles, which I wouldn’t expect to be the case, at least not significantly.

                                Cheers for the != thing! I hadn’t seen that one before but it seems very useful.

                              1. 2

                                (3 weeks ago:

                                $WORK: I’ve got a lot of stuff going on this week. I have a small presentation on containers and their history on Thursday which I have to throw together. In 3 weeks, I have my conference presentation, so I hope to start on that today and get it moving along so that I have enough time to practice and prepare. I also need to work on open sourcing the code that the conference paper/presentation is about, which I haven’t really started on yet.

                                I think I’m going to need to punt on a project I’m excited about for now, so that I’ll have enough time to get everything else done. Because everything is happening all at once this month, I’m going to have to get really good about managing my time. I’ve started journaling via to keep track of that time better and I’m hoping to do some nightly reviews of the journal to see where I’ve lost time. I tend to treat myself poorly during busy months, which I’d like to improve on by making sure to give myself time to decompress and relax. After last year’s busy March month, I needed several months just to decompress, during which I didn’t really do anything, and I’m trying not to let that happen again.

                                $HOME: I’ve been playing with my 3D printer again. I finally have a copy of SolidWorks so I can start 3D modeling stuff. So far, I haven’t printed many useful things, but it’s been really nice to be able to do this again. There’s something really relaxing about taking measurements of things and recreating them in SolidWorks; it’s very systematic and the end result is always nice. Right now, I’m designing a little faux sink for my cat to play in so we don’t have to feel bad about wasting water (it’ll get reused and re-pumped through the system).

                                I want to get my personal server in use again. I didn’t get into blogging again like I’d hoped, and I think that’s because I made the system a little too DIY. I’d like to throw in someone else’s server that supports MicroPub that way I can blog from my phone, instead of having to get out my laptop. At the end of the day, it probably still won’t get much use, but it’s still fun to play around with.

                                I bought a laptop a week or two ago, a ThinkPad T420. I used to own one of those but ended up selling it to a friend, and I’ve been missing it. Unfortunately, the one I have doesn’t have all the nice specs the old one had (an i7, an SSD, maxed out RAM), but it works well enough once it gets going. It’s my main SolidWorks machine, from above. It also has a DVD drive, so I’m thinking about ripping some of the DVDs I own and setting up a plex server, perhaps on my gaming PC attached to the TV.

                                Lastly, a good friend is leaving for Australia for a few weeks, so we’re hanging out tonight before he goes. We’ve been playing through the game Divinity, which has a map/campaign editor that we’re hoping to play with and maybe port some simple One-Off D&D campaigns to, which should be fun.

                                1. 2

                                  (2 weeks ago:

                                  $WORK: I finished revising my paper and submitted it and it’s completely done! I now have about 6 or 7 weeks to make and prepare a presentation for the conference, and also polish up the code for release.

                                  I’m getting some more work done on my other paper about graph rendering. For this week, my focus is to get an E2E MVP built and start evaluating it.

                                  $FUN: My website is passably working now: personal page and blog, all with appropriate tags to each other that can be automatically parsed.

                                  I spent most of the weekend playing around with IndieWeb concepts: MicroPub and MicroSub, specifically. I’m excited for them because I’m hoping to build a “personal indexer” of sorts. I read lots of articles online and I always forget what I’ve read and where. It’d be nice if my browser automatically saved the contents of pages I visit, saving them in either a plain text or microformat way. Further, if someone is interested in the things I read about (or more likely, if I’m interested in what others wrote), they could subscribe to my personal archive and query it too. I suspect that it’d be possible to replace a lot of my Google searches with this, without sending ever more data to Big Google all the time.

                                  1. 2

                                    (last week:

                                    $WORK: Over the weekend, I got some work done on the paper like I wanted, both coding and writing. With the initial draft out of the way, we should be able to start iterating on it and getting some work done. I also finished handing off that other project, so that’s off my plate.

                                    One of my other papers got conditionally accepted with some minor revisions, so that’s my main focus this week: fixing the things our reviewers asked for and polishing for publication, plus starting on the presentation part of it so that I’m prepared for March. I will also need to get the code running again and make sure nothing has bitrotted.

                                    $FUN: I didn’t get around to adding IndieWeb stuff to my website, but maybe I’ll find a few minutes here or there this week to at least get IndieAuth working. I solved my problem of “hard to access my self-hosted apps if they’re running from my network” the easy way: just run it on another network.

                                    For my website/blog posts, I’m currently trying to decide between using lots of HTML things like microformats or just using plain text files. On the one hand: if I marked everything up, I could make good use of things like the recipe microformat and keep my recipes machine readable, and also have nice semantic links to other blogs or posts. However, then I have to give up the super easy “just throw text files on the internet” approach that I’ve been liking. The approach I’m leaning towards is to have things like recipes on their own standalone HTML pages and just link to them from my posts.

                                    1. 2

                                      $WORK: Finally handing over a project I’ve been working on for a while so that I can work on something else. The last step is getting AWS instances to create their own Docker Swarm and connect to one another. After that’s done, I get to work on drafting a couple of papers and maybe start writing some code for them.

                                      $FUN: I’m hoping to work on my website and add some IndieWeb/microformats stuff to it, plus finding a publishing workflow I like. I have something that’s mostly working so I can write some blog posts already, but I’d like it to be more streamlined.

                                      I’m also looking into self hosting some services at home. One problem I’ve run into is: if you’re hosting your website on your local network, you can’t view your website at its domain name (easily) because it pulls up the router’s admin page. I’ve tried to fix this with a local DNS entry on my pi-hole, but I wish there was something a little easier, especially since I expect to host many services on the same device, all with different domain names. Right now I have to add an entry manually for each one.

                                      1. 1

                                        A command-line productivity boost I recently added was a commit script to my zsh config:

                                        commit() {
                                          git add -A :/ && git commit -m "$1" && git push
                                        }
                                        Being able to simply type commit 'Summary of changes' instead of

                                        • typing git a
                                        • tapping the up arrow to autocomplete the entire contents of commit script
                                        • tapping the left arrow and then backspace/delete to replace the previous git message with an updated message

                                        has made my life so much easier.

                                        1. 3

                                          git commit -a more or less does the same, although you’ll still need to add new files with git add (arguably a feature or annoyance). The downside of your approach is that it’s harder to write “good” commit messages with more context, since everything will always be on a single line.

                                          1. 3

                                            The downside of your approach is that it’s harder to write “good” commit messages with more context, since everything will always be on a single line.

                                            Agree. Two useful bits of git to share:

                                            1. if you do git commit -m "message" you can append as many more -m’s as you want to add lines to the commit message.

                                            2. additionally (I only learned this recently), you can append -e to open the commit message in your editor. So, git commit -m "commit message" -e will open your editor with “commit message” at the start. It makes it easy to start a commit on the commandline and bail out to your editor to write more.
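                                            The multiple -m behavior can be seen in a throwaway repo (messages made up):

                                            ```shell
                                            # Each -m becomes its own paragraph in the final commit message.
                                            cd "$(mktemp -d)"
                                            git init -q
                                            git -c user.name=demo -c user.email=demo@example.com \
                                                commit --allow-empty -m 'Fix crash on empty input' \
                                                                     -m 'The parser assumed at least one token.'
                                            git log -1 --format=%B
                                            ```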

                                            FWIW, here are similar aliases I use:

                                            c = commit
                                            cm = c -m
                                            ca = c -a
                                            cam = c -am
                                            cmp  = "!f() { git cm \"$@\" && git p; }; f"
                                            camp = "!f() { git cam \"$@\" && git p; }; f"

                                            So, my git camp (commit all with message and push) is similar to u/netopwibby’s commit function, but it’s composable, and additional arguments to git can be provided.

                                            (p is an alias that pushes, automatically setting the upstream if necessary.)
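
                                            For completeness, one common way to write such a p alias (an assumption on my part — the exact alias isn’t shown above — always passing -u, which is harmless when the upstream is already set):

                                            p = "!git push -u origin HEAD"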

                                          2. 3

                                            A nice, tiny improvement to scripts like this is to use the ${var:?error message} pattern. I use it a lot in scripts that need an argument I might forget, where forgetting would break the whole thing, but where the full if [ -z "$var" ]; then printf $'bad\n' >&2; exit 1; fi pattern is overly verbose.

                                            commit() {
                                              git add -A :/ && git commit -m "${1:?need commit message}" && git push
                                            }

                                            $ commit 'Summary of changes'
                                            $ commit
                                            -bash: 1: need commit message

                                            You can also improve the interface just a little bit if you use "$*" instead of just "$1" (at the cost of weird characters sometimes messing things up if you don’t use quotes, like asterisks).

                                            commit() {
                                              git add -A :/ && git commit -m "${*:?need commit message}" && git push
                                            }

                                            $ commit 'Summary of changes'  # works like normal
                                            $ commit Summary of changes  # also works
                                            $ commit
                                            -bash: *: need commit message
                                            $ ls
                                            bar  foo
                                            $ commit Use * instead of 1  # same as commit 'Use bar foo instead of 1'
                                            1. 1

                                              Oh that’s nice, I’ll try it.

                                          1. 12

                                            My favourite awk one-liner, which I have memorized, is for extracting the contents between some specific begin and end patterns/fences in multiple files:

                                            awk '/begin-regex/{p=1}; p; /end-regex/{p=0}'

                                            (I think you don’t need curly braces, but not sure now) For example, contents of all init functions in all Go files:

                                            awk '/^func init/{p=1}; p; /^}/{p=0}' *.go

                                            By swapping the expressions between semicolons, you can make it include or exclude the fence lines in the output.

                                            Explanation: variable p is 0 (i.e. false) by default. Default action for a condition with no action is to print current line, so the sole p in the middle expands to equivalent of: p!=0 {print}.
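
                                            As a toy illustration of that swapping (with made-up BEGIN/END fences): moving the end-pattern action before the bare p, and the begin-pattern action after it, excludes both fence lines from the output.

                                            ```shell
                                            # p is cleared before the print test and set only after it,
                                            # so neither fence line is printed -- only what's between them.
                                            printf 'a\nBEGIN\nx\ny\nEND\nb\n' |
                                              awk '/END/{p=0}; p; /BEGIN/{p=1}'
                                            ```

                                            which prints x and y but neither fence.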

                                            1. 3

                                              I think you don’t need curly braces, but not sure now

                                              Since assignment is an Action, you would have to use the curly brackets to change the value of p.

                                              Unless you want this to go over files, you could also just do

                                              /begin-regex/, /end-regex/

                                              which uses “Pattern Ranges” and doesn’t require the extra auxiliary variable. If you still wanted it to match patterns across multiple files, you’d probably have to use a pipe and concatenate the files beforehand.
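
                                              A toy example of the range form (made-up fences); note that, like the original one-liner, it includes both fence lines in the output:

                                              ```shell
                                              # The range starts on a line matching /BEGIN/ and
                                              # ends on the next line matching /END/, inclusive.
                                              printf 'a\nBEGIN\nx\nEND\nb\n' | awk '/BEGIN/,/END/'
                                              ```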

                                              1. 2

                                                Note that you can also achieve this very concisely with sed, including across multiple files:

                                                $ sed -n '/begin-regex/,/end-regex/p' file1 file2 ...
                                                1. 1

                                                  Can this let me exclude the begin and/or end fence line from the output? Given that the sed language is Turing-complete, I suppose there is some way, question is how easy? In “my” awk expression, this is a matter of changing the order of the sub-expressions.

                                                  1. 1

                                                    I haven’t looked this up recently, but I believe the canonical sed version is:

                                                    sed -ne '/start/,/end/ { /start\|end/ !p }'

                                                    I thought there was another solution by abusing labels and gotos but I can’t seem to get one written.
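
                                                    A quick sanity check with toy fences (GNU sed, since \| alternation is a GNU extension) suggests it does exclude both fence lines:

                                                    ```shell
                                                    # The range selects start..end inclusive; the inner
                                                    # negated address then drops the fence lines themselves.
                                                    printf 'a\nstart\nx\nend\nb\n' |
                                                      sed -ne '/start/,/end/ { /start\|end/ !p }'
                                                    ```

                                                    which prints only x.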

                                              2. 3

                                                Thanks, this tip led me to refactor some of my awk code :) I like that pattern too, but I forgot that the booleans can go on the left too. I always think of Awk as “patterns and actions”, but it’s really “predicates and actions”.

                                                Context: as part of hollowing out the Python interpreter, I use this Awk snippet to extract the C struct initializers in the CPython codebase.


                                                Then I parse that very limited language of {"string", var1, var2, ...} with a small recursive descent parser.

                                                Overall I’ve found good use for awk in 5-10 places over the last couple years, i.e. NOT the typical “field extraction” use case of { print $5 }.

                                                Now that I know awk more, I like it more than I used to. On the other hand, I’ve also written a few hundred lines of Make from scratch in the last couple of years, and I think less of it than I used to :-/ Make always seems to give me half-working and slow solutions, whereas Awk gives you a precise and fast solution.

                                                style patch:

                                                (I also didn’t know about the implicit { print }, but that seems way too obscure for me :) )

                                                1. 2

                                                  (I think you don’t need curly braces, but not sure now)

                                                  you can do this without any explicit action statements:

                                                  awk -- '/start-reg/ && (p = 1) && 0; p; /end-reg/ && (p = 0)'

                                                  only because assignment is an expression that evaluates to the value being assigned. But mind the order of operations: assignment binds more loosely than &&, hence the parentheses around each assignment.

                                                1. 3

                                                  Although it certainly wasn’t anywhere near the specs of that computer, this post brings back good memories of using my Eee PC netbook (is that really the correct capitalization, Wikipedia?) to run DF while waiting to tutor people in high school. I didn’t have to kill off any extraneous processes to even run it, thankfully, but I did have to live with low framerates. Also, with not being good at the game.

                                                  One of my favorite tricks with that netbook was that I installed a utility that gave me an expanded desktop, so I could have larger windows open and by pushing my mouse along the edges of the screen, I could move to different areas of the screen. Looking it up just now, I think I found the exact tool I used: “Infinite Screen.” This way I could keep my DF window larger than my screen size and still be able to see everything (though not all at once).

                                                  1. 1

                                                    One of my favorite tricks with that netbook was that I installed a utility that gave me an expanded desktop, so I could have larger windows open and by pushing my mouse along the edges of the screen

                                                    When I first installed Linux in the late 1990s it came with FVWM (or maybe this was a feature of XFree86?) with a “virtual desktop” exactly like you described. Of course being used to Windows 95/98 at the time it was the first thing I disabled.

                                                    1. 1

                                                      I’ll have to look that utility up. I think I accidentally triggered similar behaviour in Xorg many years back, but I’ve never been able to recreate it.

                                                      My laptop screen is 1366x768, which can sometimes be annoying if I want to screenshot things taller than this. My favourite workaround:

                                                      xrandr --output yourscreenname --scale 2x2

                                                      2732x1536 is where it’s at, take that 1920x1080.

                                                      1. 2

                                                        Oh, to be clear, that’s a Windows-only utility. I should have said that originally.

                                                        The only place I’ve seen that resolution is a Thinkpad T420 with the larger screen mod. Any chance that’s the same for you? I’m lucky in that, when I had that laptop and needed to take large screenshots, I either had a big external monitor or a friend with a retina display.

                                                        That xrandr trick is neat! I just tried it out and visually it works well, but my mouse didn’t want to move into the bottom right corner of the screen, interestingly enough.

                                                        1. 1

                                                          Ooh, I might have to try this on Windows.

                                                          My favourite Windows change is to install bb4win, so I get proper virtual desktops. I have not tried it on Win10, but presumably it’s a reliable way of keeping cortana away as well. I used to use Asuite for a program-launching menu, because the one built into BB is pretty crappy.

                                                          The only place I’ve seen that resolution is a Thinkpad T420 with the larger screen mod.

                                                          1366x768? It’s the standard for almost all cheap laptops these days. Mine is an 11.6” (small) so it’s not a bad option here, but sadly it’s also used for bigger-screened laptops.

                                                          my mouse didn’t want to move into the bottom right corner of the screen

                                                          Eep, that’s a bug.

                                                          Are you on Nvidia by any chance? If so: you probably have to stick to using Nvidia’s utility for screen management, not xrandr. I remember having this problem in my days of ATI Catalyst.

                                                        2. 1

                                                          I may be wrong, but wasn’t this the default behaviour on X11 (maybe XFree86 before Xorg took its place) at some point?

                                                          I recall often accidentally running into this problem many times when my graphics card drivers weren’t properly installed yet, and my card only reported basic 640x480 or 800x600 support through the BIOS.