1.  

    I think one of those was a Xen vulnerability that affected some of the AWS data centers as well.

    1.  

      you are moving logic that should reside in individual tests into shared global setup code. Now each test pays the penalty of having to be aware of this global setup logic, whether it cares about the exit code or not. Again, as I mentioned in my previous comment, what if one of the tests needs to issue an HTTP call?

      I find it’s usually a small price to pay, just an if.

      I don’t understand your HTTP call question? You mean the code being tested calls an HTTP API or some such?
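      For what it’s worth, here is a minimal sketch of what I mean by “just an if” in a table-driven test. Everything in it – the run() function and the field names – is made up for illustration, not from any real codebase:

          package mypkg

          import (
              "strings"
              "testing"
          )

          // run is a stand-in for whatever is being tested: it returns
          // some output and an exit code.
          func run(s string) (string, int) {
              if s == "" {
                  return "", 1
              }
              return strings.ToUpper(s), 0
          }

          func TestRun(t *testing.T) {
              tests := []struct {
                  name     string
                  in       string
                  want     string
                  wantExit int // only some cases care about this
              }{
                  {"ok", "foo", "FOO", 0},
                  {"bad input", "", "", 1},
              }

              for _, tt := range tests {
                  t.Run(tt.name, func(t *testing.T) {
                      got, exit := run(tt.in)
                      if got != tt.want {
                          t.Errorf("run(%q) = %q; want %q", tt.in, got, tt.want)
                      }
                      if tt.wantExit != 0 && exit != tt.wantExit { // "just an if"
                          t.Errorf("exit = %d; want %d", exit, tt.wantExit)
                      }
                  })
              }
          }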

      A very common use case where table-driven tests won’t work is testing API handlers that talk to databases. You need to provide a DB transaction to each test, while every API handler test has completely different test logic depending on the behavior of the API. Now imagine having to create a temporary DB with a predefined schema before injecting a new transaction into each test.

      I’ve done this several times, and it works fairly well. Basically the same method as above, see for example this and this.
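      Roughly, the shape looks like this; openTestDB is a hypothetical stand-in for whatever creates the temporary DB with the schema, and the handler-specific logic lives in the closures:

          import (
              "database/sql"
              "testing"
          )

          func TestHandlers(t *testing.T) {
              db := openTestDB(t) // hypothetical: temp DB with predefined schema

              tests := []struct {
                  name string
                  fn   func(t *testing.T, tx *sql.Tx)
              }{
                  {"create user", func(t *testing.T, tx *sql.Tx) { /* handler-specific checks */ }},
                  {"delete user", func(t *testing.T, tx *sql.Tx) { /* handler-specific checks */ }},
              }

              for _, tt := range tests {
                  t.Run(tt.name, func(t *testing.T) {
                      tx, err := db.Begin()
                      if err != nil {
                          t.Fatal(err)
                      }
                      defer tx.Rollback() // each test gets its own throwaway transaction
                      tt.fn(t, tx)
                  })
              }
          }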

      Another example is adding a goroutine leak check at the end of all tests; you can’t possibly fit all your tests into one giant table, can you?

      You can just add a function call for this, right? If you want it for literally every test you can define TestMain().
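      For example, a minimal sketch using the third-party go.uber.org/goleak package:

          package mypkg

          import (
              "testing"

              "go.uber.org/goleak"
          )

          func TestMain(m *testing.M) {
              // Runs all tests in the package via m.Run(), then fails
              // the run if any unexpected goroutines are still alive.
              goleak.VerifyTestMain(m)
          }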

      Again, all of the above is achievable through the built-in testing package with custom code. But if you do that, you will end up with gtest.

      I’m not so sure about that; the “plumbing” for this kind of test is a lot smaller, and typically doesn’t use reflection.

      1.  

        My real gripe is: why is Slack better than this? Why all the brouhaha about how Slack was changing the way we work and saving the world, when people have been doing this for years? The comments on his blog/Reddit go further, with several people describing similar solutions that have been in use for years.

        1.  

          If you want to go beyond parsing, reimplementing Tcl is a manageable task. Look at picol: 500 lines of C.

          1.  

            You can throw around “rules” to no end to “win” your point. So esr wrote this book 20 years ago; so what? In spite of what esr may think of himself, he is not, in fact, the ultimate arbiter of truth regarding these things, or anything else.

            No matter what’s in esr’s book, if you go on GitHub and download CLI tools like this, then a lot of them won’t read from stdin, or will only read from stdin when very explicitly told to do so. That’s the reality of the world today. You can either ignore that reality and pretend that esr’s 20-year-old book describes reality – which it doesn’t and probably never has – or deal with reality as it is. I choose the latter.

            1.  

              So we ended up keeping things as is.

              See https://github.com/cdr/slog/pull/73#issuecomment-564806085 regarding the opencensus coupling and https://github.com/cdr/slog/issues/70 regarding the levels.

              1.  

                Have you considered a machine-readable output format? According to the examples, the output is a table/relation, so CSV is probably the simplest usable format. Or you could generate XML or Recfiles – they are also simple to generate. XML can be read from almost any language/platform, and Recfiles can be processed with Recutils or with Relational pipes (which can also read CSV and XML).
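                As a sketch, emitting CSV from Go’s standard library is only a few lines; the column names here are made up, not the tool’s actual schema:

                    package main

                    import (
                        "encoding/csv"
                        "log"
                        "os"
                    )

                    func main() {
                        w := csv.NewWriter(os.Stdout)
                        // Header row, then one record per result; quoting and
                        // escaping are handled by encoding/csv.
                        w.Write([]string{"codepoint", "name", "category"})
                        w.Write([]string{"U+1F30D", "EARTH GLOBE EUROPE-AFRICA", "So"})
                        w.Flush()
                        if err := w.Error(); err != nil {
                            log.Fatal(err)
                        }
                    }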

                1.  

                  many programs don’t read from stdin (or read on stdin only when explicitly told to with -)

                  No. Actually, reading from STDIN and writing to STDOUT (i.e. the program can be used as a filter) is the recommended and standard Unix way. See the Rule of Composition:

                  Design programs to be connected with other programs.

                  Unix tradition strongly encourages writing programs that read and write simple, textual, stream-oriented, device-independent formats. Under classic Unix, as many programs as possible are written as simple filters, which take a simple text stream on input and process it into another simple text stream on output.

                  I am not so strict about textuality (binary streams and formats are sometimes better), but I fully support this filter approach (read from STDIN and write to STDOUT) as the default behavior.

                  However, this is general advice that should apply to most (CLI) software – your program might be exceptional and might require a different approach…
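                  The classic filter shape is tiny; here is a minimal Go sketch (the uppercasing transform is just a placeholder):

                      package main

                      import (
                          "bufio"
                          "fmt"
                          "os"
                          "strings"
                      )

                      func main() {
                          // Read lines from STDIN, transform, write to STDOUT.
                          sc := bufio.NewScanner(os.Stdin)
                          for sc.Scan() {
                              fmt.Println(strings.ToUpper(sc.Text()))
                          }
                          if err := sc.Err(); err != nil {
                              fmt.Fprintln(os.Stderr, "read:", err)
                              os.Exit(1)
                          }
                      }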

                  1.  

                    Hmm. I might actually use Basalt for something, someday. Thanks for sharing it! I make a fair number of diagrams in my work, but I’ve found it’s usually easier to just draw them freehand. There are definitely exceptions, though.

                    Just skimming through your post, I’m a little surprised to see no mention of TeX, from which TikZ/PGF has sprung. There are quite a few TeX packages and macros which use the constraint-solving machinery of the system to good effect, e.g. asymptote and other more domain-specific packages. I can understand why TeX may not fit your use cases, but it might be worth looking through CTAN for ideas anyway. I think having interactive sliders and instant feedback is very helpful, since (in my experience) modeling the visual optimization problem in sufficient detail is often more work than it’s really worth. Even if you’re going for a fully automated solution eventually, having a ‘visual REPL’ is very helpful for development.

                    As for iterative ‘force-directed’ (effectively gradient descent) graph layout, it seems to be a very common feature of web-based graph rendering libraries nowadays. GraphViz of course does constraint solving of some sort, but I’ve never looked into the details.

                    1.  

                      Wonderful. Feature requests if they don’t already exist:

                      1.  

                        HEAVY BLACK HEART is the name of the red heart; it was named before emoji gained color. For older Unicode characters (from before color), “white” means outlined and “black” means filled in.

                        1.  

                          Oh! Thanks. I didn’t notice the formatting was broken. I do know how to use Markdown, I’m just less likely to check the result when I’m on my phone and typing is already hard. :)

                          1.  

                            “Never write *, always write ./*”

                            I prefixed each * with a \.

                            1.  

                              Thank you. Using Twitter for posting long-form content is probably one of the stupidest things to come out of this decade.

                              1.  

                                That’s actually specified in the Unicode CLDR (“Common Locale Data Repository”):

                                $ grep poop en.xml
                                <annotation cp="💩">dung | face | monster | pile of poo | poo | poop</annotation>
                                

                                It contains many useful aliases, for example for the pirate flag:

                                <annotation cp="🏴‍☠️">Jolly Roger | pirate | pirate flag | plunder | treasure</annotation>
                                

                                I just haven’t added support for that.
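                                For the curious, a rough sketch of reading those aliases with Go’s encoding/xml; the struct names are mine, and the real file has more to it (e.g. type="tts" annotations):

                                    package main

                                    import (
                                        "encoding/xml"
                                        "fmt"
                                        "log"
                                        "os"
                                    )

                                    // Minimal mapping of the CLDR annotations file;
                                    // only the parts used here.
                                    type ldml struct {
                                        Annotations []annotation `xml:"annotations>annotation"`
                                    }

                                    type annotation struct {
                                        CP   string `xml:"cp,attr"`
                                        Text string `xml:",chardata"`
                                    }

                                    func main() {
                                        f, err := os.Open("en.xml")
                                        if err != nil {
                                            log.Fatal(err)
                                        }
                                        defer f.Close()

                                        var doc ldml
                                        if err := xml.NewDecoder(f).Decode(&doc); err != nil {
                                            log.Fatal(err)
                                        }
                                        for _, a := range doc.Annotations {
                                            fmt.Printf("%s: %s\n", a.CP, a.Text)
                                        }
                                    }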

                                1.  

                                  FWIW, the built-in “describe-char” function in Emacs is quicker to use, and will bring up a buffer with all of this information and more. I suppose it depends on the use case which is more convenient.

                                  Here’s an example where the font I use in Emacs doesn’t support the glyph:

                                               position: 199390 of 199390 (100%), column: 7
                                              character: 🌍 (displayed as 🌍) (codepoint 127757, #o371415, #x1f30d)
                                                charset: unicode (Unicode (ISO10646))
                                  code point in charset: 0x1F30D
                                                 script: symbol
                                                 syntax: w 	which means: word
                                               category: .:Base
                                               to input: type "C-x 8 RET 1f30d" or "C-x 8 RET EARTH GLOBE EUROPE-AFRICA"
                                            buffer code: #xF0 #x9F #x8C #x8D
                                              file code: #xF0 #x9F #x8C #x8D (encoded by coding system utf-8-unix)
                                                display: no font available
                                  
                                  Character code properties: customize what to show
                                    name: EARTH GLOBE EUROPE-AFRICA
                                    general-category: So (Symbol, Other)
                                    canonical-combining-class: 0 (Spacing, split, enclosing, reordrant, and Tibetan subjoined)
                                    bidi-class: ON (Other Neutrals)
                                    decomposition: (127757) ('🌍')
                                    mirrored: N
                                  

                                  And here’s an example where it does:

                                               position: 4 of 4 (75%), column: 4
                                              character: 😈 (displayed as 😈) (codepoint 128520, #o373010, #x1f608)
                                                charset: unicode (Unicode (ISO10646))
                                  code point in charset: 0x1F608
                                                 script: symbol
                                                 syntax: w 	which means: word
                                               category: .:Base
                                               to input: type "C-x 8 RET 1f608" or "C-x 8 RET SMILING FACE WITH HORNS"
                                            buffer code: #xF0 #x9F #x98 #x88
                                              file code: #xF0 #x9F #x98 #x88 (encoded by coding system utf-8-unix)
                                                display: by this font (glyph code)
                                      xfthb:-VL  -VL Gothic-normal-normal-normal-*-14-*-*-*-*-0-iso10646-1 (#x3EB0)
                                  
                                  Character code properties: customize what to show
                                    name: SMILING FACE WITH HORNS
                                    general-category: So (Symbol, Other)
                                    canonical-combining-class: 0 (Spacing, split, enclosing, reordrant, and Tibetan subjoined)
                                    bidi-class: ON (Other Neutrals)
                                    decomposition: (128520) ('😈')
                                    mirrored: N
                                  
                                    1.  

                                      The searching seems to need some tweaks, though. E.g. looking for a regular smiley, none of “smile”, “smiley”, “happy” give the wanted result, while “face” lists too many. It turns out the right search word is “smiling”, but maybe there should be some form of aliases?

                                      Yeah, adding more search terms is marked as “TODO” in the code. It’s a bit tricky, as it’s very easy to get way too many matches and/or pollute the output with a lot of keywords, which isn’t useful either. This is one reason I worked on a GUI emoji picker based on this code last week, but I had a lot of problems getting GTK to show ZWJ sequences well, so I kind of gave up on that for now; basically I’m running into the limitations of dmenu’s plain text filtering.

                                      I rarely use uni e <search> by the way, but instead use the “emoji-common” groups from dmenu-uni, which reduce the number of emojis to something more manageable (from about 1600 to 200).

                                      I also had trouble with the regular red heart, but that may be of a different kind? [..] How would I find this using search?

                                      Just in case this wasn’t clear – and the documentation should probably make this a bit clearer – the print, search, and identify commands work only on codepoints. They have no concept of multiple codepoints combining to form a single character (or “grapheme”, if you wish). I basically use identify as a “Unicode-aware hexdump -C”.
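                                      To illustrate, a quick Go loop shows that a single displayed emoji can be several codepoints:

                                          package main

                                          import "fmt"

                                          func main() {
                                              // The pirate flag renders as one glyph but is four
                                              // codepoints: black flag, ZWJ, skull and crossbones, VS16.
                                              for _, r := range "🏴‍☠️" {
                                                  fmt.Printf("U+%04X\n", r)
                                              }
                                          }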

                                      At any rate, it shows up with e.g. uni emoji heart, or uni emoji ‘red heart’ for an exact match. It’s a bit hidden in there, because apparently we need hearts in 20 shapes and colours 🤷‍♂️ You have the same when you type :heart: in e.g. WhatsApp, but because the emojis are shown in colour and quite large it’s reasonably obvious. This is again kind of running into the limits of what you can do with this kind of plain text search.

                                      1.  

                                        Thanks so much for this tool, I love having a command-line utility to query the Unicode database!

                                        1.  

                                          The search problem is pretty tough to solve, as some of the Unicode descriptions use a particular English dialect; for instance:

                                          $ uni s poop
                                          no matches
                                          

                                          damn British! :)

                                          One possible solution would be to augment the descriptions with information from another free source, like Wikipedia.