1. 1

    I see article titles like this and think “hmm, that’s interesting, I wonder what they’re doing with a bunch of dated Opterons…”, and am then disappointed when I realize that’s not actually what it’s talking about.

    1. 3

      While I don’t think I agree that it’s a good idea, note that the RISC-V ISA also allows division by zero, producing all-bits-set instead of a trap. (See the commentary in section 6.2 here for their rationale.)

      1. 2

        “We considered raising exceptions on integer divide by zero, with these exceptions causing a trap in most execution environments. However, this would be the only arithmetic trap in the standard ISA (floating-point exceptions set flags and write default values, but do not cause traps) and would require language implementors to interact with the execution environment’s trap handlers for this case. Further, where language standards mandate that a divide-by-zero exception must cause an immediate control flow change, only a single branch instruction needs to be added to each divide operation, and this branch instruction can be inserted after the divide and should normally be very predictably not taken, adding little runtime overhead.

        The value of all bits set is returned for both unsigned and signed divide by zero to simplify the divider circuitry. The value of all 1s is both the natural value to return for unsigned divide, representing the largest unsigned number, and also the natural result for simple unsigned divider implementations. Signed division is often implemented using an unsigned division circuit and specifying the same overflow result simplifies the hardware.”

        1. 1

          Does it also set a flag? If so, that seems perfectly reasonable. The value returned shouldn’t matter as long as you can use out-of-band info to check for divide by zero. Although, I suppose you could just check for zero before the divide… Hmm. So I guess really it doesn’t matter at all. It makes sense to have the result be whatever is easiest to implement.

          1. 1

            It appears it just returns the special value with no flags or traps. You have to explicitly check for it on every division. They claim this is to simplify circuitry.
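
            For what it’s worth, the check the spec has in mind is a single compare-and-branch; a minimal C sketch (the function name is mine) that reproduces the RISC-V result:

            ```c
            #include <assert.h>
            #include <stdint.h>
            #include <stdio.h>

            /* Mimic RISC-V semantics: unsigned divide by zero yields all bits set.
               The branch here is the "single branch instruction" the spec mentions,
               and it should almost always be predictably not taken. */
            static uint32_t safe_udiv(uint32_t n, uint32_t d)
            {
                if (d == 0)
                    return UINT32_MAX;  /* all-bits-set */
                return n / d;
            }

            int main(void)
            {
                assert(safe_udiv(10, 2) == 5);
                assert(safe_udiv(10, 0) == UINT32_MAX);
                printf("ok\n");
                return 0;
            }
            ```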

      1. 2

        Honest question (I know basically nothing about this area): are recommendation algorithms ever used for anything other than…selling things on the web?

        1. 5

          They only became “recommendation algorithms” recently. Before sub-specializing, they were used in information retrieval. If you recall, about 10 years ago there was the Netflix Prize to advance the field (and boy did it cause a lot of papers to be written!) by putting a million dollars out there to craft accurate prediction engines out of existing or novel information retrieval algorithms. The team that won (submitting 24 minutes before the 3-year deadline) was BellKor’s Pragmatic Chaos (paper here). At its heart, this super-tuned recommendation algorithm is actually an ensemble of pretty traditional IR algos: Restricted Boltzmann Machines (RBMs), matrix factorization with temporal dynamics, and a bunch of basic predictors brought together with gradient boosted decision trees (GBDT). These are general-purpose techniques that were tuned and blended to produce the recommender (RBMs can just as easily be used in computer vision and credit scoring, for example).

        1. 2
          1. There are scoping blocks where you can tell the compiler not to optimize things. e.g. #pragma optimize or function attributes
          2. This code is Too Clever By Half(TM) but sometimes you need it
          3. You can get this kind of thing even without optimizations - I forget the details now, but I once had to dig through some code for running an experiment that was originally written for PowerPC macs and was now being run on Intel macs and there was a Too Clever By Half(TM) bit that, it turned out, only worked properly on PowerPC.
          1. 2
            1. There are scoping blocks where you can tell the compiler not to optimize things. e.g. #pragma optimize or function attributes

            Though note that as of 7.1, GCC’s documentation describes __attribute__((optimize(...))) as “[to] be used for debugging purposes only…not suitable in production code” (and given that the corresponding pragma is described in terms of the attribute, the same would presumably apply to it as well).
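
            For reference, a minimal sketch of both forms (GCC-specific; the function names are invented for the example):

            ```c
            #include <stdio.h>

            /* The attribute form applies to a single function; per GCC's docs
               it is intended for debugging rather than production use. */
            __attribute__((optimize("O0")))
            static int add_noopt(int a, int b)
            {
                return a + b;
            }

            /* The pragma form covers a region of code instead: */
            #pragma GCC push_options
            #pragma GCC optimize ("O0")
            static int mul_noopt(int a, int b)
            {
                return a * b;
            }
            #pragma GCC pop_options

            int main(void)
            {
                printf("%d %d\n", add_noopt(2, 3), mul_noopt(4, 5));
                return 0;
            }
            ```

            Note that Clang parses but ignores the optimize attribute (with a warning), so none of this should be relied on for portable behaviour.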

          1. 1

            Yes, UB is not completely without any merit. But it is the wrong default.

            bzip2 can compile with -fundefined-overflow or whatever to take advantage of the optimization, exactly like -ffast-math, which is not enabled by default. Everyone else should benefit from getting rid of overflow UB.

            1. 3

              Ah, but as soon as you say there is a default… you suddenly have defined the behaviour and we’re having a different conversation.

              As I keep hammering at people, everything we do has undefined behaviour and it’s a good thing.

              If you rip the cover off your PC and yank a random chip off the motherboard, you’re deep in the realm of Undefined Behaviour, and it would be foolish for the designers to sit there saying, “Hmm, what should the behaviour of this system be if the user yanks that chip out?”

              There are vast swathes of life where the answer is and should be “Don’t do that, and if you do, it’s your problem not mine”.

              1. 1

                Well, the actual default (in the absence of command-line flags explicitly requesting otherwise) is -O0 (for both gcc & clang as far as I know), so…everything is peachy?

              1. 5

                As mentioned in the comments on the post, git’s diff commands have flags (--word-diff, --color-words) that make dealing with line-wrapped prose (e.g. LaTeX, documentation, etc.) vastly more pleasant.

                1. 6

                  Regardless of whether the intended effect is reasonable or not, I don’t think this one-word change would really help. There’s still way too much ambiguity:

                  • is the list of “possible(/permissible)” behaviours supposed to be exhaustive? The words “ranges from” suggest not.
                  • is “ignoring the situation completely with unpredictable results” any different from what optimising compilers currently do in the presence of undefined behaviour? I’d argue it’s not - they ignore the possibility that the undefined behaviour can be triggered, with unpredictable results when it is.

                  There’ve been a number of posts recently trying to argue that C should essentially do away with undefined behaviour; I think it’s time for people to move on and accept that undefined behaviour has been inherent in the standard for some time, and made use of for optimisation by compilers for some (slightly lesser) time, and it’s here to stay.

                  Code which relied on particular integer overflow behaviour, or aliasing pointers with incompatible types, or so on, was never really correct C - it’s just that the compiler once (or at least usually) generated code which did what the code author intended. Now people are getting upset that they can’t use certain techniques they once did. In some cases this isn’t ideal - I’ll grant that there needs to be a simple way in standard C to detect overflow before it happens, and there currently isn’t - but it’s time to accept and move on. Other languages provide the semantics you want, and compiler switches allow for non-standard C with those semantics too; use them, and stop these endless complaints.

                  As for making the overflow behaviour “sane”, the notion that you could add two positive integers and then meaningfully check whether the result was smaller than either was bat-shit crazy to begin with.
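
                  To be fair to the complainers, overflow can be detected before it happens using only well-defined comparisons; a sketch (the helper name is mine):

                  ```c
                  #include <assert.h>
                  #include <limits.h>
                  #include <stdbool.h>
                  #include <stdio.h>

                  /* True iff a + b would overflow a signed int. Every comparison
                     here stays in range, so no undefined behaviour is invoked. */
                  static bool add_overflows(int a, int b)
                  {
                      if (b > 0)
                          return a > INT_MAX - b;
                      if (b < 0)
                          return a < INT_MIN - b;
                      return false;
                  }

                  int main(void)
                  {
                      assert(!add_overflows(1, 2));
                      assert(add_overflows(INT_MAX, 1));
                      assert(add_overflows(INT_MIN, -1));
                      printf("ok\n");
                      return 0;
                  }
                  ```

                  (GCC and Clang also provide __builtin_add_overflow as a non-standard shortcut, which is roughly the “simple way” that’s missing from the standard.)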

                  1. 2

                    As for making the overflow behaviour “sane”, the notion that you could add two positive integers and then meaningfully check whether the result was smaller than either was bat-shit crazy to begin with.

                    Wow, so all that work in finite field theory is bat-shit crazy?

                    The C standard defines “int” as fixed length binary strings representing signed integers and even has a defined constant max value. C ints are not bignums and C does not ask the compiler to detect or prevent overflows or traps or whatever the architecture does. As a consequence of the definition of ints, x+y > x cannot be a theorem. If it was a theorem, it would follow that ints can represent infinite sets of numbers which would be a great trick with a finite number of bits.

                    Can people stop “explaining” that making C into Java would be hard and would lose performance or that C ints are not really integers or other trivia as attempted justifications of these undefined program transformations?

                    1. 1

                      As for making the overflow behaviour “sane”, the notion that you could add two positive integers and then meaningfully check whether the result was smaller than either was bat-shit crazy to begin with.

                      Wow, so all that work in finite field theory is bat-shit crazy?

                      That’s… not what I said.

                      Can people stop “explaining” that making C into Java

                      I’m afraid you’ve crossed your wires again. Nobody was talking about making C into Java.

                      1. 1

                        so from the C standard I can both conclude that sizeof(int) == 4 or 8 and for int i, i+1 > i is a theorem so a test if(i+1 <= i) panic(); is “bat-shit crazy”? Think about it. Testing to see if addition of fixed length ints overflows is not only mathematically sound, but it matches the operation of all the dominant processors - that’s how fixed point 2s complement math works which is why almost all processors incorporate an overflow bit or similar. Ints are not integers.

                        1. 1

                          so from the C standard I can both conclude that sizeof(int) == 4 or 8 and for int i, i+1 > i is a theorem so a test if(i+1 <= i) panic(); is “bat-shit crazy”?

                          The test “if (i + 1 <= i)” doesn’t make sense mathematically because it is always false. If the range of usable values of (i + 1) is limited, then it is always either false or undefined.

                          Testing to see if addition of fixed length ints overflows is not only mathematically sound

                          It’s very definitely not mathematically sound. Limited range ints only have mathematically sound operation within their limited range.

                          1. 1

                            Ints are not mathematical integers. They are not even bignums. Try again.

                            Here is a useful theorem for you: using n bytes of data, it is impossible to represent more than 2^{8*n} distinct values.

                            In mathematics whether i+1 > i is a theorem depends on the mathematical system. For example in the group Z_n, it is definitely not true. Optimization rules that are based on false propositions will generate garbage.

                            “Limited range ints only have mathematically sound operation within their limited range.” - based on what? That’s absolutely not C practice and certainly not required by the C standard. It doesn’t follow mathematical practice and it’s way off as a model of how processors implement arithmetic.

                            1. 1

                              Ints are not mathematical integers.

                              Right, they have a limited range. Within that range, they behave exactly as mathematical integers.

                              Here is a useful theorem for you: using n bytes of data, it is impossible to represent more than 2^{8*n} distinct values.

                              Irrelevant.

                              1. 0

                                Right, they have a limited range. Within that range, they behave exactly as mathematical integers.

                                what do you base that on? And you know they don’t behave like the mathematical integers mod 2^n because? Even though that’s how the processors usually implement them?

                                There is nothing in the C standard that supports such an approach. In fact, if it were correct, then x << 1 would not be meaningful in C.

                                1. 1

                                  what do you base that on?

                                  I base that on how the C language defines operations on them; for +, for example, “The result of the binary + operator is the sum of the operands”. It does not say “… the sum of the operands modulo 2^n”.

                                  And you know they don’t behave like the mathematical integers mod 2^n because?

                                  For unsigned types, the text says: “A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type” (C99 6.2.5). Therefore, the unsigned integers do behave like mathematical integers mod 2^n. However, there is no equivalent text for signed types, and C99 3.4.3 says: “An example of undefined behavior is the behavior on integer overflow”. Specifically, 6.5 says: “If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.” (emphasis added).
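
                                  The unsigned half of that is easy to demonstrate:

                                  ```c
                                  #include <assert.h>
                                  #include <limits.h>
                                  #include <stdio.h>

                                  int main(void)
                                  {
                                      unsigned int u = UINT_MAX;
                                      u = u + 1;  /* defined: reduced modulo UINT_MAX + 1, i.e. wraps to 0 */
                                      assert(u == 0);
                                      printf("%u\n", u);
                                      return 0;
                                  }
                                  ```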

                                  I’m sure you will be able to find the corresponding sections in C11 if you wish.

                                  There is nothing in the C standard that supports such an approach.

                                  Nothing except the text which describes it as such, as reproduced above.

                                  In fact, if it were correct, then x << 1 would not be meaningful in C.

                                  I could only guess how you came to that conclusion, but I don’t care to. This discussion has become too ridiculous for me. Good day.

                                  1. 0

                                    However, there is no equivalent text for signed types, and C99 3.4.3 says: “An example of undefined behavior is the behavior on integer overflow”.

                                    Correct. So it’s possible, if you are a bad engineer and a standards lawyer, to claim that the standard gives permission for the implementation to run Daffy Duck cartoons on overflow. However, nothing in the standard forbids good engineering - for example, it is totally permissible to use the native arithmetic operations of the underlying architecture, and I am 100% sure that was the original intention. There is certainly no requirement for your “mathematics with holes in it” model, and since there is no good engineering excuse for it, QED.

                    2. 1

                      Since compilers already provide options for wrapping integer overflow, I think it’s reasonable to propose making those options the default. After all, people who want undefined integer overflow for optimization or otherwise can use options to do so after the default is changed. (If this sounds inconvenient, the exact same applies to “use options and stop complaints”.) Note that this change is backward compatible. (Although going back won’t be.)

                      Same applies for strict aliasing. I am much more uncertain about other undefined behaviors, for example null dereference, because when there are no pre-existing options such standard change would require (in my opinion quite substantial) additional work for implementations.

                      1. 2

                        Since compilers already provide options for wrapping integer overflow, I think it’s a reasonable to propose to make those options default.

                        Just because compilers offer an option to do something, doesn’t mean that it’s reasonable to make that something a default. (But sure, if the standard gets changed - I doubt it will - so that integer overflow is defined as wrapping, everyone can use compiler flags to get the old behaviour back, and that would be perfectly acceptable).

                        I’d personally much rather have integer overflow trap than wrap. As far as I can see, all that wrapping gives you is an easier way to check for overflow; there are very few cases where it’s useful in its own right. The problem is, people will still forget to check, and then wrapping still gives the wrong result. But there’s no need to change the standard for this: I can already get it with a compiler switch. (edit: note also that trapping on overflow still allows some of the optimisations that defining it as wrapping wouldn’t).

                        I am much more uncertain about other undefined behaviors, for example null dereference

                        It would be easy enough to define that as causing immediate termination; the real question is whether this would be worth doing.

                        Edit: you may also have missed the main point of my comment, which was that this proposed (one-word) change would not actually cause the behaviour to become defined.

                        1. 1

                          I am much more uncertain about other undefined behaviors, for example null dereference

                          It would be easy enough to define that as causing immediate termination

                          Easy enough to define it that way, sure, but I don’t think it would be a popular move in the embedded world – on MMU-less systems where the hardware might not trap it, seems like that would force the compiler to insert runtime checks before every pointer dereference.

                          1. 1

                            Right, hence the note about considering whether it would be worth doing. (I suspect that what a lot of complaints about the standard are missing is just how significant these little optimisations from exploiting the nature of undefined behaviour are, when the code potentially runs on some small embedded device. Really, most of the complaints about the language should be re-directed to the compiler vendors: why do they not choose safer defaults? But then, to be fair, they largely do. I don’t think gcc, for example, enables strict overflow by default: you have to enable optimisation).

                          2. 1

                            It would be easy enough to define that as causing immediate termination; the real question is whether this would be worth doing.

                            Nobody is asking for C implementations to force traps on null dereference. Nobody. So why are you trying to explain it would be hard or have negative consequences?

                            1. 1

                              The statement you quoted had nothing to do with traps on overflow, it was about null pointer dereference. (In fact, I specifically argued for trap-on-overflow. I think you’ve got your wires seriously crossed).

                              1. 1

                                Trap on null dereference is also something that is not necessary. What most people would prefer is that, when reasonable, the action be whatever is characteristic of the environment. So if the OS causes a trap or the architecture explodes on null dereference, that happens; or if the OS (like some versions of UNIX and many embedded systems) has valid memory at 0, the dereference fetches the data. This is not something that compilers have any useful information on, and they should move on.

                                1. 1

                                  My point is that while -fwrapv gives wrapping semantics, there are no similar flags to make null dereference compile to “whatever is characteristic of the environment”. This will need additional implementation work.

                                  1. 1

                                    -fno-delete-null-pointer-checks

                                    1. 1

                                      -fno-delete-null-pointer-checks is not implemented in Clang.

                                      1. 1

                                        Looks like it is on the way. This “optimization” is already a major source of error, but with LTO it’s going to be unspeakable. Consider a parsing library with extensive null checks linked with a buggy front end. Boom.
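
                                        To make the failure mode concrete, here is the shape of defensive code the optimisation can silently gut (a sketch; the function name is mine):

                                        ```c
                                        #include <assert.h>
                                        #include <stddef.h>
                                        #include <stdio.h>

                                        /* Because p is dereferenced first, the compiler may assume
                                           p != NULL and delete the later check entirely (unless
                                           -fno-delete-null-pointer-checks is given). */
                                        static int first_byte_checked(const char *p)
                                        {
                                            int c = (unsigned char)*p;  /* dereference implies p non-null... */
                                            if (p == NULL)              /* ...so this branch can be removed */
                                                return -1;
                                            return c;
                                        }

                                        int main(void)
                                        {
                                            assert(first_byte_checked("x") == 'x');
                                            printf("ok\n");
                                            return 0;
                                        }
                                        ```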

                                        1. 1

                                          I’d appreciate the link to Clang work in progress.

                                            1. 2

                                              Thanks for the link!

                                              Reading the whole thread (including continuation in May) reinforces my impression that this is substantial amount of work. Searching the archive for June and July, it seems the patch author is missing in action and no actual patch was posted.

                      1. 2

                        Neat – I wrote a similar-but-different thing that’s somewhat more geared toward remote-controlling “normal” desktop-like systems, but could perhaps also be useful in the same sorts of scenarios: enthrall

                        1. 3

                          Nice to know about – I’ve got a few python scripts that’ll help clean up a bit.

                          (Note also the <<- here-doc variant, which is similarly convenient when writing shell scripts.)

                          1. 3

                            Here docs/strings are awesome.

                            Another use case I like (beyond ascii art) is embedding test text file contents in a string along with the test itself. When you come back to it later, instead of the indirection of looking up the contents of an external file and cluttering up the file system, you have it right there with the test.

                            1. 5

                              Perl has __DATA__, which is placed at the end of the code in the file; everything that comes after it you can read via the DATA filehandle (I don’t know where Larry stole this idea from).

                              So a file looks like:

                              #!/usr/bin/env perl
                              
                              print "here be dragons!\n"
                              __DATA__
                              {
                                 "id": 42
                              }
                              

                              and you can read that last bit by passing a filehandle along. Pretty nice to embed simple stuff in test files, for example.

                            2. 1

                              Note that the <<- heredoc form only strips leading tabs; indentation made of spaces (or any other whitespace) is left in place.

                            1. 1

                              Omitted from the description of the first point (“Use -- to separate options and arguments”): this happens “for free” if you just use getopt(3).
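
                              A small sketch of how that falls out (option letter and file names invented): getopt(3) consumes the first “--” and stops, so everything after it is an operand even if it starts with a dash.

                              ```c
                              #include <assert.h>
                              #include <stdio.h>
                              #include <string.h>
                              #include <unistd.h>

                              int main(void)
                              {
                                  /* A pretend command line: myprog -o out.txt -- -weird-file */
                                  char *demo_argv[] = {"myprog", "-o", "out.txt", "--", "-weird-file", NULL};
                                  int demo_argc = 5;
                                  const char *out = NULL;
                                  int opt;

                                  while ((opt = getopt(demo_argc, demo_argv, "o:")) != -1) {
                                      if (opt == 'o')
                                          out = optarg;
                                      else
                                          return 1;
                                  }

                                  /* getopt stopped at (and consumed) the "--", so the dash-prefixed
                                     name that follows is treated as an ordinary operand. */
                                  assert(out && strcmp(out, "out.txt") == 0);
                                  for (int i = optind; i < demo_argc; i++)
                                      printf("operand: %s\n", demo_argv[i]);
                                  return 0;
                              }
                              ```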

                              1. 2

                                I would love to know the books he grabbed the terrible C examples from, notably the combine function. If anyone knows, please leave a comment.

                                1. 2

                                  Given the description in the lead-in to combine, I would almost bet it’s from Herbert Schildt’s legendarily-bad “C: The Complete Reference” (see https://www.seebs.net/c/c_tcn4e.html).

                                  1. 2

                                    Turns out it was “Mastering C Pointers”, by Robert J. Traister.

                                1. 12

                                  Unmentioned: Hardware RAID generally has battery backup so writes are completed even if the power fails (or the kernel panics). Software and Fake RAID can’t do that.

                                  1. 4

                                      Or, alternatively, that having that battery backup allows them to (legitimately) acknowledge writes before the data has actually hit the disk platters and hence offer better write performance – i.e. without it they would presumably (hopefully!) wait to acknowledge writes until the data actually had hit the platters, rather than “cheating” and losing data on a power loss.

                                    That said, with the SSDs that are now easily available you can achieve a similar effect using host-side software layers like bcache/dm-cache in writeback mode.

                                    1. 1

                                        Generally that is the best option; it only fails to apply when the drives lie about syncing to disk (some cheap SSD and HDD controller models still do, since it gets better benchmark results).

                                    2. 1

                                      More unmentioned: with a copy-on-write FS like ZFS, you won’t ever get corruption from incomplete writes, because writes are atomic.

                                    1. 3

                                      Another option for those willing to use GNU C extensions is a small inline assembly fragment with the .incbin directive…wrapped up in a convenient macro approximating the usage in the article (the .len member here is admittedly a bit hackish), perhaps something like:

                                      #include <stdio.h>
                                      
                                      #define DATA_FILE(name, file)  \
                                      	extern const char _##name##_data[], _##name##_len[]; \
                                      	asm(".pushsection .rodata\n" \
                                      	    ".local _" #name "_data\n" \
                                      	    "_" #name "_data:\n" \
                                      	    ".incbin \"" file "\"\n" \
                                      	    ".local _" #name "_len\n" \
                                      	    ".set _" #name "_len, . - _" #name "_data\n" \
                                      	    "_" #name "_data_end:\n" \
                                      	    ".byte 0\n" \
                                      	    ".popsection"); \
                                      	static const struct { \
                                      		const char* path; \
                                      		const char* data; \
                                      		size_t len; \
                                      	} name = { \
                                      		.path = file, \
                                      		.data = _##name##_data, \
                                      		.len = (size_t)&_##name##_len, \
                                      	}
                                      
                                      DATA_FILE(foo, "foo"); /* replace with __FILE__ for a cheater-quine... */
                                      
                                      int main(void)
                                      {
                                      	fwrite(foo.data, 1, foo.len, stdout);
                                      	return 0;
                                      }
                                      

                                      (This approach has the disadvantage of introducing an external file dependency that -M flags won’t report, however, so it’ll probably require a manual annotation in your makefile.)

                                      1. 12

                                        Consume input from stdin, produce output to stdout.

                                        This is certainly a good default, though it’s helpful to also offer the option of a -o flag to redirect output to a file opened by the program itself instead of by the shell via stdout redirection. While it’s a small degree of duplication of functionality (which is unfortunate), it makes your program much easier to integrate into makefiles properly.

                                        Without a -o flag:

                                        bar.txt: foo.txt
                                        	myprog < $< > $@
                                        

                                        If myprog fails for whatever reason, this will still create bar.txt, resulting in subsequent make runs happily proceeding with things that depend on it.

                                        In contrast, with a -o flag:

                                        bar.txt: foo.txt
                                        	myprog -o $@ < $<
                                        

                                        This allows myprog to (if written properly) only create and write to its output file once it’s determined that things are looking OK [1], preventing further make runs from spuriously continuing on after a failure somewhere upstream.

                                        (You can work around the lack of -o with a little || { rm -f $@; false; } dance after the stdout-redirected version, but it’s kind of clunky and has the disadvantage of deleting an already-existing output file on failure. This in turn can also be worked around by something like myprog < $< > $@.tmp && mv $@.tmp $@ || { rm -f $@.tmp; false; } but now it’s three times as long as the original command…might be nice if make itself offered some nicer way of solving this problem, but I’m not aware of one.)

                                        [1] Or preferably, write to a tempfile (unlinking it on failure) and rename it into the final output file only when completely finished so as to avoid clobbering or deleting an existing one if it fails partway through.
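
                                        In C, the tempfile-and-rename dance from [1] is only a few lines; a sketch (all names invented):

                                        ```c
                                        #include <stdio.h>

                                        /* Write data to path via a ".tmp" sibling, renaming into place only
                                           on success, so a partial failure never clobbers existing output. */
                                        static int write_output(const char *path, const char *data)
                                        {
                                            char tmp[4096];
                                            FILE *f;

                                            snprintf(tmp, sizeof tmp, "%s.tmp", path);
                                            f = fopen(tmp, "w");
                                            if (!f)
                                                return -1;
                                            if (fputs(data, f) == EOF) {
                                                fclose(f);
                                                remove(tmp);  /* unlink the partial file on failure */
                                                return -1;
                                            }
                                            if (fclose(f) == EOF) {
                                                remove(tmp);
                                                return -1;
                                            }
                                            return rename(tmp, path);  /* atomic replace on POSIX filesystems */
                                        }

                                        int main(void)
                                        {
                                            if (write_output("bar.txt", "hello\n") != 0)
                                                return 1;
                                            printf("ok\n");
                                            return 0;
                                        }
                                        ```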

                                        1. 9

                                          might be nice if make itself offered some nicer way of solving this problem

                                          GNU make has a .DELETE_ON_ERROR special target: https://www.gnu.org/software/make/manual/html_node/Special-Targets.html#index-_002eDELETE_005fON_005fERROR

                                          It’s closer to your first example than the second though.

                                        1. 4

                                          While I enjoyed the post, the comparison at the end is unfair. The author compares ZFS with a 475GB NVMe drive as a cache to XFS without an equivalent cache.

                                          1. 2

                                            The initial comparison with XFS is somewhat unfair as well, though: does XFS provide the same data integrity features that ZFS does? It’s hard, really, to compare file systems with vastly different design centres and feature sets – which feels like the point they’re trying to make, really.

                                            1. 2

                                              Was the comparison looking at data integrity though? I didn’t see any mention of that anywhere – everything I saw was entirely about performance. If you’re doing a performance comparison of two filesystems, comparing them on (very) different hardware doesn’t seem real meaningful.

                                              The author mentions the possibility of comparing against something like bcache (which would then be a zfs vs. xfs+bcache comparison rather than strictly a filesystem comparison), but then handwaves it away as “exotic” and concludes, essentially, that “zfs plus additional fancy hardware and a bunch of manual tuning outperforms xfs”. Well…big deal.

                                              1. 2

                                                At what point do you need to assume integrity as a baseline though? This is a database blog we’re talking here.

                                                Unrelated observation: it’s tragic that most production databases out there aren’t running on ZFS, and it says a lot about the priorities (and less charitably the general ability) of our industry.

                                          1. 6

                                            Knuth vs McIlroy, round 2: now with parallelism.

                                            1. 1

                                              I feel like this isn’t quite fair to Knuth…

                                            1. 6

                                              The title seems misleading. I just quickly skimmed it and it’s pretty dense. So, do correct me if I’m wrong.

                                              It looks like it’s an O(N x M) algorithm (pre-processing) followed by an O(N) algorithm (actual sorting). Then, they say the O(N) part takes O(N x L) in the worst case. So, a two-step sorting algorithm that delivers O(N x M) + (O(N) or O(N x L)) performance?

                                              1. 6

                                                The other thought that has been rolling around in the back of my head is that the test inputs they use are generated according to nicely behaved, friendly probability distributions.

                                                I suspect that you could generate pathological inputs that would cause the neural network first step to fail to get the input “almost sorted” enough for the second step to work. That would invalidate the theoretical complexity claim they make, and then the question becomes, in practice, how hard is it to generate pathological inputs and how likely is it that a real-world input would be pathological?

                                                1. 5

                                                  I figured it was an elaborate prank to disguise a lookup table…

                                                  1. 4

                                                    They claim M and L are both constants. This is the part of the claim I find dubious… I suspect that as problem sizes grow it might turn out these aren’t actually constants.

                                                    They also don’t seem to include model training in the complexity, apparently because they use the same size training set for every problem size. This also might not be a valid assumption if input is big enough.

                                                    They did problem sizes from 10^3 to 10^7. The log2 difference is only about 10. If they were a little conservative in selecting their “constants”, their algorithm would work even if the “constants” were logarithmic.

                                                    1. 3

                                                      They claim M and L are both constants. This is the part of the claim I find dubious… I suspect that as problem sizes grow it might turn out these aren’t actually constants.

                                                      The thing that made me wonder is that they said one of the constants could change the results of their analysis if its size changed. That made me default to assuming it wasn’t constant so much as constant for this instance of the method. The next set of problems might require that number to change. Given the online nature of these algorithms, it might even have to change over time while doing the same job. I don’t think we can know yet.

                                                      It was interesting work, though.

                                                      1. 2

                                                        I agree with you that it’s interesting work. It just feels like they really wanted their paper to stand out, and felt like doing sorting with a neural network wasn’t an exciting enough title, so they made a really big claim (that being, an O(N) sorting algorithm). The problem is that the claim they made has a specific, rigorous meaning and I don’t think they did the analysis to PROVE the claim is true (although it might still be true anyway).

                                                  1. 5

                                                    I always thought it interesting how the GNU tools have this split-brain between emacs and vi keybindings. I know that readline (by default) uses emacs, but I believe you’re able to make it use vi-like keybindings instead (or maybe that’s just a feature of Bash?). The two tools I use daily that use vi keybindings are man and less. And then you can see GNU’s influence with info because they use emacs bindings.

                                                    All in all, it’s a pain and I’m a vim guy so beyond single-line editing on the command line, emacs bindings are completely foreign to me. As a result info pages are almost useless to me because I have no idea how to correctly navigate them. ¯\_(ツ)_/¯

                                                    1. 12

                                                      I believe you’re able to make it use vi-like keybindings instead (or maybe that’s just a feature of Bash?)

                                                      That’s actually mandated by POSIX, as part of the definition of set for the shell. There was originally a proposal to have an “emacs” mode as well, but as the POSIX rationale document states:

                                                      In early proposals, the KornShell-derived emacs mode of command line editing was included, even though the emacs editor itself was not. The community of emacs proponents was adamant that the full emacs editor not be standardized because they were concerned that an attempt to standardize this very powerful environment would encourage vendors to ship strictly conforming versions lacking the extensibility required by the community.

                                                      Gotta love Emacs users ;)
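For reference, the POSIX-mandated way to turn on vi-style line editing is a single `set` option (this works in bash, ksh, zsh, and other POSIX-conforming shells, interactively or in a startup file):

```shell
# Enable the vi editing mode that POSIX requires of the shell's `set`.
set -o vi
echo "vi mode enabled"
```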

                                                      1. 2

                                                        The original FSF crew were all (most?) emacsphiles, according to Brian Fox.

                                                        Source: many discussions with Brian Fox.

                                                        1. 1

                                                          Aw, man, Brian Fox. What’s up with him these days? I wish I could have been there in the early days of the FSF.

                                                        2. 2

                                                          …I believe you’re able to make it use vi-like keybindings instead…

                                                          In ~/.inputrc:

                                                          set editing-mode vi
                                                          set keymap vi
                                                          

                                                          As a result info pages are almost useless to me because I have no idea how to correctly navigate them.

                                                          That’s why I install pinfo on every Linux machine I use. It’s still not vi-like bindings, but it’s lynx-like bindings, which are at least easy to learn.

                                                          1. 1

                                                            The two tools I use daily that use vi keybindings are man and less.

                                                            If you’re on some reasonably “normal”-ish Linux distro and haven’t gone out of your way to configure things otherwise, man is most likely just displaying its output via less, so those are kind of one and the same as far as keybindings go – and less actually isn’t a GNU program.

                                                            1. 1

                                                              FWIW, less supports both bindings.

                                                              1. 0

                                                                info pages

                                                                Pet peeve. They’re info manuals not pages. Manpages are called that because individually each was supposed to be but a single, one-page cheat sheet of the full Unix manual.

                                                                The whole point of TeXinfo was to generate full manuals all at once, and in multiple formats, with an index, a table of contents, chapters, menus, and hyperlinks. If you don’t like the text-based info reader, there is HTML and PDF output as well. Use those!

                                                                … but I know you’ll tell me next, if it’s not in a text-based terminal, you don’t want to read it. In that case, just read the raw .info[.gz] files. They’re plain text files with a few ASCII control characters.

                                                              1. 1

                                                                On a related note, I was recently searching for a way to generate ASCII art diagrams from graphviz dot source and was pleased to discover Graph-Easy: http://search.cpan.org/~tels/Graph-Easy/bin/graph-easy

                                                                It of course can’t realistically render all the numerous features of the graphviz language in ASCII, but it does a pretty respectable job (good enough for my purposes, at least).
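If anyone wants to try it, a typical invocation looks something like this (assuming `graph-easy` from CPAN is installed and on your PATH; `--from` and `--as` are from its documentation):

```shell
# Feed graphviz dot source on stdin and ask for an ASCII rendering.
echo 'digraph { a -> b -> c }' | graph-easy --from=graphviz --as=ascii
```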

                                                                1. 1

                                                                  Why oh why did they have to call it “bionic”…were they trying to increase general nomenclature confusion?

                                                                  https://en.wikipedia.org/wiki/Bionic_%28software%29

                                                                  1. 4

                                                                    All the names that are actual words are already taken by someone somewhere. Naming something with an actual word these days is a guaranteed collision.

                                                                    1. 1

                                                                      See also Apple’s A11 Bionic processor (the one that’s in the iPhone X)

                                                                      1. -1

                                                                        Increasing general nomenclature confusion is kind of all the Ubuntu release code-names are good for, yes.