1. 3

    so that’s how WebAssembly becomes the new Flash.

    1. 2

      Depends on what you are comparing it with. Most people who are ingrained with fork/wait or pthread_create/join aren’t complaining.

      1. 8

        A lot of this just seems like going against the grain of the distro when using Docker and wondering why that’s not good.

        1. 19

          Isn’t that how everything goes? People are not satisfied with this distro, so they create another. People can’t be bothered with their distro’s packages, so they create npm, pip, …. People realize language-specific ones are not enough, so they create conda. People think conda is bloated, so they create miniconda. People can’t be bothered with installing anything, so they use Docker. People still need to install things inside Docker, so they choose a distro inside Docker. Ad infinitum.

          1. 5

            People realize language specific ones are not enough, they create conda.

            It is reasonable to want to manage language-specific packages together with other libraries; many language packages rely on various C libraries.

            I think this is mostly a failing of traditional distribution package management. If you still insist on writing a .spec or rules file for e.g. every Rust crate that a developer might use from crates.io [1], you are never going to keep up. Additionally, traditional package managers cannot deal well with installing different versions of a package in parallel. Sure, you can hack around both problems, e.g. by automatically generating .spec files and sticking versions into package names, so that ndarray 0.13.0 is not seen as an upgrade of ndarray 0.12.0. But it’s going to be an ugly hack. And you still cannot pin system libraries to particular versions.

            So, you either have to accept that your package manager cannot deal with language package ecosystems and project-specific version pinning. Or you have to change your package manager so that it is possible to programmatically generate packages and permit multiple parallel versions. While they may not be the final solution, Nix and Guix do not have this problem and can just generate packages from e.g. Cargo metadata and deal with multiple semver-incompatible crate versions without any issues.

            [1] Deliberately not using Python as an example here, because Python packaging has many issues of its own, such as not permitting a single Python interpreter to use multiple versions of a package.

            1. 2

              This is mostly a failing of the Python ecosystem, or of our software ecosystem as a whole. The very idea of wanting a set of “packages the precise set of things you want” (borrowing from another commenter here) is absurd. Users should pin the blame on the developers releasing packages without any sort of backward compatibility. If 10 packages release backward-incompatible “updates”, users get 1024 combinations to choose from. Somehow, people still think it’s a mere package management problem. No.

              1. 8

                Users should pin their blames to those developers releasing packages without any sort of backward compatibility

                No, this is a solved problem. Cargo and npm do work with lots of messy dependencies. It’s the old inflexible package managers that aren’t keeping up. Traditional package managers blame the world for having software not fitting their inflexible model, instead of changing their model to fit how software is actually released.

                Cargo/npm have solved this by:

                1. Using semver. It’s not a guarantee, but it works 99% of the time, and that is way better than letting deps break every time.
                2. Allowing multiple incompatible versions of the same package to coexist. In large dependency graphs, the probability of a conflict approaches 1. Package managers that can’t work around conflicts don’t scale, and end up blaming users for depending on too many packages.
                1. 1

                  Right. Rust and Node solved everything. The rest of the world really can’t keep up. Why don’t we just rewrite everything in Rust and Node? You can have your package link with libA and libB, while libA links with libD.so.0 and libB links with libD.so.1. Wait, the system still has an ancient libc. Right. Those .so files are just relics from the past, and it’s a mystery we are still using them. So inflexible.

                  Cargo/npm have solved this

                  It truly made my day. Thanks. I needed this for the weekend.

                  1. 2

                    .so files are just relic from the past

                    Yes, they are. Soname versions alone don’t help, because package managers also need a whole requires/provides abstraction layer, plus renamed packages with version numbers in the name, and evidently they rarely bother with this. The lack of namespacing in C and global header include paths complicate it further.

                    To this day I’m suffering distros that can’t upgrade libpng, because it made a slightly incompatible change 6 years ago. Meanwhile Cargo and npm go brrrrr.

            2. 1

              The problem is at least partially social, not technical. It’s far easier to start a new project than to join an existing one, because in an existing project you have to prove yourself first. For some projects (say, Debian) there’s also significant bureaucracy involved.

            3. 15

              Docker is irrelevant to the message of the story. They just didn’t have a box with Ubuntu 18.04 around to demonstrate the problems and resorted to a Docker container. A VM or bare metal with 18.04 would have told the same story.

              1. 5

                Could you expand? The general issue is the mismatch between distro release cycles and software development cycles: you need to bootstrap a newer toolchain in most languages. Python has some specific problems others don’t (e.g. no transitive pinning by default, unlike, say, Cargo, so new package releases are more likely to break things), but every unhappy family, etc.
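
                A minimal sketch of what “no transitive pinning by default” means in pip terms (the package names and versions below are illustrative assumptions, not taken from any real project):

                ```shell
                # A lock-style requirements file pins exact versions of everything,
                # including transitive deps - roughly what "pip freeze" would emit:
                cat > requirements.txt <<'EOF'
                requests==2.31.0
                urllib3==2.0.7      # transitive dep of requests, pinned explicitly
                certifi==2023.7.22  # ditto
                EOF
                # "pip install -r requirements.txt" reproduces this exact set later;
                # without such a file, every fresh install re-resolves transitive
                # deps to whatever is newest that day.
                grep -c '==' requirements.txt   # every entry is pinned: prints 3
                ```

                Cargo, by contrast, writes the equivalent Cargo.lock automatically on first build.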

                1. 2

                  If you want specific versions of a tool, then perhaps it would be better to switch to a distro that offers that, or remove the distro from the equation.

                  1. 8

                    What if there is no distro in existence that packages the precise set of things you want? The whole point of language packaging ecosystems is to solve that problem; otherwise the options are “find a way to stick to only what your distro decided to package”, or “make your own distro that system-packages the things you want”, or “make a distro that system-packages every version of everything”.

                    And that’s without getting into the fact that distros historically change popular languages and their packages, sometimes in ways that are not compatible with upstream. For example, Django finally gave in and switched the name of its main management command from django-admin.py to django-admin, in part because some distros had absolute hard-line policies on renaming it to django-admin when they packaged Django, leading to a mismatch between what Django’s documentation said and what the distro had installed.

                    And it’s especially without getting into the fact that many Linux distros ship languages like Python because some of the distro’s own tooling is written in those languages, and you want clean isolation between that and your own application code. Which you can’t get if you go the system-package-only route.

                    So yes, even in Docker, you should be using language packages and the language’s package manager. It’s not that hard to add a pip upgrade line to your Dockerfile and just have it run on image build.
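
                    For instance, a minimal Dockerfile sketch along those lines (the base image, distro packages, and file names here are assumptions for illustration):

                    ```dockerfile
                    FROM ubuntu:18.04
                    RUN apt-get update && apt-get install -y python3-pip
                    COPY requirements.txt .
                    # Use the language's package manager for language packages: upgrade
                    # pip itself on image build, then install the pinned dependencies.
                    RUN python3 -m pip install --upgrade pip && \
                        python3 -m pip install -r requirements.txt
                    ```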

                    1. 2

                      What if there is no distro in existence that packages the precise set of things you want?

                      I feel like this is covered by the second half of the statement:

                      or remove the distro from the equation.

                      That is done - for instance - by using language packages.

                  2. 1

                    no transitive pinning by default

                    This is why I decided to go straight to Poetry. The ride has been bumpy (Python is a chaotic world when it comes to package metadata), but at least now we have a lock file. Rust really hit a home run with Cargo.

                  3. 3

                    Why does it have to be one or the other?

                    I want to be more up to date on Python and its packages than on other things like libc, the kernel version, …

                  1. 6

                    after sifting through a bunch of “No you can’t do this” answers I found a particularly funny looking expression that claimed to be the solution.

                    I know it’s the internet, but actually citing the source benefits everyone.

                    For those who don’t know, here is a possible source: http://rosettacode.org/wiki/Y_combinator#Python

                    1. 5

                      SICP works wonders. The rest I don’t know.

                      SICP: https://github.com/sarabander/sicp

                      1. 5

                        SICP is vastly overrated.

                        1. 4

                          Ok, I’ll bite. Why do you say that?

                          1. 1

                            I understand it doesn’t resonate with everybody, which is why I included it in the section of books good for specific interests.

                        1. 9

                          I’m using email. When I find something interesting, I email it to myself. Then I add tags. Then remarks, comments, updates - all as replies.

                          Unlike apps, which pop up and go away, this method is quite resilient and effective.

                          1. 3

                            I’ve been doing this as well, using Firefox’s experimental Email Tabs to send whole articles. It’s been working out really nicely.

                            1. 2

                              Dang, too bad it doesn’t support other email providers.

                              1. 1

                                I do this also.

                              2. 2

                                This gives you tree structure for free, but it seems difficult to cross-reference. How do you link to other emails?

                                1. 1

                                  I don’t :) I mean I don’t link, but I could. Every email has an id. In Gmail I would use the link from “View original message”, I guess.

                                  Some cross-referencing is offered by the overlapping tags. For example, I have an entry tagged with typography, resource, another with layout, resource … then resource becomes a sort of meta tag that has proven very useful.

                                2. 2

                                  That sounds pretty effective; I’m just curious which email provider you use.

                                  1. 1

                                    Gmail. But tagging and searching are available in perhaps any client / provider.

                                1. 2

                                  It could have been more APL-like, rather than what it is: an AppleScript clone. Ancient Chinese is supposed to be concise.

                                  1. 1

                                    I think that leveraging Hangul would be more APL-like and compact, if each sound “letter” could be combined into one character.

                                    1. 1

                                      Hangul denotes sound by a 2D arrangement of letters. That would introduce a vocabulary of about 10000 code points. There are about 80000 Unicode code points for CJK ideographs. We could put them together with emoticons and hieroglyphs, and make a language with about 100000 Unicode code points.

                                      1. 1

                                        I’ve wondered about this for shortening links (visually, not byte-wise). You can more easily type in hangul and have them turn into what you want, but figuring out how to do it with Chinese, besides memorization, still escapes me. APL does seem to lend itself to memorization, but I think being able to compress logic into variable-width characters while still being able to piece together what something does seems kinda cool.

                                        I mean, take for instance 말 vs something like 零. With the first you could compose functions together in an interesting way, like map ㅁ, some random function or variable ㅏ, reduce ㄹ, and you could remember them because of ligatures and the fact that it’s an alphabet.

                                        Whereas with 零 you can remember the name of each stroke, the stroke order, the meaning of the entire character plus its sound, but at the same time I’m not sure how you’d remember how to type it in without help from your editor. Maybe there is a way to do this that makes sense? I would be the first to try it.

                                        1. 2

                                          I mean, take for instance 말 vs something like 零. With the first you could compose functions together in an interesting way like map ㅁ, some random function or variableㅏ, reduce ㄹ and you could remember them because of ligatures and the fact that it’s an alphabet.

                                          That was the spirit of my previous comment (sorry, I was on my phone). By mixing that with notions from tacit programming and from Raku/Perl (topical variables etc.), you can compress (visually) a lot of information into one representation, within limits.

                                          Using Hangul can be seen as a Scheme-like approach, with a minimal set of operands to combine together, whereas using CJK ideographs can lead to a batteries-included approach, where a single ideograph packs a lot of information. I don’t think using strokes as the minimal unit of composition is a good idea, because the rules for constructing characters would be too complex - even going from radicals to construct a restrained vocabulary would be hard.

                                    2. 1

                                      I had similar ideas: https://twitter.com/porges/status/1159161836203155456 (but this is a hodgepodge, I have no knowledge of classical or modern Chinese)

                                    1. 10

                                      If you replace all the names with weird acronyms it reminds me of troff. XD

                                      1. 5

                                        It sure is a dumbed-down version of troff. I prefer troff macros.

                                      1. 9

                                        If this were written in C, without knowing anything other than the fact that this code compiles correctly, I can tell you that x and y are numeric types, and the result is their sum. I can even make an educated guess about the CPU instructions which will be generated to perform this task.

                                        x could be a pointer and y could be an int. A pointer isn’t formally a numeric type, though on most architectures it kind of is: it stores an address, which is just a numeric index into memory. FWIW, I never fully grokked pointers until I learned assembly.

                                        C hides a lot of details too. That’s what programming languages at a higher level than assembly are all about: abstracting over details.

                                        Regardless of what language you write in, it’s good to be aware of the performance characteristics of your code. And it’s totally possible to cultivate that sort of awareness, even in the face of things like operator overloading. Start by learning some assembly.

                                        1. 7

                                          Not to mention that even if they’re numeric types, “the result is their sum” can still be untrue - IIRC:

                                          • if the types are unsigned, this line could overflow, making the result the sum modulo 2^N;
                                          • even worse, if the types are signed, this line could be undefined behavior, and thus a spooky eldritch Cthulhu action crawling slowly through your app and devouring any logic.

                                          Also, the malloc line does not check the returned value, so:

                                          • in case you run close to an out-of-memory condition, the strcpy and strcat could overwrite memory, also doing spooky eldritch Cthulhu corruption action;
                                          • in case x or y come from your users, they could be maliciously crafted such that strlen could overflow int, leading again to spooky eldritch Cthulhu corruption action, security breach, etc. (I think; I’m not 100% sure some other effect doesn’t accidentally neutralize this - I don’t care enough about C anymore to track precisely what would happen in such a case).
                                          1. 1

                                            in case you run close to an out-of-memory condition, the strcpy and strcat could overwrite memory, also doing spooky eldritch Cthulhu corruption action;

                                            If there is no memory, malloc returns NULL - so this is probably just going to segfault, not corrupt anything.

                                          2. 5

                                            I think this comment, and @akavel’s follow-up, are both missing the point. Yes, it’s more complex than my brief summary affords. However, the action is well-defined and constrained by the specification, and you can learn what it entails, then apply that knowledge to your understanding of all C programs. On the contrary, for languages which have operator overloading, you can never fully understand the behavior of the + operator without potentially consulting project-specific code.

                                            1. 3

                                              I totally get the point of “spooky action at a distance”, and totally agree with it; I learnt of this idea in programming quite some time ago and do look at it similarly since. I also honestly appreciate the look at #define in C as a feature that can be said to exhibit a similar mechanism; I never thought of it like this, and I find it a really cool and interesting take.

                                              Yet, I still find the claims about C’s simplicity that this article makes highly misleading, especially in this context of subtle hidden difficulties. If it at least provided some disclaimer footnote (or whatever other form, given Gemini), I could swallow this. But as shown in the comments, some of the sentences are currently just plain wrong.

                                              Let me phrase this differently: I believe this could be a great article, one I could recommend to others. Currently, however, I see it as, on the whole, an okay-ish article: some good points, some false claims. I don’t plan to recommend it, given the dangerous and unfortunately seductive ideas it conveys (especially to non-expert or would-be C programmers) “piggybacked” onto the main argument (e.g. that “C is easy and simple”). I lament that the article is not better, and I’m painfully aware I need to actually warn some people against naively succumbing to the oversimplifications presented in it - because I’ve seen so many unaware and sincerely gullible people, and I myself was unaware for ohhhh so long.

                                              1. 1

                                                I don’t think the omission is consequential to the point of the article. In fact, it criticizes C for its role in this “spooky action at a distance”. The point is not to explain how C works.

                                                1. 2

                                                  Given that this reply basically rehashes parts of what both of us already wrote above, and that I apparently read some parts of the article differently than you do, I assume neither of us (nor anyone else) will gain anything from further discussion on this matter.

                                              2. 1

                                                What C code seemingly free of function calls does depends on platform as well, for instance see:

                                                https://godbolt.org/z/jsWqzM

                                                Here binary operations on 64-bit operands compiled as 32-bit code may result in function calls. The other example is initializing a structure of a certain size, which may also result in a function call.

                                              3. 1

                                                Right. Try this for fun,

                                                cat <<'.'|cc -x c - && ./a.out && rm a.out
                                                #include<stdio.h>
                                                #define P(T) do{T*x=0;printf("%lx\n",(unsigned long)(x+y));}while(0)
                                                int main(int c,char**v){int y=1;P(char);P(short);P(int);P(long);P(void);return 0;}
                                                .
                                                
                                              1. 2

                                                a history reset is needed when we need to change the on-disk representation for one reason or another. It happened a few times in the past, and did happen again after the current alpha was published. This is, however, very unlikely to ever happen again.

                                                Maybe wait a few years and see if that last sentence comes true? If you ask people to trust your system, the last thing I would want to hear is any mention of “we are moving fast”.

                                                1. 3

                                                  the last thing I would want to hear is any mention of “we are moving fast”.

                                                  The first thing I would like to hear as a user is “we’re transparent about what happened and what didn’t”, which is the reason this is stated very clearly there.

                                                  1. 1

                                                    Sure. Openness is good. Instability is bad. They coexist. A VCS becomes popular only after big projects start to migrate to it, and for that to happen, stability is key.

                                                    1. 1

                                                      Sure.

                                                    2. 1

                                                      Sure; it’s just that “we are moving fast” and “99% of people shouldn’t use this” are the same statement, for systems stuff.

                                                      1. 1

                                                        That’s true, I never said otherwise. But this also means that when we announce we’re ready, we will be ;-)

                                                    1. 6

                                                      From the PDF, these claims are…something?

                                                      Security industry is already raising concerns that proliferation of GoLang, file-less code and Powershell into the world of malware is the most unwelcome development over the recent years

                                                      Later on, the explanation of why Go, .NET, etc. are a security problem is that they’re cross-platform, which allegedly makes them more attractive to attackers because of the idea of write once, run anywhere. Yet most Docker images are packaged for a single platform, from what I know. This feels like they’re reaching for a way to sow fear.

                                                      1. 5

                                                        The proliferation of C is like a plague, transmitting via the hands of our students, viciously infesting all of our machines with Unix.

                                                    1. 11

                                                      from the linked pdf: http://www.schemamania.org/troff/for-the-love-of-troff.pdf

                                                      I challenge the reader of the present document, for example, to find a paragraph that would be better rendered by the paragraph-at-once algorithm instead of the line-at-a-time one that was used.

                                                      Just look at that PDF file. The paragraphs. The interword spacing. Leaving a dangling half line at the top of a page? Hyphenating a word so it can put four letters at the beginning of a line to end the paragraph?

                                                      1. 1

                                                        The author shouldn’t have stated that. On the other hand, another version of troff (less known than groff but actually much better), namely Heirloom troff, https://n-t-roff.github.io/heirloom/doctools.html, does implement Knuth’s algorithm; it also implements features that Knuth discarded (like using invisible inter-letter spaces to decrease the visible extra spaces - you can even achieve constant inter-word spacing!). I use it on a daily basis, and as long as you don’t need equations, it is much more powerful than standard troff in regard to micro-typography.

                                                        Of course, OTF fonts are supported (with their “features”), kerning is easy to adjust (including cross-fonts kerning), etc.

                                                        1. 2

                                                          I want to love troff, but I can’t see any added benefit of troff compared to LaTeX. I put \usepackage{microtype} in every document.

                                                      1. 11

                                                        Aside: please don’t use a fixed-width font with justified spacing. This is the worst of both worlds!

                                                        1. 1

                                                          Linux man page utilities do this by default.

                                                        1. 1

                                                          This is the last one: 4!-4^(4-4)

                                                          Isn’t that equal to 23?

                                                          1. 1

                                                            A correct one would be 4*4+4+4.

                                                            1. 1

                                                              For some reason I didn’t write the pages in their natural order. There are 25 pages, two of them with the result 23. I can’t tell why, but I do remember that I was unhappy with 23. I suppose I may have ended up with two unsatisfying 23s. Which 23 to prefer is then not my decision.

                                                          1. 6

                                                            This makes the mistaken assumption that the reader doesn’t care what the output will look like. While cut and awk mostly do the same thing, they can behave vastly differently; see:

                                                            mattrose@rome ~ % cat cutvawk
                                                            foo  bar
                                                            foo bar
                                                            foo	bar
                                                            
                                                            mattrose@rome ~ % cat cutvawk | awk '{print $2}'
                                                            bar
                                                            bar
                                                            bar
                                                            
                                                            mattrose@rome ~ % cat cutvawk | cut -d ' ' -f 2
                                                            
                                                            bar
                                                            foo	bar
                                                            
                                                            

                                                            Speed tests are fine, but they won’t tell you the right tool to use for any given job.

                                                            1. 10

                                                              The author specifically addresses this problem directly, clearly, and explicitly in their post. As cut doesn’t handle arbitrary spacing, they use tr to clean up the spacing first. Whether that’s squeezing spaces or converting tabs, tr can do it.
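
                                                              Concretely, that cleanup looks something like this (using the same three-line input as the parent example; the `[:blank:]` class covers both spaces and tabs):

                                                              ```shell
                                                              # Squeeze every run of spaces/tabs into a single space,
                                                              # so that cut can safely split on ' ':
                                                              printf 'foo  bar\nfoo bar\nfoo\tbar\n' \
                                                                | tr -s '[:blank:]' ' ' \
                                                                | cut -d ' ' -f 2
                                                              # prints "bar" three times, matching awk's default splitting
                                                              ```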

                                                              This makes the mistaken assumption that the reader doesn’t care what the output will look like.

                                                              This makes the mistaken assumption that the author is a complete idiot. Obviously they care about correct output.

                                                              1. 7

                                                                FreeBSD cut has -w to “Use whitespace (spaces and tabs) as the delimiter. Consecutive spaces and tabs count as one single field separator.”

                                                                It’s had this for years, and it’s something I miss in GNU cut; it’s pretty useful. Maybe I should look at sending a patch.

                                                                1. 5

                                                                  Presumably the GNU project thinks runs of whitespace should be handled by awk; it’s referenced in the info page for cut:

                                                                  Note awk supports more sophisticated field processing, like reordering fields, and handling fields aligned with blank characters. By default awk uses (and discards) runs of blank characters to separate fields, and ignores leading and trailing blanks.

                                                                  [awk invocations snipped]

                                                                  This shows that the perennial discussion about “one thing well” and composability is granular, and not really separable into “GNU just extends everything, *BSD keeps stuff small”, as the FreeBSD version of cut is “extended” so as not to need awk for runs of whitespace.

                                                                  (OpenBSD cut does not have the -w option: https://man.openbsd.org/cut)

                                                                  1. 5

                                                                    Yeah, it’s just a common use case; awk is an entire programming language, and resorting to it for this kind of thing is a bit of overkill. Practicality beats “do one thing well” purity, IMO. And I don’t think it really goes against that in the first place (it “feels” natural to me - very wishy-washy, I know, but these kinds of discussions tend to be).

                                                                2. 1

                                                                  If the author cared about output, why would he cat the results to /dev/null in the OP?

My point is that there are considerations other than raw speed when deciding between cut and awk; awk is far more forgiving of invisible whitespace differences in the input than cut is, and this is not really mentioned in the post, even though I’ve seen it happen with cut so many times.

                                                                  1. 3

                                                                    If the author cared about output, why would he cat the results to /dev/null in the OP?

                                                                    To better present the timing information in a blog post, and to better measure the speed of these programs without accidentally measuring the speed of their terminal emulator at consuming the output.

                                                                    Seriously, if the author didn’t care about output, why bother using tr in their second example at all?

awk is far more forgiving of invisible whitespace differences in the input than cut is, and this is not really mentioned in the post

                                                                    It’s explicitly mentioned in the post. See example 2, where the author explains using tr -s for exactly this reason.

                                                                3. 3

                                                                  You always need to know what your input looks like, right?

% cat <<. | tr -s ' \t' ' ' | cut -d ' ' -f 2
                                                                  foo  bar
                                                                  foo bar
                                                                  foo     bar
                                                                  .
                                                                  bar
                                                                  bar
                                                                  bar
                                                                  
                                                                  1. 1

                                                                    I had this exact case in mind reading this. It’s happened a lot and is why I’ve defaulted to always using Awk.

                                                                    1. 1

Yeah, they’re different tools that do different things, and conflating them like this has bitten people in the backside before.

                                                                    2. 1

                                                                      The second example uses tr -s ' ' to collapse runs of spaces. If your input contains tabs as well, you could compress them too with tr -s ' \t' ' '. As I understand it,

tr -s ' \t' ' ' < inputfile | cut -d ' ' -f $N
                                                                      

                                                                      will give the same output as

                                                                      awk "{print \$$N}" < inputfile
                                                                      

                                                                      for all positive integer values of N (up to overflow).
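As an aside, the nested quoting in that awk invocation can be avoided by passing the field number in with awk's standard -v option (a small sketch, same effect as the double-quoted form):

```shell
N=2
# Equivalent to awk "{print \$$N}", but without escaping dollars:
printf 'foo bar baz\n' | awk -v n="$N" '{print $n}'   # prints "bar"
```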

                                                                      1. 3

That is still not quite the same as the default awk behavior, because awk will also strip leading/trailing spaces, tabs, and newlines:

                                                                        $ printf '    a  \t  b      3   '  | awk '{print $2}'
                                                                        b
                                                                        $ printf '    a  \t  b      3   '  | tr -s ' \t' ' ' | cut -d ' ' -f2
                                                                        a
                                                                        
                                                                        1. 2

                                                                          Ah, nice. Thanks for the correction.

The awk I have (gawk 5.1) will not remove leading or trailing newlines in the file, and otherwise processes the file line-wise; but it will strip leading spaces and tabs before counting fields, which cut does not.
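A quick illustration of that difference (this is POSIX awk's default field splitting, nothing gawk-specific assumed):

```shell
# awk discards leading blanks before splitting into fields,
# so $1 is "a" here:
printf '   a b\n' | awk '{print $1}'    # prints "a"
# cut treats each leading space as a delimiter, so field 1 is empty:
printf '   a b\n' | cut -d ' ' -f 1     # prints an empty line
```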

                                                                          1. 1

Newlines come into the picture when the input record separator doesn’t remove them.

Here’s an example that I answered a few days back: https://stackoverflow.com/questions/64870968/sed-read-a-file-get-a-block-before-specific-line/64875721#64875721

                                                                            1. 1

                                                                              Sure. I was talking about the defaults, tho.

                                                                    1. 20

                                                                      If you’re upgrading to Big Sur and need to install third-party software, I’ve now added a new repository of 20,000 binary packages for Big Sur here, in addition to our existing sets for older releases:

                                                                      https://pkgsrc.joyent.com/install-on-osx/

                                                                      If you’re already served well by Homebrew or MacPorts then feel free to ignore ;-)

                                                                      Big Sur required some quite invasive changes due to removing shared library files, so you can’t use the Mojave repository like you could with Catalina, but thankfully it hasn’t impacted the number of available packages compared to previous releases.

                                                                      Overall Big Sur looks pretty good to me. It’s certainly significantly more stable and quite a bit faster than Catalina in my experience, as well as fixing some major issues in the NFS server. Running pkgsrc bulk builds is a great way to expose bugs in macOS, and even with the latest updates Catalina would regularly crash.

                                                                      1. 3

                                                                        Neat, maybe I should try pkgsrc again. Last time I did a few years back it was missing several packages I needed, so I ended up with nix.

Does anyone have a rough idea of the relative sizes of the package sets? nixpkgs claims 60,000 packages overall, but it’s unclear how many of those are available on macOS. And neither Homebrew nor MacPorts seem to advertise a number.

                                                                        Incidentally, nixpkgs support for Big Sur seems not too far off: https://github.com/NixOS/nixpkgs/pull/98541

                                                                        1. 3

                                                                          Let me know if there’s stuff missing that you need and we can take a look. I’m sure we’re missing stuff, but we rely on users to tell us what.

                                                                        2. 1

Does this mean that compiling all of those packages on the just-released Big Sur was a complete success?

                                                                          1. 1

                                                                            I’ve been building on the betas for a while now. Bulk builds are never a complete success, as pkgsrc contains many packages that don’t build on all 20+ platforms, but the results are comparable to those running on Mojave.

                                                                        1. 1

                                                                          Dvorak dvp here. It doesn’t matter to me where hjkl are, I just use them where they are (jcvp on a qwerty layout). I also use yubn for diagonal moves in nethack and others.

                                                                          1. 1

                                                                            The Most Confusing Shell Mistakes You’ve Ever Made.

                                                                            1. 1

                                                                              So, this means Pijul is abandoned? Why don’t they say it on the pijul.org website?

                                                                              1. 7

                                                                                Who knows? I’m waiting for the author to have some new genius ideas and start another new rewrite from scratch. Or maybe the one after that.

                                                                              1. 6

Then I got 6.out. I forgot the ./, but to my surprise it ran the correct program.

By default, there is still . in the path:

                                                                                ; xd -c /env/path
                                                                                0000000   /  b  i  n 00  . 00
                                                                                0000007