1. 3

    Remember when being Unix-like meant “everything was a file”? It was nice.

    Linux is still better than most, honestly (sysctl talks to /proc/sys, for example, instead of being its own thing), but it still has netlink and other interfaces that aren’t files.

    Of course, it’s easy to judge, but X and Wayland solve hard problems: it’s not enough to write to a file to put pixels on the screen; you have to multiplex/mediate access, handle potentially broken clients, etc.

    Plan 9 was really the only one that got it right, and even then it got a few things wrong.

    1. 4

      Why is it nice?

      I hate Linux’s “everything is a virtual filesystem” approach. Looking at mount output on a modern Linux box just feels disgusting. 12 lines (!) of cgroups spam, and then devpts pstore securityfs debugfs configfs hugetlbfs OMGWTFfs. And how could I forget the infamous efivarfs!

      Also, most files in sysfs are text, so you have the overhead of parsing strings just to read system information.

      1. 1

        Also, most files in sysfs are text, so you have the overhead of parsing strings just to read system information.

        That’s the advantage. I can grep for information, cat it, sort it, etc, etc using tools that I already know how to use, because they’re just files containing text.

        In the vast majority of applications the slight overhead for doing the string parsing isn’t going to have a significant effect on performance.

        1. 3

          Sure, but then when you want to do something less ad-hoc with it, it becomes a pain. The canonical source of that information shouldn’t be text, it should be structured data that can be dumped to text when necessary.

          1. 2

            But I can do the same with the output of sysctl(8). I don’t need a hundred mounts for that.

      1. 2

        Dunno why someone wouldn’t just use libSDL in this situation? And then, of course, libSDL_gfx, etc.

        1. 10
          SDL2-2.0.7$ cloc src | grep SUM | grep -o '\d\+$'
          161068
          
          bin$ cloc fbclock.c | grep -o '\d\+$'
          86
          

          🤔

          1. 9

            I see several reasons, one being education. I had no idea how the Linux framebuffer system worked before reading this post.

            Great post, thanks for writing it!

          1. 3

            This is really neat! It makes me want to dive into writing my own framebuffer utilities. Some thoughts:

            For things like a clock or a battery indicator, tmux has a ‘status’ option and screen has a ‘hardstatus’ option. Both of these tools make a console-sans-xorg experience quite enjoyable.

            For other framebuffer tools, try jfbview (pdf viewer) or libxine’s fbxine (video player).

            1. 2

              I’ve forgone tmux and screen because tmux (at least) adds noticeable input lag, and I find neovim’s terminal emulator more convenient (one set of keys for managing windows, unified “clipboard” vim registers). If I really need a detachable session, I wrote another simple tool for that.

              I’ve used jfbview (or maybe a fork) to read Intel manuals. I don’t think there are sound drivers for my Chromebook so watching videos probably isn’t going to happen. Framebuffer tools really don’t get enough love, though!

              1. 2

                (at least tmux) adds noticeable input lag

                I’m not sure if it’s the only input lag you were noticing, but the biggest annoyance in this respect for me goes away if you add

                set -s escape-time 0

                to your .tmux.conf. By default tmux pauses for a half-second after ESC before sending it through, in order to allow using ESC+key as equivalent to Meta+key for tmux bindings (like emacs does). Which is probably fine if you don’t use vim, but is very annoying in vim. Setting the delay to 0 does of course mean that you can’t use ESC+key sequences for tmux bindings.

              1. 1

                This is a different post, right?

                1. 4

                  Yes. This post is somewhat of a followup to that one.

              1. 7

                3 GB/s is impressive, but I’m more curious to hear about the application that gives a Y/n prompt 1,500,000,000 times per second.

                1. 16

                  yes is useful for more than just interfaces: it’s effectively a more flexible /dev/zero.

                  Optimizing it is obviously golf, but on the other hand, it’s unlikely to hurt anything.

                  1. 4

                    There are plenty of applications that write loads of data through pipes, so while this example is kind of useless in and of itself, it does provide a good platform to experiment with pipe perf. The Reddit discussion linked even has some good discussion about kernel internals.
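
                    Along those lines, here’s a minimal sketch of the usual trick (not the actual GNU yes source): fill one large buffer with “y\n” once, then write() it in a loop so the per-syscall overhead is amortized. The 64 KiB size is an arbitrary choice.

                    #include <string.h>
                    #include <unistd.h>

                    int main(void) {
                        /* Fill a big buffer with "y\n" once... */
                        static char buf[1 << 16];
                        for (size_t i = 0; i + 2 <= sizeof(buf); i += 2)
                            memcpy(buf + i, "y\n", 2);

                        /* ...then keep writing it; each write() now emits ~32k lines. */
                        for (;;)
                            if (write(STDOUT_FILENO, buf, sizeof(buf)) < 0)
                                return 1;
                    }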

                  1. 1

                    Ran on macOS just to see what happens:

                    Architecture:            x86_64
                    Byte Order:              Little Endian
                    Total CPU(s):            4
                    Model name:              MacBookPro11,1
                    

                    I appreciate graceful degradation!

                    1. 1

                      I updated the code, so it can run on macOS now :-). When you have time, you can try it, thanks!

                    1. 2

                      Instead of memfd_create() you can use the POSIX standard shm_open(), so

                      memfd_create("queue_region", 0)

                      becomes

                      shm_open("queue_region", O_RDWR|O_CREAT, 0600)

                      Add ‘-lrt’ to your LDFLAGS and remember to shm_unlink() it when you’re done. Everything else stays the same, including the performance.
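
                      A minimal sketch of the shm_open() route, assuming a 4096-byte region and keeping the “queue_region” name from above (error handling kept short):

                      #include <err.h>
                      #include <fcntl.h>
                      #include <sys/mman.h>
                      #include <unistd.h>

                      int main(void) {
                          /* POSIX recommends a leading '/' in the name for portability. */
                          int fd = shm_open("/queue_region", O_RDWR | O_CREAT, 0600);
                          if (fd < 0) err(1, "shm_open");
                          if (ftruncate(fd, 4096) < 0) err(1, "ftruncate");
                          void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                          if (buf == MAP_FAILED) err(1, "mmap");
                          /* ... use buf as the shared queue region ... */
                          shm_unlink("/queue_region");
                          return 0;
                      }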

                      1. 2

                        I vaguely recall it being less effort to simply open /dev/zero and use a private mmap()ing of that.

                        Of course, if you are using this as an IPC mechanism between two processes, you’ll have to use a regular file.

                        1. 1

                          I don’t think a private map would work here:

                          MAP_PRIVATE

                          Create a private copy-on-write mapping. Updates to the mapping are not visible to other processes mapping the same file, and are not carried through to the underlying file.

                          1. 1

                            Meant to say “use a regular file with MAP_SHARED”, good catch. :)

                          2. 1

                            It doesn’t seem like you can mmap /dev/zero. I get ENODEV “Operation not supported by device” when I try. (macOS)

                            Edit, showing my work:

                            #include <err.h>
                            #include <fcntl.h>
                            #include <stdlib.h>
                            #include <sys/mman.h>
                            int main() {
                                int fd = open("/dev/null", O_RDWR);
                                if (fd < 0) err(1, "open");
                                void *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
                                if (map == MAP_FAILED) err(1, "mmap");
                                return 0;
                            }
                            
                            1. 1

                              You’re confusing /dev/null with /dev/zero.

                              1. 1

                                Oops, I had used /dev/zero when I first tried it then accidentally swapped it for /dev/null when I came back to give some code. Either way, the result is the same: ENODEV.

                                1. 1

                                  Must be some MacOS specific breakage, because it works on Linux.

                        1. 5

                          I think this is a long-winded way of saying directory entries are access-controlled by directory permissions? There was a bit too much narrative for me to tell whether the point was that this was surprising, or wrong, or what.

                          1. 4

                            a long-winded way of saying directory entries are access-controlled by directory permissions

                            Yeah, this is the TL;DR but I posted it mainly because it was fun to read.

                            1. 3

                              Sure, though I think the presentation obscures the obvious corollary, assuming the goal is to annoy the user: mkdir root-dir; touch root-dir/root-file (run as root) really will leave you with an unremovable file.

                              1. 1

                                Oh yeah, and I hadn’t heard of chattr +i <filepath> before, which can be used to make files immutable. This could be quite handy.

                              2. 2

                                I think it is great. It describes a situation which you rarely hear about anymore: when your user and root aren’t actually the same person!

                                I used to live in this situation. My sysadmin and I were always messing with permissions - inside my $HOME and outside, too. He believed in giving each user as much power as was safe. ‘Safe’ meant “can’t bring down the system or read other users’ data”. I learned a lot from him!

                                1. 1

                                  I also recently came across this. It’s neither surprising nor wrong, just something I hadn’t thought about before.

                                1. 1

                                  I wish I understood how all this worked. Why exactly is a mod operation slow? Why exactly is it faster to do this via page tables? Is it because the kernel is already doing this and it effectively requires zero additional work? Is it because the CPU can handle this in hardware?

                                  I guess I’ve got some research to do.

                                  1. 4

                                    Mod isn’t super slow, but you can avoid mod entirely without the fancy page tricks by defining your buffer to be a power of 2. For example, a 4KiB buffer is 4096 = 2^12, so you can calculate the wrap-around with ( cur + len ) & 4095 without using mod.

                                    You would still need two separate memcpy calls, and a branch for the wrap-around and non-wrap-around cases (which is normally not a big deal, except when you’re racing against the highly optimized hardware cache in your MMU…)
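
                                    For illustration, a sketch of that plain power-of-two version - the mask instead of %, plus the two-copy branch. The 4096-byte size and names are just for the example, and it assumes len never exceeds the buffer size:

                                    #include <stddef.h>
                                    #include <stdint.h>
                                    #include <string.h>

                                    #define BUF_SIZE 4096u              /* must be a power of 2 */
                                    #define BUF_MASK (BUF_SIZE - 1)

                                    static uint8_t buf[BUF_SIZE];

                                    void ring_write(size_t *cur, const void *src, size_t len) {
                                        size_t off = *cur & BUF_MASK;       /* & 4095 instead of % 4096 */
                                        size_t first = BUF_SIZE - off;      /* bytes left before the end */
                                        if (len <= first) {
                                            memcpy(buf + off, src, len);    /* no wrap: one copy */
                                        } else {
                                            memcpy(buf + off, src, first);  /* wrap: the branch and two copies */
                                            memcpy(buf, (const uint8_t *)src + first, len - first);
                                        }
                                        *cur += len;
                                    }

                                    int main(void) {
                                        size_t cur = 4090;                           /* 6 bytes before the end */
                                        ring_write(&cur, "hello, wrap-around", 18);  /* forces the two-copy path */
                                        return buf[0] == ' ' ? 0 : 1;                /* " wrap-around" landed at the start */
                                    }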

                                    1. 3

                                      Branches (conditionals such as if/switch statements) can cause performance problems, so if you can structure things to avoid them you can get a considerable bump in speed.

                                      A lot of people look to software tricks to pull off speedups, but this particular data structure can benefit directly from hardware baked into the CPU (virtual memory mapping).

                                      Most of the time you have a 1:1 mapping of a 4kB contiguous physical memory block to a single virtual 4kB page. This is not the only configuration, though: you can have multiple virtual memory pages mapping back to the same physical memory block, most commonly seen as a way to save RAM when using shared libraries.

                                      This 1:N mapping technique can also be used for a circular buffer.

                                      So you get your software to ask the kernel to configure the MMU to duplicate the mapping of your buffer (page aligned and sized!) immediately after the end of the initial allocation.

                                      Now when you are 100 bytes short of the end of your 4kB circular buffer and you need to write 200 bytes, you can just memcpy()-like-a-boss and ignore the problem of having to split your writes into two parts. Meanwhile your offset incrementer remains simply:

                                      offset = (offset + writelen) % 4096
                                      

                                      So the speedup comes from:

                                      • removing the conditionals necessary to handle writes that exceed the end of the buffer
                                      • doing a single longer write, rather than two smaller ones

                                      So it is not really that the CPU is handling this in hardware and is therefore faster; the hardware is actually doing no more work than it was before. The performance comes more from a duck-lining-up exercise.
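
                                      A rough sketch of asking the kernel for that duplicated mapping on Linux, using the memfd_create() mentioned elsewhere in the thread; the one-page size and names are illustrative and error handling is minimal:

                                      #define _GNU_SOURCE
                                      #include <err.h>
                                      #include <sys/mman.h>
                                      #include <unistd.h>

                                      int main(void) {
                                          size_t size = 4096;                   /* one page */
                                          int fd = memfd_create("ring", 0);     /* anonymous backing file (glibc 2.27+) */
                                          if (fd < 0) err(1, "memfd_create");
                                          if (ftruncate(fd, size) < 0) err(1, "ftruncate");

                                          /* Reserve 2*size of address space, then map the same file into
                                             both halves so buf[size + i] aliases buf[i]. */
                                          char *buf = mmap(NULL, 2 * size, PROT_NONE,
                                                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                                          if (buf == MAP_FAILED) err(1, "mmap reserve");
                                          if (mmap(buf, size, PROT_READ | PROT_WRITE,
                                                   MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
                                              err(1, "mmap low half");
                                          if (mmap(buf + size, size, PROT_READ | PROT_WRITE,
                                                   MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
                                              err(1, "mmap high half");

                                          buf[0] = 'y';
                                          return buf[size] == 'y' ? 0 : 1;      /* the mirror sees the write */
                                      }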

                                      1. 2

                                        Modulo and division (usually one operation) are much slower than the other usual integer operations like addition and subtraction (which are the same thing), though I’m not sure I can explain why in detail. Fortunately, for division by powers of two, a right shift >> and an AND & can be used instead.

                                        For why doing this with paging is so efficient, it is because the MMU (part of the CPU) does the translation between virtual and physical addresses directly in hardware. The kernel just has to set up the page tables to tell the MMU how it should do so.

                                      1. [Comment removed by author]

                                        1. 2

                                          That’s not really a reasonable criticism. That is the author’s thesis statement. They then go on to use the rest of the article to argue in favor of it. The rest of the argument may be (in my opinion, is) flawed, but the thesis statement itself is just the conclusion presented a priori, so you can see what the author is arguing for. (This is normal in most writing. In scientific papers, we traditionally call it the “abstract”.) The mere fact that its grammar admits humorous substitutions is uninteresting, as that is true of almost all sentences.

                                          1. 1

                                            “LaTeX fetish” is a pun though.

                                            1. 1

                                              Yeah, I get the impression the author wouldn’t have spoken so strongly if they didn’t have such a good pun to back it up with.

                                          1. 3

                                            Wish the pictures were bigger.

                                            1. 4

                                              We can just use more ML for that! https://github.com/nagadomi/waifu2x

                                            1. 2

                                              Couldn’t this be implemented entirely in user space on top of unix domain sockets?

                                              1. 4

                                                Yeah, of course. It’s “doable”/emulatable with any IPC mechanism, but my guess is that a first class kernel implementation provides much more efficient interactions since it’s happening without extra context switches.

                                              1. 6

                                                Surprised me as well; however, I really like it as a formal standard, including subdomains, so one can use it reliably. I can see myself having apps listen on appname.localhost, giving you a meaningful and memorable name. Of course, to my current knowledge, this would still need routing data through an app proxy if the same TCP port is used.

                                                1. 5

                                                  Ideally we could map .localhost. subdomains to different addresses in 127.0.0.0/8 and use them without conflict.

                                                1. 26

                                                  exa is written in Rust, so it’s small, fast, and portable.

                                                  -rwxr-xr-x  1 root      wheel    38K 28 Apr 20:31 /bin/ls
                                                  -rwxr-xr-x@ 1 curtis    staff   1.3M  7 Jul 12:25 exa-macos-x86_64
                                                  

                                                  ?

                                                  1. 9

                                                    Stripping it helps a bit… but not much.

                                                    $ du -hs exa-macos-x86_64  
                                                    1.3M	exa-macos-x86_64
                                                    $ strip exa-macos-x86_64     
                                                    $ du -hs exa-macos-x86_64  
                                                    956K	exa-macos-x86_64
                                                    

                                                    More fun is what it links to:

                                                    $ otool -L /bin/ls            
                                                    /bin/ls:
                                                    	/usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
                                                    	/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
                                                    	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
                                                    $ du -hs /usr/lib/libutil.dylib /usr/lib/libncurses.5.4.dylib /usr/lib/libSystem.B.dylib
                                                     28K	/usr/lib/libutil.dylib
                                                    284K	/usr/lib/libncurses.5.4.dylib
                                                     12K	/usr/lib/libSystem.B.dylib
                                                    $ otool -L /tmp/exa-macos-x86_64
                                                    /tmp/exa-macos-x86_64:
                                                    	/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
                                                    	/System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 57740.60.18)
                                                    	/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1349.8.0)
                                                    	/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.8)
                                                    	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
                                                    $ du -hs /usr/lib/libiconv.2.dylib /System/Library/Frameworks/Security.framework/Versions/A/Security /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation /usr/lib/libz.1.dylib /usr/lib/libSystem.B.dylib
                                                    1.6M	/usr/lib/libiconv.2.dylib
                                                    9.3M	/System/Library/Frameworks/Security.framework/Versions/A/Security
                                                    9.7M	/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
                                                     96K	/usr/lib/libz.1.dylib
                                                     12K	/usr/lib/libSystem.B.dylib
                                                    
                                                    1. 6

                                                      To be fair, exa is a self-contained executable, while ls probably has a dependency on libc, which it loads dynamically. If Rust ever becomes very popular and its runtime is installed by default everywhere, its executables will also be only a few KB.

                                                      1. 4

                                                        FWIW, linking ls from GNU coreutils statically with musl-libc on x86_64 gave me a 147K ELF with no shared object dependencies.

                                                        1. 3

                                                          For that to be true, Rust would have to have a well-defined and stable ABI, which it doesn’t have right now.

                                                          1. 3

                                                            Rust binaries actually do dynamically link to libc. Its standard library, which calls libc, is statically compiled into binaries.

                                                        1. 4

                                                          The ‘Bare bone’ part is a very nice introduction to how to start writing an x86-64 OS in any language. Another thing I noticed is the x86 assembly. I don’t have much experience with it, but I noticed that even though it is a CISC processor, after all, it is still used in a somewhat RISC-like manner: move a constant into one register and then move it to another register - look at e.g. the enable_paging procedure. I always had the impression that on x86 this could be done with a single assembly instruction.

                                                          1. 2

                                                            I think in a lot of cases this is necessitated by the instruction encoding. x86_64 uses 3 or 4 bits to represent a register, which works well for the 16 general-purpose registers, but to access other registers you need separate instructions.

                                                          1. 6

                                                            Now if only it were of any use to most developers, as it seems to be tightly coupled to macOS.

                                                            1. 8

                                                              While I know plenty of developers who are on Windows or Linux, I think implying it’s of no use to most developers is a bit of a stretch. The overwhelming majority of web developers I know use Macs, as do most of the mobile developers I know. Combined, they may be a strict minority, but it should still easily get plenty of use.

                                                              1. 5

                                                                Anecdata: of the ten laptop backs I can see, all have apple logos on them. They’re not all programmers, but there was definitely some syntax highlighting in the mix when I walked by.

                                                                1. 1

                                                                  Web and mobile developers are irrelevant here, and a minority. Okay, you have a bias from what you see. But most programmers are on Windows, and a version for that alone would help Linux developers, too, because of WINE. I just see these people battling with git and this is a solution that will be completely useless to them. Maybe next time.

                                                                  1. 3

                                                                    I suspect that most Windows-using developers don’t use Git and that most Git users actually use either MacOS or Linux.

                                                                    1. 2

                                                                      Maybe; I couldn’t find any data just now, but even if so, it’s gradually changing in favor of git as people leave CVS, Subversion, and such. And the ratio of Windows to $anything_else developers is huge. I know several places where they use git on Windows.

                                                                    2. 2

                                                                      Software isn’t required to target the majority.

                                                                      1. 2

                                                                        Yeah, I didn’t respond to that particular item, but going off that logic, virtually all Linux GUI software wouldn’t “be of any use to most developers.”

                                                                      2. 1

                                                                        Web and mobile developers are irrelevant here,

                                                                        I genuinely don’t get what you mean. Are you under the misapprehension that they don’t use Git, or that we don’t have mobile and web developers on lobste.rs?

                                                                        1. 1

                                                                          I see it as an expression of bias and nothing more, i.e. why mention them at all?

                                                                          Though let’s not continue in this thread, it’s unproductive and began as a sigh.

                                                                    3. 3

                                                                    I do wonder if someone has useful stats on what kinds of systems programmers use… Like, if npm, cargo, rubygems, pip, etc. kept track of what platforms people are on when they install packages (although the numbers would probably be a bit skewed, since a decent chunk of package installations happen on server systems on which no development is being done).

                                                                    It wouldn’t surprise me if there were a good number of people on macOS, simply because there are quite a number of amazing macOS-only developer-centric apps. I mean, do you think Kapeli would sell as many Dash licenses if there were hardly any developers out there using the platform? For me at least, Dash has almost revolutionised my workflow.

                                                                      1. 3

                                                                        Brew collected some usage stats, but used Google analytics, creating a giant shitstorm of outrage.

                                                                    1. 5

                                                                      SEO shitbags rank with email spammers as the absolute lowest pigshit dirtfuck dregs of humanity.

                                                                      Is this really the standard of article we want to see here?

                                                                      The author seems pretty ill informed as well:

                                                                      If people don’t want to see my site with random trash inserted into it, they can choose not to access it through broken and/or compromised networks.

                                                                      Earlier you recommended letsencrypt, and now suddenly you want me to pick a competent certificate authority

                                                                      1. 2

                                                                        The author seems pretty ill informed as well:

                                                                        Reposting the “ill informed” opinions without refutation or explanation doesn’t really have much value.

                                                                        Is this really the standard of article we want to see here?

                                                                        You’re new…maybe wait a bit and contribute more before hand-wringing. :)

                                                                        1. 2

                                                                          The article states those opinions without refutation or explanation…

                                                                          1. 1

                                                                            Reposting the “ill informed” opinions without refutation or explanation doesn’t really have much value.

                                                                            I included two quotes from the article to explain my point and I think they speak for themselves. Misquoting someone has negative value.

                                                                        1. 3

                                                                        One thing I like about HN is that if an article is from a while ago, people are expected to update the title to include the year.

                                                                          So just to note, this article is from 2002.

                                                                          1. 3

                                                                            This is also encouraged in the story submission guidelines:

                                                                            When the story being submitted is more than a year or so old, please add the year the story was written to the post title in parentheses.

                                                                            1. 1

                                                                            Sorry, forgot to do that. I’ve now fix’d it

                                                                            1. 56

                                                                              PIPs are not there for your actual improvement; personal, professional, or otherwise.

                                                                              1. 5

                                                                                What should one do when presented with a PIP?

                                                                                1. 56

                                                                                  Start looking for a new job immediately. That is the message.

                                                                                  1. 25

                                                                                    Exactly. The message is, “we’re firing you in 6 months and we made this PIP so we can cite it during the firing.”

                                                                                  2. 3

                                                                                    Depends on the company and your situation.

                                                                                1. 6

                                                                                  The ‘s’ in npm stands for ‘security’.

                                                                                  1. 3

                                                                                    To be fair, as I understand it, this has very little to do with npm. It just happened to be where the potentially malicious code was pushed to, with it being automatically distributed through a third-party CDN.

                                                                                    1. 2

                                                                                      Author here, that’s accurate. Wasn’t pushing any blame on npm whatsoever - in fact, they were very responsive about removing the malicious packages (and mentioned that they’re working to reduce the opportunities for spam).

                                                                                    The reason it’s mentioned up front is that this isn’t the first instance of a malware campaign targeting Chrome extensions through npm/unpkg - the author of unpkg mentioned that a similar strain of malvertising, with an identical unpkg link generation algorithm, had used his service in the past.

                                                                                      Just something to watch for.

                                                                                      1. 1

                                                                                        Ah, fair enough. Their team did a pretty good job getting back to my colleague, so there’s that.

                                                                                        1. 1

                                                                                          From a technical perspective, npm is well above average for a large site. 2fa is supported (but not mandated), etc.

                                                                                          From a social/cultural perspective, it’s insecure because of the large dependency trees.

                                                                                      Even a small app has many dependencies (e.g. I’ve just created a new, empty codebase with create-react-app; it has 898 distinct transitive dependencies from 448 distinct authors).

                                                                                          This means a huge number of maintainers with access to push code that will be added to your app when you next upgrade a dependency.

                                                                                          Even if each maintainer account is reasonably secure, a single account compromise equals code injection.

                                                                                          1. 1

                                                                                            Rubygems might be just as bad.

                                                                                            1. 1

                                                                                              A fresh install of rails 5.1.2 yields a mere 68 dependencies with 112 authors, so I’d say the problem is about 1/4 as bad in ruby.

                                                                                              1. 1

                                                                                                Ah, yeah, seeing those numbers in context tells a different story.

                                                                                                [e] Wait, 68 dependencies, 112 authors? How can the number of authors be larger?

                                                                                                1. 1

                                                                                                  Many libraries have more than one author - https://rubygems.org/gems/rails lists 12 owners for the main rails gem.

                                                                                                  1. 1

                                                                                                    Ah, I was confused by the distinction between the author and the list of maintainers. Gotcha.

                                                                                                    1. 1

                                                                                                      Yep - I picked ‘maintainers’ because I’m looking at the security angle, and any maintainer can push new versions.