1. 25

  2. 21

    I’m not near a device where I can test this, but I believe the explanation is wrong. The small buffer shouldn’t be a problem for ls - a few thousand syscalls is not a lot. The slowness when listing large directories usually comes from the fact that ls buffers all of the output, sorts it, and formats it into columns.

    ls can be quick and stream the entries as they’re found:

    ls -1 -f
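
    A rough sketch of the difference (throwaway directory, made-up file names): -f disables sorting (and implies -a), so entries come out in directory order as they’re read, while the default mode buffers everything until the sort finishes.

    ```shell
    # -1 prints one name per line; -f lists entries unsorted,
    # in directory order, and also shows . and .. (implies -a).
    dir=$(mktemp -d)
    touch "$dir/c" "$dir/a" "$dir/b"
    ls -1 -f "$dir"    # directory order, streamed as read
    ls -1 "$dir"       # default: buffered and sorted (a, b, c)
    rm -r "$dir"
    ```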
    1. 3

      Or a different take: if the file names aren’t streaming in in real time, then the method you’re using is probably waiting for the complete result. With large directories you first need to achieve streaming; only then is it worth talking about speed.
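
      A quick way to check for streaming (using /usr/bin as a stand-in for a big directory): find prints each entry as it reads it, so piping through head returns almost immediately, because head closes the pipe after a few lines instead of waiting for the full listing.

      ```shell
      # find streams entries; head stops the pipeline after 5 lines,
      # so this returns quickly no matter how large the directory is.
      # (/usr/bin is just a stand-in for your huge directory.)
      find /usr/bin -maxdepth 1 | head -n 5
      ```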

    2. 8

      Somewhat related story: on a non-work Linux laptop my wife ended up with a directory of 15 million files. The full story is here: http://pzel.name/til/2020/08/30/Large-directory-feature-not-enabled-on-this-filesystem.html I used find . to list all the files, which, surprisingly, did not hang.

      1. 1

        I was wondering if find . would hang in the same way. ls is notoriously bad at listing directories once they grow past a certain number of entries.

      2. 4

        Why doesn’t ls work?

        I assume that isn’t intentional?

        1. 6

          l̗oͯo̐̔k̖̗s ͉̹̊̚f̫̂i̱̳̅ͯne ̱͓̖toͨ͆ͧ ͎̟̈̈m̜͇͚̒ͨͦe̜̭ͣ̓ ;)

          1. 5

            Looks like the entire HTML header is missing, so it’ll use whatever codepage your browser defaults to. In Firefox there’s an option under “View” called “Repair Text Encoding”. Never seen that option before, but it worked for me :)

            1. 4

              Literally all they need is

              <meta charset="UTF-8">
          2. 1

            ls is also fine, but you need to pass options that disable calls to stat().

            1. 1

              LC_ALL=C ls -1 --color=never should be OK (even without setting LC_ALL).
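
              A sketch of the full invocation on a throwaway directory: --color=never keeps ls from stat()ing each entry just to pick a color, and LC_ALL=C sidesteps locale-aware collation of the names (adding -f would skip sorting entirely, as noted upthread). Whether stat() actually disappears can be checked with strace -c, if it’s installed.

              ```shell
              # A listing that avoids per-entry stat() calls:
              # --color=never stops ls from stat()ing entries for colors,
              # LC_ALL=C avoids locale-aware sorting of the names.
              dir=$(mktemp -d)
              touch "$dir/one" "$dir/two"
              LC_ALL=C ls -1 --color=never "$dir"
              rm -r "$dir"
              ```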