1. 28
  1.  

  2. 3

    Reading the source code revealed that cp keeps track of which files have been copied in a hash table, which every now and then has to be resized to avoid too many collisions. Once the RAM has been used up, that resizing becomes a slow operation.

    Since -R is a recursive copy, I would have assumed a simple stack would be enough to keep track of the depth-first traversal. Why would you need memory that scales as O(n) in the number of files!?
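
    For contrast, here’s a minimal sketch of what I had in mind (plain POSIX, no option handling, and definitely not cp’s real code): a depth-first walk whose memory footprint is one open directory handle and one stack frame per level, i.e. proportional to the depth of the tree rather than to the total number of files.

    ```c
    /* Sketch only, not GNU cp's code: a plain depth-first walk whose memory
       use grows with directory depth (one DIR handle and one stack frame per
       level), not with the total number of files. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    static void walk(const char *path)
    {
        DIR *d = opendir(path);
        if (!d)
            return;

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                continue;

            char child[4096];
            snprintf(child, sizeof child, "%s/%s", path, e->d_name);

            struct stat st;
            if (lstat(child, &st) == 0 && S_ISDIR(st.st_mode))
                walk(child);        /* recurse: cost is O(depth) */
            else
                puts(child);        /* a real cp would copy the file here */
        }
        closedir(d);
    }

    int main(int argc, char **argv)
    {
        walk(argc > 1 ? argv[1] : ".");
        return 0;
    }
    ```

    The O(n) bookkeeping only appears once cp additionally has to remember every inode it has already copied, which is what the replies below get at.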

    1. 5

      GNU cp apparently has options to preserve hard links instead of creating duplicates. So it builds a hash table of every file it copies and its source inode, so that it can detect further links to the same inode later in the traversal.

      That’s clearly the situation described in the email, but the email doesn’t actually tell us what options were used and the man page is crap.
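
      A rough sketch of that bookkeeping (my guess at the general shape, not cp’s actual code): remember each copied file’s (st_dev, st_ino) together with its destination path, and turn any later file with the same pair into a link() instead of a fresh copy. That’s where the per-file memory comes from.

      ```c
      /* Sketch only (not GNU cp's actual code): why preserving hard links during
         a recursive copy needs memory proportional to the number of files. Each
         copied file's (st_dev, st_ino) is remembered with its destination path so
         a later directory entry pointing at the same inode becomes a link(). */
      #include <fcntl.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/stat.h>
      #include <unistd.h>

      struct seen { dev_t dev; ino_t ino; char *dst; struct seen *next; };

      #define NBUCKETS (1u << 16)            /* toy fixed-size chained hash table */
      static struct seen *table[NBUCKETS];

      static size_t bucket(dev_t dev, ino_t ino)
      {
          return ((size_t) dev * 31u + (size_t) ino) % NBUCKETS;
      }

      static struct seen *lookup(dev_t dev, ino_t ino)
      {
          for (struct seen *s = table[bucket(dev, ino)]; s; s = s->next)
              if (s->dev == dev && s->ino == ino)
                  return s;
          return NULL;
      }

      static void remember(dev_t dev, ino_t ino, const char *dst)
      {
          struct seen *s = malloc(sizeof *s);
          s->dev = dev; s->ino = ino; s->dst = strdup(dst);
          s->next = table[bucket(dev, ino)];
          table[bucket(dev, ino)] = s;
      }

      static void copy_bytes(const char *src, const char *dst)
      {
          int in = open(src, O_RDONLY);
          int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
          char buf[65536];
          ssize_t n;
          while (in >= 0 && out >= 0 && (n = read(in, buf, sizeof buf)) > 0)
              if (write(out, buf, n) != n)
                  break;
          if (in >= 0) close(in);
          if (out >= 0) close(out);
      }

      /* Copy one regular file, recreating hard links to inodes seen before. */
      static void copy_file(const char *src, const char *dst)
      {
          struct stat st;
          if (lstat(src, &st) != 0 || !S_ISREG(st.st_mode))
              return;

          struct seen *s = lookup(st.st_dev, st.st_ino);
          if (s && link(s->dst, dst) == 0)
              return;                            /* same inode again: linked, not copied */

          copy_bytes(src, dst);
          remember(st.st_dev, st.st_ino, dst);   /* one entry per file: O(n) memory */
      }

      int main(int argc, char **argv)
      {
          if (argc == 3)
              copy_file(argv[1], argv[2]);
          return 0;
      }
      ```

      Each copied file leaves one entry (plus a destination string) behind, which is exactly the O(n) growth complained about above.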

      1. 1

        I’m guessing hard links. You’d need to keep track of every inode you’ve seen to be able to re-create the links, right?

      2. 3

        I found an issue in cp that caused 350% extra memory usage for the original bug reporter; fixing it would at least have kept his working set within RAM.

        http://lists.gnu.org/archive/html/coreutils/2014-09/msg00014.html
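
        The details are in the message above. To illustrate the kind of saving at stake (a rough sketch of one common memory-saving idea, not the actual patch discussed there): only inodes that can recur need to be remembered at all, and a regular file with st_nlink == 1 will never be reached again under another name.

        ```c
        /* Hedged sketch of a general memory-saving idea, not necessarily the fix
           discussed in the message linked above: only remember inodes that can
           actually recur. A regular file with st_nlink == 1 will never be reached
           again under another name, so there is nothing to record for it. */
        #include <stdio.h>
        #include <sys/stat.h>

        /* Stub standing in for the inode -> destination table insert. */
        static void remember(dev_t dev, ino_t ino, const char *dst)
        {
            printf("remember dev=%llu ino=%llu -> %s\n",
                   (unsigned long long) dev, (unsigned long long) ino, dst);
        }

        static void maybe_remember(const struct stat *st, const char *dst)
        {
            if (S_ISREG(st->st_mode) && st->st_nlink > 1)
                remember(st->st_dev, st->st_ino, dst);
            /* else: skip the table entry entirely and save the memory */
        }

        int main(void)
        {
            struct stat st;
            if (lstat("example.txt", &st) == 0)    /* hypothetical file name */
                maybe_remember(&st, "backup/example.txt");
            return 0;
        }
        ```

        A change along those lines keeps the table proportional to the number of multiply-linked files instead of to all files copied.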

        1. 1

          Yes, don’t do that!

          If you have that many files, you probably need to come up with some saner archival strategy.