Threads for cryptoquick

  1. 5

    Very cool! Works well. Gonna use this for a local sneakernet I’m running. 50GB of data encrypted in 3.2 minutes.

    Do you think this code could benefit from parallelization? Make it even faster?

    Also, any plans to publish to crates.io? It’d be really handy if I could just say cargo install bitbottle

    1. 6

      I’ll add “figure out crates.io” to my to-do list. :) I’m not sure if parallelization would help; the bottleneck seems to be disk I/O or LZMA2, and both are serial. I’m also worried about the complexity cost of adding concurrency, unless it makes a huge difference.

      1. 4

> the bottleneck seems to be disk I/O or LZMA2, and both are serial. I’m also worried about the complexity cost of adding concurrency, unless it makes a huge difference.

        For what it’s worth, the zstd library includes multithreaded compression (and Rust bindings exist).

        Both zstd and brotli fill the space of “denser than Snappy, but faster than LZMA2,” and both decompress faster than LZMA2 even at their denser settings. zstd started from the fast side (it’s from Yann Collet, LZ4’s author) and brotli started from the dense side (one of the authors is Jyrki Alakuijala, a Google compression specialist), but both offer a wide range of speed/density tradeoffs now.
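
        A minimal sketch of what that could look like with the `zstd` crate (Rust bindings to libzstd). The function name and signature here are made up for illustration, and the `multithread` call requires the crate’s `zstdmt` feature to be enabled:

        ```rust
        use std::fs::File;
        use std::io;

        /// Hypothetical helper: compress `src` into `dst` using zstd,
        /// spreading the work across `workers` internal threads.
        fn compress_file(src: &str, dst: &str, level: i32, workers: u32) -> io::Result<()> {
            let mut input = File::open(src)?;
            let output = File::create(dst)?;
            let mut encoder = zstd::stream::Encoder::new(output, level)?;
            // Hand block-splitting and threading to libzstd itself,
            // so the caller stays single-threaded and simple.
            encoder.multithread(workers)?;
            io::copy(&mut input, &mut encoder)?;
            encoder.finish()?;
            Ok(())
        }
        ```

        The nice part is that the concurrency lives inside libzstd, so the calling code stays as simple as the serial version.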

        1. 1

          Yeah, I’d look into zstd or LZ4 instead of LZMA2.