Based on some rough benchmarking performed on reasonably modern, but not
over-the-top, laptop hardware (an i7-8665U with a PCIe 3 NVMe SSD), this
results in raw disk I/O (plus tar overhead) becoming the performance
bottleneck rather than the hashing rate.
This patch uses a clever invocation of GNU tar to produce a
deterministic bytestream from a directory tree. The stream is fed to a
hash function in 64 KiB chunks (the default Linux pipe capacity) to
produce a fingerprint that can be displayed as an identicon.
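For illustration, here is a rough sketch of what such an invocation and the
chunked hashing could look like. The specific tar flags, the BLAKE2b hash and
the helper name are assumptions made for this example, not necessarily what
the patch ships (--sort=name needs GNU tar >= 1.28):

    import hashlib
    import subprocess

    CHUNK_SIZE = 64 * 1024  # 64 KiB, the default Linux pipe capacity

    def fingerprint_tree(path: str) -> bytes:
        """Hash a deterministic GNU tar stream of the directory at `path`."""
        # Pin down everything tar would otherwise take from the environment:
        # entry order, timestamps and ownership.
        cmd = [
            "tar",
            "--sort=name",                # deterministic entry order
            "--mtime=@0",                 # clamp all mtimes to the epoch
            "--owner=0", "--group=0", "--numeric-owner",
            "-cf", "-",                   # write the archive to stdout
            "-C", path, ".",              # archive the contents of `path`
        ]
        digest = hashlib.blake2b()
        with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc:
            while chunk := proc.stdout.read(CHUNK_SIZE):
                digest.update(chunk)
        if proc.returncode != 0:
            raise RuntimeError(f"tar exited with status {proc.returncode}")
        return digest.digest()

The resulting digest is the fingerprint that can then be rendered as the
identicon.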
Why would we do this instead of using the tarfile stdlib package or just
using os.walk plus some code?
The tarfile package cannot produce its output as a stream of N-byte
chunks. The most granular mode of operation it offers is emitting all of
the chunks belonging to a given file at once. This is problematic: we
could run out of memory, or be forced to spool the tar archive to a
temporary file, which would be painfully slow, could exhaust disk space,
would wear out SSDs, and would outright fail in a container with a
read-only rootfs and no tmpfs mounted.
An os.walk solution is doable, but would require some problem solving
which I am too lazy to do right now (a rough sketch follows the list below):
- Forcing os.walk to walk in a deterministic order (should be easy)
- Walks over different directory structures could theoretically produce
  the same bytestream (avoiding this is doable, but requires some thinking)
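For reference, one way a future os.walk variant might address both points,
sketched here as an assumption rather than anything this patch implements:
sort the walk, hash each file separately, and feed length-prefixed relative
paths plus the fixed-size per-file digests into an outer hash (symlinks,
permissions and empty directories are ignored for brevity):

    import hashlib
    import os

    CHUNK_SIZE = 64 * 1024  # same 64 KiB read size as the tar-based version

    def fingerprint_tree_walk(root: str) -> bytes:
        digest = hashlib.blake2b()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()  # sort in place so os.walk descends deterministically
            for name in sorted(filenames):
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root).encode()
                # Hash each file on its own, then feed a length-prefixed path
                # plus the fixed-size per-file digest into the outer hash.
                # Explicit lengths keep record boundaries unambiguous, so two
                # different trees cannot collapse into the same byte stream.
                file_digest = hashlib.blake2b()
                with open(full, "rb") as fh:
                    while chunk := fh.read(CHUNK_SIZE):
                        file_digest.update(chunk)
                digest.update(len(rel).to_bytes(8, "big"))
                digest.update(rel)
                digest.update(file_digest.digest())
        return digest.digest()

The per-file digests keep memory usage flat regardless of file size, matching
the streaming behaviour of the tar pipeline.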
The GNU tar solution is far from ideal (it forces an external dependency
and requires a subprocess call plus some pipe juggling), but it is very easy
to implement and should be fine performance-wise:
- The bottleneck on reasonable hardware configurations should
  be hashing or disk I/O
- The cost of doing a fork/exec is negligible compared to either
TL;DR os.walk: maybe in a future patch