migrated from https://github.com/crawl-space/content-set-packer
227e8de979
Each node is written to disk as a list of (path, node pointer) pairs. The duplicate detection code was considering both the node's children and the node's name. If we look only at the children, we can find many more duplicates: the previous detection reduced 424 nodes to 127, while the new detection reduces them to 48. With this better duplicate detection, the prefix compression no longer appears to be useful, so comment it out. Trims an extra 40 bytes off my sample data.
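A minimal sketch of the idea in Ruby: since the path segment leading to a node is stored in its parent's (path, pointer) entry rather than in the node itself, two nodes with identical children can share a single on-disk record no matter what they are named. The `Node` class and `dedupe` helper below are hypothetical illustrations, not the actual code from thing.rb.

```ruby
# Hypothetical trie node: the segment leading to a child lives in the
# parent's entry, so the node itself is identified purely by its children.
class Node
  attr_reader :children

  def initialize
    @children = {} # path segment (String) => Node
  end

  # Structural signature built only from the children, sorted by segment
  # so insertion order does not matter. The node's own name never appears.
  def signature
    @signature ||= children.sort.map { |seg, child| [seg, child.signature] }
  end
end

# Bottom-up walk that replaces every node with the first node already seen
# that has the same child-only signature.
def dedupe(node, seen = {})
  node.children.each_key do |seg|
    node.children[seg] = dedupe(node.children[seg], seen)
  end
  seen[node.signature] ||= node
end
```

With name-aware detection, only subtrees whose root names also match can collapse; ignoring the name lets, for example, every leaf share one node, which is how the same data drops from 127 nodes to 48.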
Files changed:

* .gitignore
* huffman.rb
* Makefile
* thing.rb
* unpack.c