tar-split

Pristinely disassemble a tar archive, stashing the raw bytes and offsets needed to reassemble an archive that validates against the original.

Docs

Code API for the libraries provided by tar-split: the tar/asm, tar/storage, and archive/tar packages (see their godoc).

Install

The command line utility is installable via:

go get github.com/vbatts/tar-split/cmd/tar-split

Usage

For CLI usage, see the cmd/tar-split README.md. For library usage, see the docs above.
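
As a rough illustration of the library flow, here is a minimal sketch of disassembly and reassembly. The asm and storage calls (NewInputTarStream, NewOutputTarStream, NewJSONPacker, NewJSONUnpacker, NewPathFileGetter, NewDiscardFilePutter) and their signatures reflect my reading of the godoc and may need adjusting; the file paths are illustrative assumptions.

package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"

	"github.com/vbatts/tar-split/tar/asm"
	"github.com/vbatts/tar-split/tar/storage"
)

func main() {
	// Disassemble: read ./archive.tar, record metadata to ./tar-data.json.gz,
	// and discard the file payloads (assumed to be extracted separately).
	tarFile, err := os.Open("./archive.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer tarFile.Close()

	metaFile, err := os.Create("./tar-data.json.gz")
	if err != nil {
		log.Fatal(err)
	}
	gz := gzip.NewWriter(metaFile)

	// The returned reader yields the original tar bytes unchanged, while
	// metadata goes to the Packer and file payloads go to the FilePutter.
	rdr, err := asm.NewInputTarStream(tarFile, storage.NewJSONPacker(gz), storage.NewDiscardFilePutter())
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(io.Discard, rdr); err != nil {
		log.Fatal(err)
	}
	if err := gz.Close(); err != nil {
		log.Fatal(err)
	}
	if err := metaFile.Close(); err != nil {
		log.Fatal(err)
	}

	// Reassemble: combine the stored metadata with the extracted file
	// payloads (rooted at ./extracted) to reproduce the original archive.
	metaFile, err = os.Open("./tar-data.json.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer metaFile.Close()
	gzr, err := gzip.NewReader(metaFile)
	if err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("./reassembled.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	tarStream := asm.NewOutputTarStream(storage.NewPathFileGetter("./extracted"), storage.NewJSONUnpacker(gzr))
	defer tarStream.Close()
	if _, err := io.Copy(out, tarStream); err != nil {
		log.Fatal(err)
	}
}

Because the disassembled reader passes the original bytes through untouched, the archive can still be extracted or checksummed in the same pass that records the metadata.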

Demo

Basic disassembly and assembly

This demonstrates the tar-split command and how to assemble a tar archive from the tar-data.json.gz metadata.

(YouTube video: basic command demo)

Docker layer preservation

This demonstrates the tar-split integration for docker-1.8, providing consistent tar archives for the image layer content.

(YouTube video: docker layer checksums)

Caveat

Eventually this should detect tar archives for which exact reassembly is not possible.

For example, stored sparse files that have "holes" in them will be read as a contiguous file, though the archive contents may be recorded in a sparse format. Therefore, to achieve identical output when adding the file payload back to a reassembled tar, the payload would need to be precisely re-sparsified. This is not something I seek to fix immediately; I would rather have an alert that precise reassembly is not possible. (See http://www.gnu.org/software/tar/manual/html_node/Sparse-Formats.html for more.)

Another caveat: while tar archives support multiple file entries for the same path, we do not support this feature. If there is more than one entry with the same path, expect an error (like ErrDuplicatePath) or a resulting tar stream that does not validate against your original checksum/signature.
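
If you want to catch this case programmatically during disassembly, a minimal sketch (assuming the error is exposed as storage.ErrDuplicatePath and surfaces while consuming the disassembled stream) could look like:

package tarsplitexample

import (
	"errors"
	"fmt"
	"io"

	"github.com/vbatts/tar-split/tar/asm"
	"github.com/vbatts/tar-split/tar/storage"
)

// disasm records tar metadata from r into p while discarding file payloads,
// and reports archives that contain duplicate paths. Sketch only: the
// ErrDuplicatePath sentinel and where it surfaces are assumptions.
func disasm(r io.Reader, p storage.Packer) error {
	rdr, err := asm.NewInputTarStream(r, p, storage.NewDiscardFilePutter())
	if err != nil {
		return err
	}
	if _, err := io.Copy(io.Discard, rdr); err != nil {
		if errors.Is(err, storage.ErrDuplicatePath) {
			return fmt.Errorf("archive has multiple entries for one path, exact reassembly is not guaranteed: %w", err)
		}
		return err
	}
	return nil
}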

Contract

Do not break the API of the stdlib archive/tar in our fork (ideally, find an upstream-mergeable solution).

Std Version

The vendored golang stdlib archive/tar is from go1.4.1, plus changes from their master branch around a9dddb53f. It is minimally extended to expose the raw bytes of the tar stream, rather than just the marshalled headers and file stream.
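
As a hedged sketch of what that extension looks like in use (the RawAccounting field and RawBytes() method are my reading of this fork's additions; verify the names and reset semantics against the godoc):

package tarsplitexample

import (
	"io"

	"github.com/vbatts/tar-split/archive/tar"
)

// rawHeaders walks a tar stream and collects the raw bytes consumed around
// each header, which is roughly what the asm disassembler relies on.
func rawHeaders(r io.Reader) ([][]byte, error) {
	tr := tar.NewReader(r)
	tr.RawAccounting = true // assumed flag: ask the reader to retain the bytes it consumes

	var raws [][]byte
	for {
		_, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		// Assumed accessor: the raw bytes read by Next(), i.e. this entry's
		// header block(s) plus any padding left over from the previous entry.
		raws = append(raws, tr.RawBytes())

		// Drain the file payload so the next iteration starts at the
		// following header.
		if _, err := io.Copy(io.Discard, tr); err != nil {
			return nil, err
		}
	}
	return raws, nil
}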

Design

See the design.

Stored Metadata

Since the raw bytes of the headers and padding are stored, you may be wondering about the size implications. The headers are at least 512 bytes per file (sometimes more), there are at least 1024 null bytes at the end of the archive, and there is various padding in between. With a naive storage implementation, this makes the stored metadata grow linearly with the number of files.

First we'll get an archive to work with. For repeatability, we'll make an archive from what you've just cloned:

$ git archive --format=tar -o tar-split.tar HEAD .
$ go get github.com/vbatts/tar-split/cmd/tar-split
$ tar-split checksize ./tar-split.tar
inspecting "tar-split.tar" (size 210k)
 -- number of files: 50
 -- size of metadata uncompressed: 53k
 -- size of gzip compressed metadata: 3k

So, assuming you manage extraction of the archive yourself and reuse the file payloads from a relative path, the only additional storage cost can be as little as 3kb.
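
These numbers can also be approximated with the library itself. The following sketch mimics what tar-split checksize prints; the countingWriter helper is purely illustrative, and the asm/storage calls assume the signatures noted earlier:

package tarsplitexample

import (
	"compress/gzip"
	"fmt"
	"io"
	"os"

	"github.com/vbatts/tar-split/tar/asm"
	"github.com/vbatts/tar-split/tar/storage"
)

// countingWriter counts the bytes written through it (illustrative helper).
type countingWriter struct{ n int64 }

func (c *countingWriter) Write(p []byte) (int, error) {
	c.n += int64(len(p))
	return len(p), nil
}

// checkSize reports the uncompressed and gzip-compressed metadata sizes for
// the tar archive at path, discarding the file payloads.
func checkSize(path string) error {
	fh, err := os.Open(path)
	if err != nil {
		return err
	}
	defer fh.Close()

	var raw, compressed countingWriter
	gz := gzip.NewWriter(&compressed)

	// Write the JSON metadata twice: once plain (uncompressed size) and
	// once through gzip (compressed size).
	packer := storage.NewJSONPacker(io.MultiWriter(&raw, gz))

	rdr, err := asm.NewInputTarStream(fh, packer, storage.NewDiscardFilePutter())
	if err != nil {
		return err
	}
	if _, err := io.Copy(io.Discard, rdr); err != nil {
		return err
	}
	if err := gz.Close(); err != nil {
		return err
	}

	fmt.Printf(" -- size of metadata uncompressed: %dk\n", raw.n/1024)
	fmt.Printf(" -- size of gzip compressed metadata: %dk\n", compressed.n/1024)
	return nil
}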

But let's look at a larger archive, with many files.

$ ls -sh ./d.tar
1.4G ./d.tar
$ tar-split checksize ~/d.tar 
inspecting "/home/vbatts/d.tar" (size 1420749k)
 -- number of files: 38718
 -- size of metadata uncompressed: 43261k
 -- size of gzip compressed metadata: 2251k

Here, an archive with 38,718 files has a compressed metadata footprint of about 2mb.

Folding in the null bytes at the end of the archive, we can estimate a per-file rate for the storage overhead:

uncompressed: ~ 1kb per file
compressed:   ~ 0.06kb per file

(As a quick check against the larger archive above: 38,718 files × ~0.06kb ≈ 2.3mb, in line with the 2251k measured.)

What's Next?

  • More implementations of storage Packer and Unpacker
  • More implementations of FileGetter and FilePutter (a sketch of a custom FilePutter follows this list)
  • It would be interesting to have an assembler stream that implements io.Seeker
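
For example, a custom FilePutter that discards payloads but enforces a per-file size limit might look like the sketch below. The Put signature (size, checksum, error) and the crc64 checksum convention are assumptions based on the tar/storage docs and its built-in discard putter.

package tarsplitexample

import (
	"fmt"
	"hash/crc64"
	"io"
)

// sizeLimitPutter discards file payloads while computing a crc64 checksum,
// and rejects any file larger than max bytes.
type sizeLimitPutter struct {
	max int64
}

func (p sizeLimitPutter) Put(name string, r io.Reader) (int64, []byte, error) {
	crc := crc64.New(crc64.MakeTable(crc64.ISO))
	// Read at most max+1 bytes so oversized files are detected without
	// buffering the whole payload.
	n, err := io.Copy(crc, io.LimitReader(r, p.max+1))
	if err != nil {
		return n, nil, err
	}
	if n > p.max {
		return n, nil, fmt.Errorf("%q exceeds the %d byte limit", name, p.max)
	}
	return n, crc.Sum(nil), nil
}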

License

See LICENSE