Moved a defer up to a better spot.
Fixed TestUntarPathWithInvalidDest to actually fail for the right reason
Closes #18170
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fixes #16555
The original docker `cp` always copies the symlink itself instead of
its target; we now provide a '-L' option to allow docker to follow the
symlink to the real target.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
Fixes #17766
Previously, opaque directory whiteouts on non-native graphdrivers
depended on the file order: files added in the same layer before the
whiteout file `.wh..wh..opq` were also removed. If one of those files
happened to be a directory with subdirectories, calling chtimes on
those directories after unpack would fail the pull.
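A minimal sketch of the order-independent approach, with assumed types
and helper names rather than the real pkg/archive unpack code:
```go
// Hypothetical sketch: handle `.wh..wh..opq` regardless of its position in
// the tar stream by collecting opaque markers first, clearing lower-layer
// content for those directories, and only then writing this layer's files.
package layerdemo

import (
	"os"
	"path/filepath"
)

type layerEntry struct {
	name string // path inside the layer, e.g. "dir1/file"
	data []byte
}

func applyEntries(dest string, entries []layerEntry) error {
	opaqueDirs := map[string]bool{}
	files := []layerEntry{}
	for _, e := range entries {
		if filepath.Base(e.name) == ".wh..wh..opq" {
			opaqueDirs[filepath.Dir(e.name)] = true // record, don't act yet
			continue
		}
		files = append(files, e)
	}
	// Clear opaque directories before writing any files, so files added in
	// the same layer survive even if they precede the marker in the stream.
	for dir := range opaqueDirs {
		full := filepath.Join(dest, dir)
		if err := os.RemoveAll(full); err != nil {
			return err
		}
		if err := os.MkdirAll(full, 0o755); err != nil {
			return err
		}
	}
	for _, e := range files {
		full := filepath.Join(dest, e.name)
		if err := os.MkdirAll(filepath.Dir(full), 0o755); err != nil {
			return err
		}
		if err := os.WriteFile(full, e.data, 0o644); err != nil {
			return err
		}
	}
	return nil
}
```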
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
The race is between pools.Put, which calls buf.Reset, and exec.Cmd
doing io.Copy from the buffer; it caused a runtime crash, as
described in #16924:
```
docker-daemon cat the-tarball.xz | xz -d -c -q | docker-untar /path/to/... (aufs )
```
When the docker-untar side fails (e.g. trying to set an xattr on aufs,
or a broken tar), invokeUnpack is responsible for exhausting all input;
otherwise `xz` will block on write forever.
This change adds a receive-only channel to cmdStream and closes it to
signal that it is now safe to close the input stream.
In CmdStream, switching to Stdin / Stdout / Stderr keeps the code
simple: os/exec.Cmd spawns the goroutines and calls io.Copy
automatically.
Since CmdStream is only called within this file, it is renamed to
lowercase to mark it as private.
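A rough sketch of that shape (simplified, with assumed names; not the
actual archive.go code): exec.Cmd owns the pipe plumbing, and a channel
closed only after Wait() tells the caller when the buffers and the
input stream may safely be reclaimed.
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os/exec"
	"strings"
)

// cmdStream starts cmd with its stdin wired to input and returns the command's
// stdout as a stream, plus a channel that is closed only after cmd.Wait()
// returns -- i.e. after exec's internal io.Copy goroutines have finished.
func cmdStream(cmd *exec.Cmd, input io.Reader) (io.ReadCloser, <-chan struct{}, error) {
	cmd.Stdin = input
	pipeR, pipeW := io.Pipe()
	cmd.Stdout = pipeW
	var errBuf bytes.Buffer
	cmd.Stderr = &errBuf

	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	done := make(chan struct{})
	go func() {
		if err := cmd.Wait(); err != nil {
			pipeW.CloseWithError(fmt.Errorf("%v: %s", err, errBuf.String()))
		} else {
			pipeW.Close()
		}
		close(done) // nothing is reading input or writing pipeW anymore
	}()
	return pipeR, done, nil
}

func main() {
	out, done, err := cmdStream(exec.Command("cat"), strings.NewReader("hello\n"))
	if err != nil {
		panic(err)
	}
	data, _ := io.ReadAll(out)
	<-done // only after this is it safe to recycle the input buffer
	fmt.Print(string(data))
}
```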
```
[...]
INFO[0000] Docker daemon commit=0a8c2e3 execdriver=native-0.2 graphdriver=aufs version=1.8.2
DEBU[0006] Calling POST /build
INFO[0006] POST /v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&t=gentoo-x32&ulimits=null
DEBU[0008] [BUILDER] Cache miss
DEBU[0009] Couldn't untar /home/lib-docker-v1.8.2-tmp/tmp/docker-build316710953/stage3-x32-20151004.tar.xz to /home/lib-docker-v1.8.2-tmp/aufs/mnt/d909abb87150463939c13e8a349b889a72d9b14f0cfcab42a8711979be285537: Untar re-exec error: exit status 1: output: operation not supported
DEBU[0009] CopyFileWithTar(/home/lib-docker-v1.8.2-tmp/tmp/docker-build316710953/stage3-x32-20151004.tar.xz, /home/lib-docker-v1.8.2-tmp/aufs/mnt/d909abb87150463939c13e8a349b889a72d9b14f0cfcab42a8711979be285537/)
panic: runtime error: slice bounds out of range
goroutine 42 [running]:
bufio.(*Reader).fill(0xc208187800)
/usr/local/go/src/bufio/bufio.go:86 +0x2db
bufio.(*Reader).WriteTo(0xc208187800, 0x7ff39602d150, 0xc2083f11a0, 0x508000, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:449 +0x27e
io.Copy(0x7ff39602d150, 0xc2083f11a0, 0x7ff3960261f8, 0xc208187800, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:354 +0xb2
github.com/docker/docker/pkg/archive.func·006()
/go/src/github.com/docker/docker/pkg/archive/archive.go:817 +0x71
created by github.com/docker/docker/pkg/archive.CmdStream
/go/src/github.com/docker/docker/pkg/archive/archive.go:819 +0x1ec
goroutine 1 [chan receive]:
main.(*DaemonCli).CmdDaemon(0xc20809da30, 0xc20800a020, 0xd, 0xd, 0x0, 0x0)
/go/src/github.com/docker/docker/docker/daemon.go:289 +0x1781
reflect.callMethod(0xc208140090, 0xc20828fce0)
/usr/local/go/src/reflect/value.go:605 +0x179
reflect.methodValueCall(0xc20800a020, 0xd, 0xd, 0x1, 0xc208140090, 0x0, 0x0, 0xc208140090, 0x0, 0x45343f, ...)
/usr/local/go/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc208129fb0, 0xc20800a010, 0xe, 0xe, 0x0, 0x0)
/go/src/github.com/docker/docker/cli/cli.go:89 +0x38e
main.main()
/go/src/github.com/docker/docker/docker/docker.go:69 +0x428
goroutine 5 [syscall]:
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
/usr/local/go/src/os/signal/signal_unix.go:27 +0x35
```
Signed-off-by: Derek Ch <denc716@gmail.com>
All the go-lint work forced any existing "Uid" to become "UID", but
apparently the same rule wasn't applied to "Gid", so the stat package
ended up with calls named UID() and Gid().
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Adds support for the daemon to handle user namespace maps as a
per-daemon setting.
Support for handling uid/gid mapping is added to the builder, the
archive/unarchive packages and functions, and all graphdrivers (except
Windows); the test suite is updated to handle user namespace daemon
rootgraph changes.
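A minimal sketch of the remapping itself, with assumed types rather
than the daemon's actual ID-mapping helpers:
```go
// Hypothetical sketch: translate a container uid/gid to the host value using
// a single subordinate-ID range, the way a user-namespaced daemon has to do
// for every entry it unpacks onto its root graph.
package main

import "fmt"

type idMapEntry struct {
	ContainerID int // first ID inside the namespace
	HostID      int // first ID on the host
	Size        int // length of the contiguous range
}

func toHost(containerID int, maps []idMapEntry) (int, error) {
	for _, m := range maps {
		if containerID >= m.ContainerID && containerID < m.ContainerID+m.Size {
			return m.HostID + (containerID - m.ContainerID), nil
		}
	}
	return -1, fmt.Errorf("no host mapping for id %d", containerID)
}

func main() {
	maps := []idMapEntry{{ContainerID: 0, HostID: 100000, Size: 65536}}
	uid, _ := toHost(0, maps)  // container root
	gid, _ := toHost(33, maps) // e.g. www-data inside the namespace
	fmt.Println(uid, gid)      // 100000 100033
}
```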
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
This fixes the case where a directory is removed in aufs and then the
same layer is imported into a different graphdriver.
Currently, when you do `rm -rf /foo && mkdir /foo` in a layer on aufs,
the files under `foo` would only be hidden on aufs.
The problems with this fix:
1) When a new diff is recreated from a non-aufs driver, the `opq`
files will not be there. This should not mean layer differences for
the user, but the tar content differs (one would have a single `opq`
file, the others a `.wh.*` entry for every file inside that folder).
This difference also only happens if the tar-split file isn't stored
for the layer.
2) New files whose names sort before `.wh..wh..opq` do not get picked
up by non-aufs graphdrivers. Fixing this would require a bigger
refactoring that is planned for the future.
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
[pkg/archive] Update archive/copy path handling
- Remove unused TarOptions.Name field.
- Add new TarOptions.RebaseNames field.
- Update some of the logic around path dir/base splitting.
- Update some of the logic behind archive entry name rebasing.
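A small sketch of what the rebasing amounts to (assumed helper, not the
actual RebaseNames code): rewrite the leading path component of each
archive entry so the content extracts under the destination's name.
```go
package main

import (
	"fmt"
	"strings"
)

// rebaseEntryName replaces the archive entry's leading component oldBase with
// newBase, leaving unrelated entries untouched.
func rebaseEntryName(name, oldBase, newBase string) string {
	if name == oldBase || strings.HasPrefix(name, oldBase+"/") {
		return newBase + strings.TrimPrefix(name, oldBase)
	}
	return name
}

func main() {
	fmt.Println(rebaseEntryName("src/a/b.txt", "src", "dst")) // dst/a/b.txt
	fmt.Println(rebaseEntryName("other.txt", "src", "dst"))   // other.txt
}
```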
[api/types] Add LinkTarget field to PathStat
[daemon] Fix stat, archive, extract of symlinks
These operations *should* resolve symlinks that are in the path, but if
the resource itself is a symlink then it *should not* be resolved. This
patch puts this logic into a common function `resolvePath`, which
resolves symlinks of the path's directory in the scope of the container
rootfs but does not resolve the final element of the path. Now archive,
extract, and stat operations will return symlinks if the path is indeed
a symlink.
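The gist of that rule, as a standalone sketch (the real resolvePath
additionally scopes everything to the container's rootfs, which is
omitted here, and the helper name below is assumed):
```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveAllButLast resolves symlinks in the directory portion of path but
// deliberately leaves the final element alone, so a trailing symlink is
// returned as the symlink itself rather than as its target.
func resolveAllButLast(path string) (string, error) {
	dir, base := filepath.Dir(path), filepath.Base(path)
	resolvedDir, err := filepath.EvalSymlinks(dir)
	if err != nil {
		return "", err
	}
	return filepath.Join(resolvedDir, base), nil
}

func main() {
	p, err := resolveAllButLast("/etc/hostname")
	fmt.Println(p, err)
}
```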
[api/client] Update cp path handling
[docs/reference/api] Update description of stat
Add the linkTarget field to the header of the archive endpoint.
Remove path field.
[integration-cli] Fix/Add cp symlink test cases
Copying a symlink should do just that: copy the symlink NOT
copy the target of the symlink. Also, the resulting file from
the copy should have the name of the symlink NOT the name of
the target file.
Copying to a symlink should copy to the symlink target and not
modify the symlink itself.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
TL;DR: checking for IsExist(err) after a failed MkdirAll() is both
redundant and wrong -- so two reasons to remove it.
Quoting MkdirAll documentation:
> MkdirAll creates a directory named path, along with any necessary
> parents, and returns nil, or else returns an error. If path
> is already a directory, MkdirAll does nothing and returns nil.
This means two things:
1. If a directory to be created already exists, no error is returned.
2. If the error returned is IsExist (EEXIST), it means there exists
a non-directory with the same name that MkdirAll needs to use as a
directory. Example: we want to MkdirAll("a/b"), but file "a"
(or "a/b") already exists, so MkdirAll fails.
The above is a theory, based on quoted documentation and my UNIX
knowledge.
3. In practice, though, the current MkdirAll implementation [1] returns
ENOTDIR in most of the cases described in #2, the exception being when
there is a race between MkdirAll and someone else creating the
last component of the MkdirAll argument as a file. In this very case
MkdirAll() will indeed return EEXIST.
Because of #1, the IsExist check after MkdirAll is not needed.
Because of #2 and #3, ignoring the IsExist error is just plain wrong,
as the directory we require has not been created. It's cleaner to
report the error now.
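A tiny illustration of the point (not code from the tree): when a path
component is a regular file, the IsExist check hides nothing useful,
and swallowing EEXIST would only mask the racy case where the directory
was never created.
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// A regular file where we want a directory.
	if err := os.WriteFile("a", []byte("not a dir"), 0o644); err != nil {
		panic(err)
	}
	defer os.Remove("a")

	err := os.MkdirAll("a/b", 0o755)
	// Wrong: the extra condition is either dead (ENOTDIR is not IsExist) or,
	// in the racy EEXIST case, silently skips a directory that doesn't exist.
	if err != nil && !os.IsExist(err) {
		fmt.Println("redundant check still reports:", err)
	}
	// Right: simply propagate whatever MkdirAll returns.
	if err != nil {
		fmt.Println("plain check reports:", err)
	}
}
```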
Note this mistake is all over the tree, I guess due to copy-paste,
or trying to follow the same usage pattern as for Mkdir(),
or some not quite correct examples on the Internet.
[v2: a separate aufs commit is merged into this one]
[1] https://github.com/golang/go/blob/f9ed2f75/src/os/path.go
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
In `ApplyLayer` and `Untar`, the stream is magically decompressed. Since
this behavior cannot be toggled, rather than break this ./pkg/ API, add
`ApplyUncompressedLayer` and `UntarUncompressed` variants that do not
magically decompress the layer stream.
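A sketch of the distinction using only the standard library (the real
pkg/archive helpers take more options and differ in signature): the
plain variants sniff and decompress the stream for the caller, while an
"Uncompressed" variant expects an already-decompressed tar. The input
file name below is hypothetical.
```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"os"
)

// untarUncompressed reads a plain, already-decompressed tar stream; extraction
// of each entry is omitted in this sketch.
func untarUncompressed(r io.Reader) error {
	tr := tar.NewReader(r)
	for {
		_, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	f, err := os.Open("layer.tar.gz") // hypothetical input path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	gz, err := gzip.NewReader(f) // the caller decompresses explicitly...
	if err != nil {
		panic(err)
	}
	defer gz.Close()
	if err := untarUncompressed(gz); err != nil { // ...the helper does not
		panic(err)
	}
}
```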
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Adds TarResource and CopyTo functions to be used for creating
archives for the new `docker cp` behavior.
Adds multiple test cases for the CopyFrom and CopyTo
functions in the pkg/archive package.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
The "TestChangesWithChanges" case randomlly fails on my development
VM with the following errors:
```
--- FAIL: TestChangesWithChanges (0.00s)
changes_test.go:201: no change for expected change C /dir1/subfolder != A /dir1/subfolder/newFile
```
If I apply the following patch to changes_test.go, the test passes.
```diff
diff --git a/pkg/archive/changes_test.go b/pkg/archive/changes_test.go
index 290b2dd..ba1aca0 100644
--- a/pkg/archive/changes_test.go
+++ b/pkg/archive/changes_test.go
@@ -156,6 +156,7 @@ func TestChangesWithChanges(t *testing.T) {
}
defer os.RemoveAll(layer)
createSampleDir(t, layer)
+ time.Sleep(5 * time.Millisecond)
os.MkdirAll(path.Join(layer, "dir1/subfolder"), 0740)
// Let's modify modtime for dir1 to be sure it's the same for the two layer (to not having false positive)
```
It seems that if a file is created immediately after the directory is
created, the `archive.Changes` function couldn't recognize that the
parent directory of the new file was modified.
Perhaps the problem only reproduces on machines with low timestamp
precision? I successfully reproduced the failure on my development VM
as well as on a VM on DigitalOcean.
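A toy illustration of the suspected cause, under the assumption that
the filesystem (or the change-detection comparison) only has one-second
mtime resolution:
```go
package main

import (
	"fmt"
	"time"
)

// sameCoarseTime stands in for an mtime comparison with one-second
// granularity, as some filesystems effectively have.
func sameCoarseTime(a, b time.Time) bool {
	return a.Unix() == b.Unix()
}

func main() {
	dirCreated := time.Unix(1444000000, 0)
	fileAddedRightAfter := dirCreated.Add(3 * time.Millisecond)
	// true: the parent directory's mtime bump is invisible, so the
	// "C /dir1/subfolder" change is never reported.
	fmt.Println(sameCoarseTime(dirCreated, fileAddedRightAfter))
}
```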
Signed-off-by: Shijiang Wei <mountkin@gmail.com>
This makes the "Buffering to disk" part of `docker push` 70% faster in
my use-case (having already applied #12833).
fsync'ing here serves no valuable purpose: if the drive's operation is
interrupted, so is the program's, and this archive has no value other
than the immediate and transient one.
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
If we tear through a few layers of abstraction, we can get at the inodes
contained in a directory without having to stat all the files. This
allows us to eliminate identical files much earlier in the changelist
generation process.
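A Linux-only sketch of the idea (a simplification with assumed names,
not the actual changes code): read raw dirents and pull the inode
numbers straight out of the kernel's buffer, so no per-file stat is
needed.
```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

// inodesOf returns the inode numbers of the entries in dir without calling
// stat/lstat on any of them, by parsing raw dirents from ReadDirent.
func inodesOf(dir string) ([]uint64, error) {
	fd, err := syscall.Open(dir, syscall.O_RDONLY|syscall.O_DIRECTORY, 0)
	if err != nil {
		return nil, err
	}
	defer syscall.Close(fd)

	var inodes []uint64
	buf := make([]byte, 8192)
	for {
		n, err := syscall.ReadDirent(fd, buf)
		if err != nil {
			return nil, err
		}
		if n == 0 {
			return inodes, nil
		}
		for off := 0; off < n; {
			dirent := (*syscall.Dirent)(unsafe.Pointer(&buf[off]))
			if dirent.Reclen == 0 {
				break
			}
			if dirent.Ino != 0 { // 0 marks a deleted entry on some filesystems
				inodes = append(inodes, dirent.Ino)
			}
			off += int(dirent.Reclen)
		}
	}
}

func main() {
	inos, err := inodesOf(".")
	fmt.Println(inos, err)
}
```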
Signed-off-by: Burke Libbey <burke@libbey.me>