This fixes the case where a directory is removed in
aufs and then the same layer is imported into a
different graphdriver.
Currently, when you do `rm -rf /foo && mkdir /foo`
in a layer on aufs, the files under `/foo` are only
hidden when the layer is used on aufs; other
graphdrivers would still show them.
The problems with this fix:
1) When a new diff is recreated from a non-aufs driver,
the `opq` files will not be there. This does not
change the layer contents from the user's point of
view, but the tar content differs (one tar would have
a single `opq` file, the others a `.wh.*` entry for
every file inside that folder). This difference also
only happens if the tar-split file isn't stored for
the layer.
2) New files whose names sort before `.wh..wh..opq`
do not get picked up by non-aufs graphdrivers.
Fixing this would require a bigger refactoring that
is planned for the future.
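For context, here is a minimal sketch of how the two whiteout styles read when a layer tar is unpacked (the `classify` helper is hypothetical, not the actual `pkg/archive` code): aufs marks an emptied directory with a single `.wh..wh..opq` entry, while the per-file style emits a `.wh.<name>` entry for every removed file.
```
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

const (
	whiteoutPrefix = ".wh."       // per-file whiteout: ".wh.<name>" hides one entry
	opaqueMarker   = ".wh..wh..opq" // aufs opaque marker: hides the whole directory
)

// classify reports what a single tar entry name means for the unpacked filesystem.
func classify(name string) string {
	base := filepath.Base(name)
	dir := filepath.Dir(name)
	switch {
	case base == opaqueMarker:
		return fmt.Sprintf("opaque dir: hide all lower-layer entries under %q", dir)
	case strings.HasPrefix(base, whiteoutPrefix):
		return fmt.Sprintf("whiteout: remove %q", filepath.Join(dir, strings.TrimPrefix(base, whiteoutPrefix)))
	default:
		return fmt.Sprintf("regular entry: create %q", name)
	}
}

func main() {
	for _, name := range []string{"foo/.wh..wh..opq", "foo/bar", "baz/.wh.old"} {
		fmt.Println(classify(name))
	}
}
```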
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Before this patch, libcontainer errored out with `invalid
argument` or `numerical result out of range` when trying to write
an invalid value to cpuset.cpus or cpuset.mems.
This patch adds validation to the --cpuset-cpus and --cpuset-mems flags, along with
validation against the system's available cpus/mems before starting a container.
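A rough sketch of the kind of validation this adds, assuming a cpuset list format like `0-2,7` (the helper names below are hypothetical, not the actual daemon code):
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCpuset expands a list such as "0-2,7" into the set {0,1,2,7}.
func parseCpuset(spec string) (map[int]bool, error) {
	out := map[int]bool{}
	if spec == "" {
		return out, nil
	}
	for _, part := range strings.Split(spec, ",") {
		if lo, hi, ok := strings.Cut(part, "-"); ok {
			start, err1 := strconv.Atoi(lo)
			end, err2 := strconv.Atoi(hi)
			if err1 != nil || err2 != nil || start > end {
				return nil, fmt.Errorf("invalid range %q", part)
			}
			for i := start; i <= end; i++ {
				out[i] = true
			}
		} else {
			n, err := strconv.Atoi(part)
			if err != nil {
				return nil, fmt.Errorf("invalid value %q", part)
			}
			out[n] = true
		}
	}
	return out, nil
}

func main() {
	// "0-3" stands in for whatever the system actually exposes.
	available, _ := parseCpuset("0-3")
	requested, err := parseCpuset("0-2,5")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for cpu := range requested {
		if !available[cpu] {
			fmt.Printf("requested CPU %d is not available\n", cpu)
		}
	}
}
```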
Signed-off-by: Antonio Murdaca <runcom@linux.com>
Absorb Swarm's discovery package in order to provide a common node
discovery mechanism to be used by both Swarm and networking code.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
This PR adds a "request ID" to each event generated, the 'docker events'
stream now looks like this:
```
2015-09-10T15:02:50.000000000-07:00 [reqid: c01e3534ddca] de7c5d4ca927253cf4e978ee9c4545161e406e9b5a14617efb52c658b249174a: (from ubuntu) create
```
Note the `[reqid: c01e3534ddca]` part; that's new.
Each HTTP request will generate its own unique ID, so if you do a
`docker build` you'll see a series of events all with the same reqID.
This allows log-processing tools to determine which events are
related to the same HTTP request.
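A minimal sketch of the idea, assuming context-based plumbing (the key type and helpers below are illustrative, not the daemon's actual API): the HTTP handler attaches a short unique ID to the request's context, and the event-logging code reads it back out.
```
package main

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

type ctxKey string

const requestIDKey ctxKey = "request-id"

// withRequestID attaches a freshly generated short ID to the request's context.
func withRequestID(ctx context.Context) context.Context {
	buf := make([]byte, 6)
	_, _ = rand.Read(buf)
	return context.WithValue(ctx, requestIDKey, hex.EncodeToString(buf))
}

// logEvent prefixes the event with the request ID, if one is present.
func logEvent(ctx context.Context, action, containerID string) {
	reqID, _ := ctx.Value(requestIDKey).(string)
	fmt.Printf("[reqid: %s] %s: %s\n", reqID, containerID, action)
}

func main() {
	ctx := withRequestID(context.Background())
	logEvent(ctx, "create", "de7c5d4ca927")
	logEvent(ctx, "start", "de7c5d4ca927") // same reqid as the create event
}
```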
I didn't propagate the context to all possible funcs in the daemon;
I decided to just do the ones that needed it in order to get the reqID
into the events. I'd like to have people review this direction first, and
if we're ok with it then I'll make sure we're consistent about when
we pass around the context - IOW, make sure that all funcs at the same level
have a context passed in even if they don't call the log funcs - this will
ensure we're consistent w/o passing it around for all calls unnecessarily.
ping @icecrime @calavera @crosbymichael
Signed-off-by: Doug Davis <dug@us.ibm.com>
The tests added cover the case where the Writer field returns an error
and, related to that, where the number of written bytes is less than 0.
Signed-off-by: Federico Gimenez <fgimenez@coit.es>
This provides both Time and TimeNano in the event. For the display of
the JSONMessage, use either, but prefer TimeNano. Providing only TimeNano
would break subscribers that are using the `Time` field, so both are set
for backwards compatibility.
The events logging uses nano formatting, but only provides a Unix()
time, therefore ordering may get lost in the output. Example:
```
2015-09-15T14:18:51.000000000-04:00 ee46febd64ac629f7de9cd8bf58582e6f263d97ff46896adc5b508db804682da: (from busybox) resize
2015-09-15T14:18:51.000000000-04:00 a78c9149b1c0474502a117efaa814541926c2ae6ec3c76607e1c931b84c3a44b: (from busybox) resize
```
By having a field just for Nano time, when set, the marshalling back to
`time.Unix(sec int64, nsec int64)` has zeros exactly where it needs to.
This does not break any existing use of jsonmessage.JSONMessage, but now
allows use of `UnixNano()` to get event formatting with a
distinguishable order. Example:
```
2015-09-15T15:37:23.810295632-04:00 6adcf8ed9f5f5ec059a915466cd1cde86a18b4a085fc3af405e9cc9fecbbbbaf: (from busybox) resize
2015-09-15T15:37:23.810412202-04:00 6b7c5bfdc3f902096f5a91e628f21bd4b56e32590c5b4b97044aafc005ddcb0d: (from busybox) resize
```
Includes tests for TimeNano and an updated event API reference doc.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Also, use the channel to determine if the broadcaster is closed,
removing the redundant isClosed variable.
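A small sketch of the pattern (not the actual Broadcaster code): a closed channel doubles as the closed flag, so no separate boolean is needed.
```
package main

import (
	"fmt"
	"sync"
)

type Broadcaster struct {
	closeOnce sync.Once
	closed    chan struct{}
}

func NewBroadcaster() *Broadcaster {
	return &Broadcaster{closed: make(chan struct{})}
}

// Close is safe to call more than once.
func (b *Broadcaster) Close() {
	b.closeOnce.Do(func() { close(b.closed) })
}

// isClosed checks the channel instead of keeping a separate boolean.
func (b *Broadcaster) isClosed() bool {
	select {
	case <-b.closed:
		return true
	default:
		return false
	}
}

func main() {
	b := NewBroadcaster()
	fmt.Println(b.isClosed()) // false
	b.Close()
	fmt.Println(b.isClosed()) // true
}
```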
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Before, this only waited for the download to complete. There was no
guarantee that the layer had been registered in the graph and was ready
to use. This is especially problematic with v2 pulls, which wait for all
downloads before extracting layers.
Change Broadcaster to allow an error value to be propagated from Close
to the waiters.
Make the wait stop when the extraction is finished, rather than just the
download.
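A rough sketch of the waiter-side behaviour described above (names are assumptions, not the real Broadcaster API): `Wait` blocks until the broadcaster is closed and returns whatever error was passed on close, so callers can wait for extraction and registration to finish, not just the download.
```
package main

import (
	"errors"
	"fmt"
	"sync"
)

type Broadcaster struct {
	mu     sync.Mutex
	cond   *sync.Cond
	closed bool
	err    error
}

func NewBroadcaster() *Broadcaster {
	b := &Broadcaster{}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// CloseWithError wakes every waiter and hands them err.
func (b *Broadcaster) CloseWithError(err error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.closed {
		return
	}
	b.closed = true
	b.err = err
	b.cond.Broadcast()
}

// Wait blocks until the broadcaster is closed and returns the close error.
func (b *Broadcaster) Wait() error {
	b.mu.Lock()
	defer b.mu.Unlock()
	for !b.closed {
		b.cond.Wait()
	}
	return b.err
}

func main() {
	b := NewBroadcaster()
	go b.CloseWithError(errors.New("layer extraction failed"))
	fmt.Println(b.Wait()) // every waiter sees the extraction error
}
```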
This also fixes v2 layer downloads to prefix the pool key with "layer:"
instead of "img:". "img:" is the wrong prefix, because this is what v1
uses for entire images. A v1 pull waiting for one of these operations to
finish would only wait for that particular layer, not all its
dependencies.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>