Build and install from GOPATH
* Rename 'vendor/src' -> 'vendor'
* Ignore vendor/ instead of vendor/src/ for lint
* Rename 'cmd/client' -> 'cmd/ocic' to make it 'go install'able
* Rename 'cmd/server' -> 'cmd/ocid' to make it 'go install'able
* Update Makefile to build and install from GOPATH
* Update tests to locate ocid/ocic in GOPATH/bin
* Search for binaries in GOPATH/bin instead of PATH
* Install tools using `go get -u`, so they are updated on each run

Signed-off-by: Jonathan Yu <jawnsy@redhat.com>
parent 9da2882d49 | commit 6c9628cdb1
1111 changed files with 70 additions and 61 deletions
vendor/github.com/containers/image/LICENSE | 189 lines (generated, vendored, normal file)
@@ -0,0 +1,189 @@
                                 Apache License
                           Version 2.0, January 2004
                        https://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       https://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
vendor/github.com/containers/image/copy/compression.go | 61 lines (generated, vendored, normal file)
@@ -0,0 +1,61 @@
package copy

import (
    "bytes"
    "compress/bzip2"
    "compress/gzip"
    "errors"
    "io"

    "github.com/Sirupsen/logrus"
)

// decompressorFunc, given a compressed stream, returns the decompressed stream.
type decompressorFunc func(io.Reader) (io.Reader, error)

func gzipDecompressor(r io.Reader) (io.Reader, error) {
    return gzip.NewReader(r)
}
func bzip2Decompressor(r io.Reader) (io.Reader, error) {
    return bzip2.NewReader(r), nil
}
func xzDecompressor(r io.Reader) (io.Reader, error) {
    return nil, errors.New("Decompressing xz streams is not supported")
}

// compressionAlgos is an internal implementation detail of detectCompression
var compressionAlgos = map[string]struct {
    prefix       []byte
    decompressor decompressorFunc
}{
    "gzip":  {[]byte{0x1F, 0x8B, 0x08}, gzipDecompressor},                 // gzip (RFC 1952)
    "bzip2": {[]byte{0x42, 0x5A, 0x68}, bzip2Decompressor},                // bzip2 (decompress.c:BZ2_decompress)
    "xz":    {[]byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, xzDecompressor}, // xz (/usr/share/doc/xz/xz-file-format.txt)
}

// detectCompression returns a decompressorFunc if the input is recognized as a compressed format, nil otherwise.
// Because it consumes the start of input, other consumers must use the returned io.Reader instead to also read from the beginning.
func detectCompression(input io.Reader) (decompressorFunc, io.Reader, error) {
    buffer := [8]byte{}

    n, err := io.ReadAtLeast(input, buffer[:], len(buffer))
    if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
        // This is a “real” error. We could just ignore it this time, process the data we have, and hope that the source will report the same error again.
        // Instead, fail immediately with the original error cause instead of a possibly secondary/misleading error returned later.
        return nil, nil, err
    }

    var decompressor decompressorFunc
    for name, algo := range compressionAlgos {
        if bytes.HasPrefix(buffer[:n], algo.prefix) {
            logrus.Debugf("Detected compression format %s", name)
            decompressor = algo.decompressor
            break
        }
    }
    if decompressor == nil {
        logrus.Debugf("No compression detected")
    }

    return decompressor, io.MultiReader(bytes.NewReader(buffer[:n]), input), nil
}
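For orientation, a minimal in-package sketch (not part of the vendored file; it assumes extra "compress/gzip" and "io/ioutil" imports) showing how the magic-byte detection above composes with the returned reader:

// Hypothetical sketch: round-trip a gzip stream through detectCompression.
func exampleDetectCompression() ([]byte, error) {
    var buf bytes.Buffer
    zw := gzip.NewWriter(&buf)
    if _, err := zw.Write([]byte("hello layer")); err != nil {
        return nil, err
    }
    if err := zw.Close(); err != nil {
        return nil, err
    }

    // The gzip magic bytes 0x1F 0x8B 0x08 match the "gzip" entry above.
    decompressor, reader, err := detectCompression(&buf)
    if err != nil {
        return nil, err
    }
    if decompressor != nil {
        if reader, err = decompressor(reader); err != nil {
            return nil, err
        }
    }
    return ioutil.ReadAll(reader) // "hello layer"
}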
vendor/github.com/containers/image/copy/copy.go | 568 lines (generated, vendored, normal file)
@@ -0,0 +1,568 @@
package copy

import (
    "bytes"
    "compress/gzip"
    "errors"
    "fmt"
    "io"
    "io/ioutil"
    "reflect"

    pb "gopkg.in/cheggaaa/pb.v1"

    "github.com/Sirupsen/logrus"
    "github.com/containers/image/image"
    "github.com/containers/image/manifest"
    "github.com/containers/image/signature"
    "github.com/containers/image/transports"
    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
)

// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}

type digestingReader struct {
    source           io.Reader
    digester         digest.Digester
    expectedDigest   digest.Digest
    validationFailed bool
}

// imageCopier allows us to keep track of diffID values for blobs, and other
// data, that we're copying between images, and cache other information that
// might allow us to take some shortcuts
type imageCopier struct {
    copiedBlobs       map[digest.Digest]digest.Digest
    cachedDiffIDs     map[digest.Digest]digest.Digest
    manifestUpdates   *types.ManifestUpdateOptions
    dest              types.ImageDestination
    src               types.Image
    rawSource         types.ImageSource
    diffIDsAreNeeded  bool
    canModifyManifest bool
    reportWriter      io.Writer
}

// newDigestingReader returns an io.Reader implementation with contents of source, which will eventually return a non-EOF error
// and set validationFailed to true if the source stream does not match expectedDigest.
func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
    if err := expectedDigest.Validate(); err != nil {
        return nil, fmt.Errorf("Invalid digest specification %s", expectedDigest)
    }
    digestAlgorithm := expectedDigest.Algorithm()
    if !digestAlgorithm.Available() {
        return nil, fmt.Errorf("Invalid digest specification %s: unsupported digest algorithm %s", expectedDigest, digestAlgorithm)
    }
    return &digestingReader{
        source:           source,
        digester:         digestAlgorithm.New(),
        expectedDigest:   expectedDigest,
        validationFailed: false,
    }, nil
}

func (d *digestingReader) Read(p []byte) (int, error) {
    n, err := d.source.Read(p)
    if n > 0 {
        if n2, err := d.digester.Hash().Write(p[:n]); n2 != n || err != nil {
            // Coverage: This should not happen, the hash.Hash interface requires
            // d.digest.Write to never return an error, and the io.Writer interface
            // requires n2 == len(input) if no error is returned.
            return 0, fmt.Errorf("Error updating digest during verification: %d vs. %d, %v", n2, n, err)
        }
    }
    if err == io.EOF {
        actualDigest := d.digester.Digest()
        if actualDigest != d.expectedDigest {
            d.validationFailed = true
            return 0, fmt.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
        }
    }
    return n, err
}
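As a quick illustration of the reader above, a hypothetical in-package sketch (not part of the vendored file) that relies only on imports already present in copy.go; the digest comparison fires when Read reaches EOF:

// Hypothetical sketch: verify a blob against its expected digest while reading it.
func exampleVerifyBlob(blob []byte) error {
    expected, err := digest.Canonical.FromReader(bytes.NewReader(blob))
    if err != nil {
        return err
    }
    dr, err := newDigestingReader(bytes.NewReader(blob), expected)
    if err != nil {
        return err
    }
    // Draining through EOF triggers the comparison; a mismatch surfaces as a
    // non-EOF error from Read and also sets dr.validationFailed.
    if _, err := io.Copy(ioutil.Discard, dr); err != nil {
        return err
    }
    return nil
}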
// Options allows supplying non-default configuration modifying the behavior of Image.
type Options struct {
    RemoveSignatures bool   // Remove any pre-existing signatures. SignBy will still add a new signature.
    SignBy           string // If non-empty, asks for a signature to be added during the copy, and specifies a key ID, as accepted by signature.NewGPGSigningMechanism().SignDockerManifest().
    ReportWriter     io.Writer
    SourceCtx        *types.SystemContext
    DestinationCtx   *types.SystemContext
}

// Image copies an image from srcRef to destRef, using policyContext to validate source image admissibility.
func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageReference, options *Options) error {
    reportWriter := ioutil.Discard
    if options != nil && options.ReportWriter != nil {
        reportWriter = options.ReportWriter
    }
    writeReport := func(f string, a ...interface{}) {
        fmt.Fprintf(reportWriter, f, a...)
    }

    dest, err := destRef.NewImageDestination(options.DestinationCtx)
    if err != nil {
        return fmt.Errorf("Error initializing destination %s: %v", transports.ImageName(destRef), err)
    }
    defer dest.Close()
    destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()

    rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
    if err != nil {
        return fmt.Errorf("Error initializing source %s: %v", transports.ImageName(srcRef), err)
    }
    unparsedImage := image.UnparsedFromSource(rawSource)
    defer func() {
        if unparsedImage != nil {
            unparsedImage.Close()
        }
    }()

    // Please keep this policy check BEFORE reading any other information about the image.
    if allowed, err := policyContext.IsRunningImageAllowed(unparsedImage); !allowed || err != nil { // Be paranoid and fail if either return value indicates so.
        return fmt.Errorf("Source image rejected: %v", err)
    }
    src, err := image.FromUnparsedImage(unparsedImage)
    if err != nil {
        return fmt.Errorf("Error initializing image from source %s: %v", transports.ImageName(srcRef), err)
    }
    unparsedImage = nil
    defer src.Close()

    if src.IsMultiImage() {
        return fmt.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
    }

    var sigs [][]byte
    if options != nil && options.RemoveSignatures {
        sigs = [][]byte{}
    } else {
        writeReport("Getting image source signatures\n")
        s, err := src.Signatures()
        if err != nil {
            return fmt.Errorf("Error reading signatures: %v", err)
        }
        sigs = s
    }
    if len(sigs) != 0 {
        writeReport("Checking if image destination supports signatures\n")
        if err := dest.SupportsSignatures(); err != nil {
            return fmt.Errorf("Can not copy signatures: %v", err)
        }
    }

    canModifyManifest := len(sigs) == 0
    manifestUpdates := types.ManifestUpdateOptions{}

    if err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest); err != nil {
        return err
    }

    // If src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates) will be true, it needs to be true by the time we get here.
    ic := imageCopier{
        copiedBlobs:       make(map[digest.Digest]digest.Digest),
        cachedDiffIDs:     make(map[digest.Digest]digest.Digest),
        manifestUpdates:   &manifestUpdates,
        dest:              dest,
        src:               src,
        rawSource:         rawSource,
        diffIDsAreNeeded:  src.UpdatedImageNeedsLayerDiffIDs(manifestUpdates),
        canModifyManifest: canModifyManifest,
        reportWriter:      reportWriter,
    }

    if err := ic.copyLayers(); err != nil {
        return err
    }

    pendingImage := src
    if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{InformationOnly: manifestUpdates.InformationOnly}) {
        if !canModifyManifest {
            return fmt.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
        }
        manifestUpdates.InformationOnly.Destination = dest
        pendingImage, err = src.UpdatedImage(manifestUpdates)
        if err != nil {
            return fmt.Errorf("Error creating an updated image manifest: %v", err)
        }
    }
    manifest, _, err := pendingImage.Manifest()
    if err != nil {
        return fmt.Errorf("Error reading manifest: %v", err)
    }

    if err := ic.copyConfig(pendingImage); err != nil {
        return err
    }

    if options != nil && options.SignBy != "" {
        mech, err := signature.NewGPGSigningMechanism()
        if err != nil {
            return fmt.Errorf("Error initializing GPG: %v", err)
        }
        dockerReference := dest.Reference().DockerReference()
        if dockerReference == nil {
            return fmt.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
        }

        writeReport("Signing manifest\n")
        newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
        if err != nil {
            return fmt.Errorf("Error creating signature: %v", err)
        }
        sigs = append(sigs, newSig)
    }

    writeReport("Writing manifest to image destination\n")
    if err := dest.PutManifest(manifest); err != nil {
        return fmt.Errorf("Error writing manifest: %v", err)
    }

    writeReport("Storing signatures\n")
    if err := dest.PutSignatures(sigs); err != nil {
        return fmt.Errorf("Error writing signatures: %v", err)
    }

    if err := dest.Commit(); err != nil {
        return fmt.Errorf("Error committing the finished image: %v", err)
    }

    return nil
}
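A hypothetical caller sketch for the function above. signature.DefaultPolicy, signature.NewPolicyContext, and transports.ParseImageName are assumptions about the surrounding containers/image API at this vintage (plus an assumed "os" import and illustrative paths); treat it as orientation, not as part of this file:

// Hypothetical caller sketch (assumed helper names, not part of this file):
func exampleCopyImage() error {
    policy, err := signature.DefaultPolicy(nil) // assumed API
    if err != nil {
        return err
    }
    policyContext, err := signature.NewPolicyContext(policy) // assumed API
    if err != nil {
        return err
    }
    defer policyContext.Destroy()

    srcRef, err := transports.ParseImageName("dir:/tmp/src-image") // assumed API
    if err != nil {
        return err
    }
    destRef, err := transports.ParseImageName("dir:/tmp/dest-image") // assumed API
    if err != nil {
        return err
    }
    return Image(policyContext, destRef, srcRef, &Options{ReportWriter: os.Stdout})
}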
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
    srcInfos := ic.src.LayerInfos()
    destInfos := []types.BlobInfo{}
    diffIDs := []digest.Digest{}
    for _, srcLayer := range srcInfos {
        var (
            destInfo types.BlobInfo
            diffID   digest.Digest
            err      error
        )
        if ic.dest.AcceptsForeignLayerURLs() && len(srcLayer.URLs) != 0 {
            // DiffIDs are, currently, needed only when converting from schema1.
            // In which case src.LayerInfos will not have URLs because schema1
            // does not support them.
            if ic.diffIDsAreNeeded {
                return errors.New("getting DiffID for foreign layers is unimplemented")
            }
            destInfo = srcLayer
            fmt.Fprintf(ic.reportWriter, "Skipping foreign layer %q copy to %s\n", destInfo.Digest, ic.dest.Reference().Transport().Name())
        } else {
            destInfo, diffID, err = ic.copyLayer(srcLayer)
            if err != nil {
                return err
            }
        }
        destInfos = append(destInfos, destInfo)
        diffIDs = append(diffIDs, diffID)
    }
    ic.manifestUpdates.InformationOnly.LayerInfos = destInfos
    if ic.diffIDsAreNeeded {
        ic.manifestUpdates.InformationOnly.LayerDiffIDs = diffIDs
    }
    if layerDigestsDiffer(srcInfos, destInfos) {
        ic.manifestUpdates.LayerInfos = destInfos
    }
    return nil
}

// layerDigestsDiffer returns true iff the digests in a and b differ (ignoring sizes and possible other fields)
func layerDigestsDiffer(a, b []types.BlobInfo) bool {
    if len(a) != len(b) {
        return true
    }
    for i := range a {
        if a[i].Digest != b[i].Digest {
            return true
        }
    }
    return false
}

// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
    srcInfo := src.ConfigInfo()
    if srcInfo.Digest != "" {
        fmt.Fprintf(ic.reportWriter, "Copying config %s\n", srcInfo.Digest)
        configBlob, err := src.ConfigBlob()
        if err != nil {
            return fmt.Errorf("Error reading config blob %s: %v", srcInfo.Digest, err)
        }
        destInfo, err := ic.copyBlobFromStream(bytes.NewReader(configBlob), srcInfo, nil, false)
        if err != nil {
            return err
        }
        if destInfo.Digest != srcInfo.Digest {
            return fmt.Errorf("Internal error: copying uncompressed config blob %s changed digest to %s", srcInfo.Digest, destInfo.Digest)
        }
    }
    return nil
}

// diffIDResult contains both a digest value and an error from diffIDComputationGoroutine.
// We could also send the error through the pipeReader, but this more cleanly separates the copying of the layer and the DiffID computation.
type diffIDResult struct {
    digest digest.Digest
    err    error
}

// copyLayer copies a layer with srcInfo (with known Digest and possibly known Size) in src to dest, perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied layer, and a value for LayerDiffIDs if diffIDIsNeeded
func (ic *imageCopier) copyLayer(srcInfo types.BlobInfo) (types.BlobInfo, digest.Digest, error) {
    // Check if we already have a blob with this digest
    haveBlob, extantBlobSize, err := ic.dest.HasBlob(srcInfo)
    if err != nil && err != types.ErrBlobNotFound {
        return types.BlobInfo{}, "", fmt.Errorf("Error checking for blob %s at destination: %v", srcInfo.Digest, err)
    }
    // If we already have a cached diffID for this blob, we don't need to compute it
    diffIDIsNeeded := ic.diffIDsAreNeeded && (ic.cachedDiffIDs[srcInfo.Digest] == "")
    // If we already have the blob, and we don't need to recompute the diffID, then we might be able to avoid reading it again
    if haveBlob && !diffIDIsNeeded {
        // Check the blob sizes match, if we were given a size this time
        if srcInfo.Size != -1 && srcInfo.Size != extantBlobSize {
            return types.BlobInfo{}, "", fmt.Errorf("Error: blob %s is already present, but with size %d instead of %d", srcInfo.Digest, extantBlobSize, srcInfo.Size)
        }
        srcInfo.Size = extantBlobSize
        // Tell the image destination that this blob's delta is being applied again. For some image destinations, this can be faster than using GetBlob/PutBlob
        blobinfo, err := ic.dest.ReapplyBlob(srcInfo)
        if err != nil {
            return types.BlobInfo{}, "", fmt.Errorf("Error reapplying blob %s at destination: %v", srcInfo.Digest, err)
        }
        fmt.Fprintf(ic.reportWriter, "Skipping fetch of repeat blob %s\n", srcInfo.Digest)
        return blobinfo, ic.cachedDiffIDs[srcInfo.Digest], err
    }

    // Fallback: copy the layer, computing the diffID if we need to do so
    fmt.Fprintf(ic.reportWriter, "Copying blob %s\n", srcInfo.Digest)
    srcStream, srcBlobSize, err := ic.rawSource.GetBlob(srcInfo)
    if err != nil {
        return types.BlobInfo{}, "", fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
    }
    defer srcStream.Close()

    blobInfo, diffIDChan, err := ic.copyLayerFromStream(srcStream, types.BlobInfo{Digest: srcInfo.Digest, Size: srcBlobSize},
        diffIDIsNeeded)
    if err != nil {
        return types.BlobInfo{}, "", err
    }
    var diffIDResult diffIDResult // = {digest:""}
    if diffIDIsNeeded {
        diffIDResult = <-diffIDChan
        if diffIDResult.err != nil {
            return types.BlobInfo{}, "", fmt.Errorf("Error computing layer DiffID: %v", diffIDResult.err)
        }
        logrus.Debugf("Computed DiffID %s for layer %s", diffIDResult.digest, srcInfo.Digest)
        ic.cachedDiffIDs[srcInfo.Digest] = diffIDResult.digest
    }
    return blobInfo, diffIDResult.digest, nil
}

// copyLayerFromStream is an implementation detail of copyLayer; mostly providing a separate “defer” scope.
// It copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps compressing the stream if canCompress,
// and returns a complete blobInfo of the copied blob and perhaps a <-chan diffIDResult if diffIDIsNeeded, to be read by the caller.
func (ic *imageCopier) copyLayerFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
    diffIDIsNeeded bool) (types.BlobInfo, <-chan diffIDResult, error) {
    var getDiffIDRecorder func(decompressorFunc) io.Writer // = nil
    var diffIDChan chan diffIDResult

    err := errors.New("Internal error: unexpected panic in copyLayer") // For pipeWriter.CloseWithError below
    if diffIDIsNeeded {
        diffIDChan = make(chan diffIDResult, 1) // Buffered, so that sending a value after this or our caller has failed and exited does not block.
        pipeReader, pipeWriter := io.Pipe()
        defer func() { // Note that this is not the same as {defer pipeWriter.CloseWithError(err)}; we need err to be evaluated lazily.
            pipeWriter.CloseWithError(err) // CloseWithError(nil) is equivalent to Close()
        }()

        getDiffIDRecorder = func(decompressor decompressorFunc) io.Writer {
            // If this fails, e.g. because we have exited and due to pipeWriter.CloseWithError() above further
            // reading from the pipe has failed, we don’t really care.
            // We only read from diffIDChan if the rest of the flow has succeeded, and when we do read from it,
            // the return value includes an error indication, which we do check.
            //
            // If this never gets called, pipeReader will not be used anywhere, but pipeWriter will only be
            // closed above, so we are happy enough with both pipeReader and pipeWriter to just get collected by GC.
            go diffIDComputationGoroutine(diffIDChan, pipeReader, decompressor) // Closes pipeReader
            return pipeWriter
        }
    }
    blobInfo, err := ic.copyBlobFromStream(srcStream, srcInfo, getDiffIDRecorder, ic.canModifyManifest) // Sets err to nil on success
    return blobInfo, diffIDChan, err
    // We need the defer … pipeWriter.CloseWithError() to happen HERE so that the caller can block on reading from diffIDChan
}

// diffIDComputationGoroutine reads all input from layerStream, uncompresses using decompressor if necessary, and sends its digest, and status, if any, to dest.
func diffIDComputationGoroutine(dest chan<- diffIDResult, layerStream io.ReadCloser, decompressor decompressorFunc) {
    result := diffIDResult{
        digest: "",
        err:    errors.New("Internal error: unexpected panic in diffIDComputationGoroutine"),
    }
    defer func() { dest <- result }()
    defer layerStream.Close() // We do not care to bother the other end of the pipe with other failures; we send them to dest instead.

    result.digest, result.err = computeDiffID(layerStream, decompressor)
}

// computeDiffID reads all input from layerStream, uncompresses it using decompressor if necessary, and returns its digest.
func computeDiffID(stream io.Reader, decompressor decompressorFunc) (digest.Digest, error) {
    if decompressor != nil {
        s, err := decompressor(stream)
        if err != nil {
            return "", err
        }
        stream = s
    }

    return digest.Canonical.FromReader(stream)
}
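To make the DiffID flow concrete, a small hypothetical in-package sketch (not part of the vendored file): a layer's DiffID is the canonical digest of its uncompressed contents, so detectCompression and computeDiffID compose directly.

// Hypothetical sketch: DiffID = digest of the uncompressed layer stream.
func exampleDiffID(layerStream io.Reader) (digest.Digest, error) {
    decompressor, stream, err := detectCompression(layerStream)
    if err != nil {
        return "", err
    }
    // decompressor is nil for already-uncompressed input; computeDiffID
    // handles both cases.
    return computeDiffID(stream, decompressor)
}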
// copyBlobFromStream copies a blob with srcInfo (with known Digest and possibly known Size) from srcStream to dest,
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps compressing it if canCompress,
// and returns a complete blobInfo of the copied blob.
func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.BlobInfo,
    getOriginalLayerCopyWriter func(decompressor decompressorFunc) io.Writer,
    canCompress bool) (types.BlobInfo, error) {
    // The copying happens through a pipeline of connected io.Readers.
    // === Input: srcStream

    // === Process input through digestingReader to validate against the expected digest.
    // Be paranoid; in case PutBlob somehow managed to ignore an error from digestingReader,
    // use a separate validation failure indicator.
    // Note that we don't use a stronger "validationSucceeded" indicator, because
    // dest.PutBlob may detect that the layer already exists, in which case we don't
    // read stream to the end, and validation does not happen.
    digestingReader, err := newDigestingReader(srcStream, srcInfo.Digest)
    if err != nil {
        return types.BlobInfo{}, fmt.Errorf("Error preparing to verify blob %s: %v", srcInfo.Digest, err)
    }
    var destStream io.Reader = digestingReader

    // === Detect compression of the input stream.
    // This requires us to “peek ahead” into the stream to read the initial part, which requires us to chain through another io.Reader returned by detectCompression.
    decompressor, destStream, err := detectCompression(destStream) // We could skip this in some cases, but let's keep the code path uniform
    if err != nil {
        return types.BlobInfo{}, fmt.Errorf("Error reading blob %s: %v", srcInfo.Digest, err)
    }
    isCompressed := decompressor != nil

    // === Report progress using a pb.Reader.
    bar := pb.New(int(srcInfo.Size)).SetUnits(pb.U_BYTES)
    bar.Output = ic.reportWriter
    bar.SetMaxWidth(80)
    bar.ShowTimeLeft = false
    bar.ShowPercent = false
    bar.Start()
    destStream = bar.NewProxyReader(destStream)
    defer fmt.Fprint(ic.reportWriter, "\n")

    // === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
    var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
    if getOriginalLayerCopyWriter != nil {
        destStream = io.TeeReader(destStream, getOriginalLayerCopyWriter(decompressor))
        originalLayerReader = destStream
    }

    // === Compress the layer if it is uncompressed and compression is desired
    var inputInfo types.BlobInfo
    if !canCompress || isCompressed || !ic.dest.ShouldCompressLayers() {
        logrus.Debugf("Using original blob without modification")
        inputInfo = srcInfo
    } else {
        logrus.Debugf("Compressing blob on the fly")
        pipeReader, pipeWriter := io.Pipe()
        defer pipeReader.Close()

        // If this fails while writing data, it will do pipeWriter.CloseWithError(); if it fails otherwise,
        // e.g. because we have exited and due to pipeReader.Close() above further writing to the pipe has failed,
        // we don’t care.
        go compressGoroutine(pipeWriter, destStream) // Closes pipeWriter
        destStream = pipeReader
        inputInfo.Digest = ""
        inputInfo.Size = -1
    }

    // === Finally, send the layer stream to dest.
    uploadedInfo, err := ic.dest.PutBlob(destStream, inputInfo)
    if err != nil {
        return types.BlobInfo{}, fmt.Errorf("Error writing blob: %v", err)
    }

    // This is fairly horrible: the writer from getOriginalLayerCopyWriter wants to consume
    // all of the input (to compute DiffIDs), even if dest.PutBlob does not need it.
    // So, read everything from originalLayerReader, which will cause the rest to be
    // sent there if we are not already at EOF.
    if getOriginalLayerCopyWriter != nil {
        logrus.Debugf("Consuming rest of the original blob to satisfy getOriginalLayerCopyWriter")
        _, err := io.Copy(ioutil.Discard, originalLayerReader)
        if err != nil {
            return types.BlobInfo{}, fmt.Errorf("Error reading input blob %s: %v", srcInfo.Digest, err)
        }
    }

    if digestingReader.validationFailed { // Coverage: This should never happen.
        return types.BlobInfo{}, fmt.Errorf("Internal error writing blob %s, digest verification failed but was ignored", srcInfo.Digest)
    }
    if inputInfo.Digest != "" && uploadedInfo.Digest != inputInfo.Digest {
        return types.BlobInfo{}, fmt.Errorf("Internal error writing blob %s, blob with digest %s saved with digest %s", srcInfo.Digest, inputInfo.Digest, uploadedInfo.Digest)
    }
    return uploadedInfo, nil
}

// compressGoroutine reads all input from src and writes its compressed equivalent to dest.
func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
    err := errors.New("Internal error: unexpected panic in compressGoroutine")
    defer func() { // Note that this is not the same as {defer dest.CloseWithError(err)}; we need err to be evaluated lazily.
        dest.CloseWithError(err) // CloseWithError(nil) is equivalent to Close()
    }()

    zipper := gzip.NewWriter(dest)
    defer zipper.Close()

    _, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
}

// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) error {
    if len(destSupportedManifestMIMETypes) == 0 {
        return nil // Anything goes
    }
    supportedByDest := map[string]struct{}{}
    for _, t := range destSupportedManifestMIMETypes {
        supportedByDest[t] = struct{}{}
    }

    _, srcType, err := src.Manifest()
    if err != nil { // This should have been cached?!
        return fmt.Errorf("Error reading manifest: %v", err)
    }
    if _, ok := supportedByDest[srcType]; ok {
        logrus.Debugf("Manifest MIME type %s is declared supported by the destination", srcType)
        return nil
    }

    // OK, we should convert the manifest.
    if !canModifyManifest {
        logrus.Debugf("Manifest MIME type %s is not supported by the destination, but we can't modify the manifest, hoping for the best...", srcType)
        return nil // Take our chances - FIXME? Or should we fail without trying?
    }

    var chosenType = destSupportedManifestMIMETypes[0] // This one is known to be supported.
    for _, t := range preferredManifestMIMETypes {
        if _, ok := supportedByDest[t]; ok {
            chosenType = t
            break
        }
    }
    logrus.Debugf("Will convert manifest from MIME type %s to %s", srcType, chosenType)
    manifestUpdates.ManifestMIMEType = chosenType
    return nil
}
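A hypothetical in-package sketch (not part of the vendored file) of the preference logic just above:

// Hypothetical sketch: even when a destination lists v2s1-signed first, the
// preferredManifestMIMETypes order picks v2s2, provided the source type is
// not already supported and the manifest may be modified.
func exampleManifestConversion(src types.Image) (string, error) {
    updates := types.ManifestUpdateOptions{}
    destTypes := []string{
        manifest.DockerV2Schema1SignedMediaType,
        manifest.DockerV2Schema2MediaType,
    }
    if err := determineManifestConversion(&updates, src, destTypes, true); err != nil {
        return "", err
    }
    // "" means no conversion was requested (source type already supported).
    return updates.ManifestMIMEType, nil
}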
vendor/github.com/containers/image/directory/directory_dest.go | 135 lines (generated, vendored, normal file)
@@ -0,0 +1,135 @@
package directory

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"

    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
)

type dirImageDestination struct {
    ref dirReference
}

// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref dirReference) types.ImageDestination {
    return &dirImageDestination{ref}
}

// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *dirImageDestination) Reference() types.ImageReference {
    return d.ref
}

// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dirImageDestination) Close() {
}

func (d *dirImageDestination) SupportedManifestMIMETypes() []string {
    return nil
}

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dirImageDestination) SupportsSignatures() error {
    return nil
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dirImageDestination) ShouldCompressLayers() bool {
    return false
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
    return false
}

// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dirImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
    blobFile, err := ioutil.TempFile(d.ref.path, "dir-put-blob")
    if err != nil {
        return types.BlobInfo{}, err
    }
    succeeded := false
    defer func() {
        blobFile.Close()
        if !succeeded {
            os.Remove(blobFile.Name())
        }
    }()

    digester := digest.Canonical.New()
    tee := io.TeeReader(stream, digester.Hash())

    size, err := io.Copy(blobFile, tee)
    if err != nil {
        return types.BlobInfo{}, err
    }
    computedDigest := digester.Digest()
    if inputInfo.Size != -1 && size != inputInfo.Size {
        return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
    }
    if err := blobFile.Sync(); err != nil {
        return types.BlobInfo{}, err
    }
    if err := blobFile.Chmod(0644); err != nil {
        return types.BlobInfo{}, err
    }
    blobPath := d.ref.layerPath(computedDigest)
    if err := os.Rename(blobFile.Name(), blobPath); err != nil {
        return types.BlobInfo{}, err
    }
    succeeded = true
    return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}

func (d *dirImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
    if info.Digest == "" {
        return false, -1, fmt.Errorf("Can not check for a blob with unknown digest")
    }
    blobPath := d.ref.layerPath(info.Digest)
    finfo, err := os.Stat(blobPath)
    if err != nil && os.IsNotExist(err) {
        return false, -1, types.ErrBlobNotFound
    }
    if err != nil {
        return false, -1, err
    }
    return true, finfo.Size(), nil
}

func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
    return info, nil
}

func (d *dirImageDestination) PutManifest(manifest []byte) error {
    return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}

func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
    for i, sig := range signatures {
        if err := ioutil.WriteFile(d.ref.signaturePath(i), sig, 0644); err != nil {
            return err
        }
    }
    return nil
}

// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dirImageDestination) Commit() error {
    return nil
}
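A hypothetical in-package sketch (not part of the vendored file; assumes extra "bytes" and "fmt" imports, an existing target directory, and illustrative parameter names) showing the write path end to end:

// Hypothetical sketch: write a single-layer image into a directory.
func exampleWriteDir(path string, layer, manifestJSON []byte) error {
    ref, err := NewReference(path) // from directory_transport.go
    if err != nil {
        return err
    }
    dest, err := ref.NewImageDestination(nil)
    if err != nil {
        return err
    }
    defer dest.Close()

    // PutBlob computes the canonical digest itself; Size: -1 skips the size check.
    info, err := dest.PutBlob(bytes.NewReader(layer), types.BlobInfo{Size: -1})
    if err != nil {
        return err
    }
    fmt.Printf("stored blob %s (%d bytes)\n", info.Digest, info.Size) // lands as <hex>.tar

    if err := dest.PutManifest(manifestJSON); err != nil { // lands as manifest.json
        return err
    }
    return dest.Commit()
}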
vendor/github.com/containers/image/directory/directory_src.go | 74 lines (generated, vendored, normal file)
@@ -0,0 +1,74 @@
package directory

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"

    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
)

type dirImageSource struct {
    ref dirReference
}

// newImageSource returns an ImageSource reading from an existing directory.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ref dirReference) types.ImageSource {
    return &dirImageSource{ref}
}

// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *dirImageSource) Reference() types.ImageReference {
    return s.ref
}

// Close removes resources associated with an initialized ImageSource, if any.
func (s *dirImageSource) Close() {
}

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dirImageSource) GetManifest() ([]byte, string, error) {
    m, err := ioutil.ReadFile(s.ref.manifestPath())
    if err != nil {
        return nil, "", err
    }
    return m, manifest.GuessMIMEType(m), err
}

func (s *dirImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
    return nil, "", fmt.Errorf(`Getting target manifest not supported by "dir:"`)
}

// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
    r, err := os.Open(s.ref.layerPath(info.Digest))
    if err != nil {
        return nil, 0, err
    }
    fi, err := r.Stat()
    if err != nil {
        return nil, 0, err
    }
    return r, fi.Size(), nil
}

func (s *dirImageSource) GetSignatures() ([][]byte, error) {
    signatures := [][]byte{}
    for i := 0; ; i++ {
        signature, err := ioutil.ReadFile(s.ref.signaturePath(i))
        if err != nil {
            if os.IsNotExist(err) {
                break
            }
            return nil, err
        }
        signatures = append(signatures, signature)
    }
    return signatures, nil
}
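A hypothetical in-package sketch (not part of the vendored file; assumes the directory was populated as above and an extra "fmt" import) reading an image back:

// Hypothetical sketch: read back what directory_dest wrote.
func exampleReadDir(path string) error {
    ref, err := NewReference(path)
    if err != nil {
        return err
    }
    src, err := ref.NewImageSource(nil, nil)
    if err != nil {
        return err
    }
    defer src.Close()

    m, mimeType, err := src.GetManifest()
    if err != nil {
        return err
    }
    fmt.Printf("manifest (%s): %d bytes\n", mimeType, len(m))

    sigs, err := src.GetSignatures() // reads signature-1, signature-2, ... until one is missing
    if err != nil {
        return err
    }
    fmt.Printf("%d signatures\n", len(sigs))
    return nil
}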
vendor/github.com/containers/image/directory/directory_transport.go | 173 lines (generated, vendored, normal file)
|
@ -0,0 +1,173 @@
|
|||
package directory
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/containers/image/directory/explicitfilepath"
|
||||
"github.com/containers/image/docker/reference"
|
||||
"github.com/containers/image/image"
|
||||
"github.com/containers/image/types"
|
||||
"github.com/docker/distribution/digest"
|
||||
)
|
||||
|
||||
// Transport is an ImageTransport for directory paths.
|
||||
var Transport = dirTransport{}
|
||||
|
||||
type dirTransport struct{}
|
||||
|
||||
func (t dirTransport) Name() string {
|
||||
return "dir"
|
||||
}
|
||||
|
||||
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
|
||||
func (t dirTransport) ParseReference(reference string) (types.ImageReference, error) {
|
||||
return NewReference(reference)
|
||||
}
|
||||
|
||||
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
|
||||
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
|
||||
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
|
||||
// scope passed to this function will not be "", that value is always allowed.
|
||||
func (t dirTransport) ValidatePolicyConfigurationScope(scope string) error {
|
||||
if !strings.HasPrefix(scope, "/") {
|
||||
return fmt.Errorf("Invalid scope %s: Must be an absolute path", scope)
|
||||
}
|
||||
// Refuse also "/", otherwise "/" and "" would have the same semantics,
|
||||
// and "" could be unexpectedly shadowed by the "/" entry.
|
||||
if scope == "/" {
|
||||
return errors.New(`Invalid scope "/": Use the generic default scope ""`)
|
||||
}
|
||||
cleaned := filepath.Clean(scope)
|
||||
if cleaned != scope {
|
||||
return fmt.Errorf(`Invalid scope %s: Uses non-canonical format, perhaps try %s`, scope, cleaned)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// dirReference is an ImageReference for directory paths.
|
||||
type dirReference struct {
|
||||
// Note that the interpretation of paths below depends on the underlying filesystem state, which may change under us at any time!
|
||||
// Either of the paths may point to a different, or no, inode over time. resolvedPath may contain symbolic links, and so on.
|
||||
|
||||
// Generally we follow the intent of the user, and use the "path" member for filesystem operations (e.g. the user can use a relative path to avoid
|
||||
// being exposed to symlinks and renames in the parent directories to the working directory).
|
||||
// (But in general, we make no attempt to be completely safe against concurrent hostile filesystem modifications.)
|
||||
path string // As specified by the user. May be relative, contain symlinks, etc.
|
||||
resolvedPath string // Absolute path with no symlinks, at least at the time of its creation. Primarily used for policy namespaces.
|
||||
}
|
||||
|
||||
// There is no directory.ParseReference because it is rather pointless.
|
||||
// Callers who need a transport-independent interface will go through
|
||||
// dirTransport.ParseReference; callers who intentionally deal with directories
// can use directory.NewReference.

// NewReference returns a directory reference for a specified path.
//
// We do not expose an API supplying the resolvedPath; we could, but recomputing it
// is generally cheap enough that we prefer being confident about the properties of resolvedPath.
func NewReference(path string) (types.ImageReference, error) {
	resolved, err := explicitfilepath.ResolvePathToFullyExplicit(path)
	if err != nil {
		return nil, err
	}
	return dirReference{path: path, resolvedPath: resolved}, nil
}

func (ref dirReference) Transport() types.ImageTransport {
	return Transport
}

// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref dirReference) StringWithinTransport() string {
	return ref.path
}

// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref dirReference) DockerReference() reference.Named {
	return nil
}

// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref dirReference) PolicyConfigurationIdentity() string {
	return ref.resolvedPath
}

// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref dirReference) PolicyConfigurationNamespaces() []string {
	res := []string{}
	path := ref.resolvedPath
	for {
		lastSlash := strings.LastIndex(path, "/")
		if lastSlash == -1 || lastSlash == 0 {
			break
		}
		path = path[:lastSlash]
		res = append(res, path)
	}
	// Note that we do not include "/"; it is redundant with the default "" global default,
	// and rejected by dirTransport.ValidatePolicyConfigurationScope above.
	return res
}

// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
	src := newImageSource(ref)
	return image.FromSource(src)
}

// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref dirReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
	return newImageSource(ref), nil
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ref), nil
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref dirReference) DeleteImage(ctx *types.SystemContext) error {
	return fmt.Errorf("Deleting images not implemented for dir: images")
}

// manifestPath returns a path for the manifest within a directory using our conventions.
func (ref dirReference) manifestPath() string {
	return filepath.Join(ref.path, "manifest.json")
}

// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest digest.Digest) string {
	// FIXME: Should we keep the digest identification?
	return filepath.Join(ref.path, digest.Hex()+".tar")
}

// signaturePath returns a path for a signature within a directory using our conventions.
func (ref dirReference) signaturePath(index int) string {
	return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1))
}
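
The path-based policy namespaces above are easiest to see with a concrete input. A minimal sketch, assuming the vendored github.com/containers/image/directory package above is importable; the path used is hypothetical:

package main

import (
	"fmt"

	"github.com/containers/image/directory"
)

func main() {
	// NewReference resolves symlinks, so PolicyConfigurationIdentity()
	// is the fully explicit path, and the namespaces are its parents.
	ref, err := directory.NewReference("/var/lib/images/fedora") // hypothetical path
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.PolicyConfigurationIdentity())
	// For /var/lib/images/fedora the namespaces searched next would be
	// /var/lib/images, /var/lib, /var; "/" is omitted and "" is implicit at the end.
	fmt.Println(ref.PolicyConfigurationNamespaces())
}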
55 vendor/github.com/containers/image/directory/explicitfilepath/path.go generated vendored Normal file
@@ -0,0 +1,55 @@
package explicitfilepath

import (
	"fmt"
	"os"
	"path/filepath"
)

// ResolvePathToFullyExplicit returns the input path converted to an absolute, no-symlinks, cleaned up path.
// To do so, all elements of the input path must exist; as a special case, the final component may be
// a non-existent name (but not a symlink pointing to a non-existent name).
// This is intended as a helper for implementations of types.ImageReference.PolicyConfigurationIdentity etc.
func ResolvePathToFullyExplicit(path string) (string, error) {
	switch _, err := os.Lstat(path); {
	case err == nil:
		return resolveExistingPathToFullyExplicit(path)
	case os.IsNotExist(err):
		parent, file := filepath.Split(path)
		resolvedParent, err := resolveExistingPathToFullyExplicit(parent)
		if err != nil {
			return "", err
		}
		if file == "." || file == ".." {
			// Coverage: This can happen, but very rarely: if we have successfully resolved the parent, both "." and ".." in it should have been resolved as well.
			// This can still happen if there is a filesystem race condition, causing the Lstat() above to fail but the later resolution to succeed.
			// We do not care to promise anything if such filesystem race conditions can happen, but we definitely don't want to return "."/".." components
			// in the resulting path, and especially not at the end.
			return "", fmt.Errorf("Unexpectedly missing special filename component in %s", path)
		}
		resolvedPath := filepath.Join(resolvedParent, file)
		// As a sanity check, ensure that there are no "." or ".." components.
		cleanedResolvedPath := filepath.Clean(resolvedPath)
		if cleanedResolvedPath != resolvedPath {
			// Coverage: This should never happen.
			return "", fmt.Errorf("Internal inconsistency: Path %s resolved to %s still cleaned up to %s", path, resolvedPath, cleanedResolvedPath)
		}
		return resolvedPath, nil
	default: // err != nil, unrecognized
		return "", err
	}
}

// resolveExistingPathToFullyExplicit is the same as ResolvePathToFullyExplicit,
// but without the special case for missing final component.
func resolveExistingPathToFullyExplicit(path string) (string, error) {
	resolved, err := filepath.Abs(path)
	if err != nil {
		return "", err // Coverage: This can fail only if os.Getwd() fails.
	}
	resolved, err = filepath.EvalSymlinks(resolved)
	if err != nil {
		return "", err
	}
	return filepath.Clean(resolved), nil
}
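
The "final component may be missing" special case is what lets a destination reference point at a directory that does not exist yet. A minimal sketch, assuming the vendored package above is importable at this path; the paths are hypothetical:

package main

import (
	"fmt"

	"github.com/containers/image/directory/explicitfilepath"
)

func main() {
	// Works even if "new-image" does not exist yet, as long as /tmp/images does:
	// the parent is resolved (symlinks and all), then the final component is re-joined.
	p, err := explicitfilepath.ResolvePathToFullyExplicit("/tmp/images/new-image") // hypothetical path
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}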
312 vendor/github.com/containers/image/docker/daemon/daemon_dest.go generated vendored Normal file
@@ -0,0 +1,312 @@
package daemon

import (
	"archive/tar"
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"time"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/docker/reference"
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
	"github.com/docker/engine-api/client"
	"golang.org/x/net/context"
)

type daemonImageDestination struct {
	ref            daemonReference
	namedTaggedRef reference.NamedTagged // Strictly speaking redundant with ref above; having the field makes it structurally impossible for later users to fail.
	// For talking to imageLoadGoroutine
	goroutineCancel context.CancelFunc
	statusChannel   <-chan error
	writer          *io.PipeWriter
	tar             *tar.Writer
	// Other state
	committed bool                             // writer has been closed
	blobs     map[digest.Digest]types.BlobInfo // list of already-sent blobs
}

// newImageDestination returns a types.ImageDestination for the specified image reference.
func newImageDestination(systemCtx *types.SystemContext, ref daemonReference) (types.ImageDestination, error) {
	if ref.ref == nil {
		return nil, fmt.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
	}
	namedTaggedRef, ok := ref.ref.(reference.NamedTagged)
	if !ok {
		return nil, fmt.Errorf("Invalid destination docker-daemon:%s: a destination must be a name:tag", ref.StringWithinTransport())
	}

	c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
	if err != nil {
		return nil, fmt.Errorf("Error initializing docker engine client: %v", err)
	}

	reader, writer := io.Pipe()
	// Commit() may never be called, so we may never read from this channel; so, make this buffered to allow imageLoadGoroutine to write status and terminate even if we never read it.
	statusChannel := make(chan error, 1)

	ctx, goroutineCancel := context.WithCancel(context.Background())
	go imageLoadGoroutine(ctx, c, reader, statusChannel)

	return &daemonImageDestination{
		ref:             ref,
		namedTaggedRef:  namedTaggedRef,
		goroutineCancel: goroutineCancel,
		statusChannel:   statusChannel,
		writer:          writer,
		tar:             tar.NewWriter(writer),
		committed:       false,
		blobs:           make(map[digest.Digest]types.BlobInfo),
	}, nil
}

// imageLoadGoroutine accepts a tar stream on reader, sends it to c, and reports error or success by writing to statusChannel
func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeReader, statusChannel chan<- error) {
	err := errors.New("Internal error: unexpected panic in imageLoadGoroutine")
	defer func() {
		logrus.Debugf("docker-daemon: sending done, status %v", err)
		statusChannel <- err
	}()
	defer func() {
		if err == nil {
			reader.Close()
		} else {
			reader.CloseWithError(err)
		}
	}()

	resp, err := c.ImageLoad(ctx, reader, true)
	if err != nil {
		err = fmt.Errorf("Error saving image to docker engine: %v", err)
		return
	}
	defer resp.Body.Close()
}

// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() {
	if !d.committed {
		logrus.Debugf("docker-daemon: Closing tar stream to abort loading")
		// In principle, goroutineCancel() should abort the HTTP request and stop the process from continuing.
		// In practice, though, https://github.com/docker/engine-api/blob/master/client/transport/cancellable/cancellable.go
		// currently just runs the HTTP request to completion in a goroutine, and returns early if the context is canceled
		// without terminating the HTTP request at all. So we need this CloseWithError to terminate sending the HTTP request Body
		// immediately, and hopefully, through terminating the sending, which uses "Transfer-Encoding: chunked", without sending
		// the terminating zero-length chunk, prevent the docker daemon from processing the tar stream at all.
		// Whether that works or not, closing the PipeWriter seems desirable in any case.
		d.writer.CloseWithError(errors.New("Aborting upload, daemonImageDestination closed without a previous .Commit()"))
	}
	d.goroutineCancel()
}

// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *daemonImageDestination) Reference() types.ImageReference {
	return d.ref
}

// SupportedManifestMIMETypes tells which manifest MIME types the destination supports.
// If an empty slice or nil is returned, then any MIME type can be tried to upload.
func (d *daemonImageDestination) SupportedManifestMIMETypes() []string {
	return []string{
		manifest.DockerV2Schema2MediaType, // We rely on the types.Image.UpdatedImage schema conversion capabilities.
	}
}

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *daemonImageDestination) SupportsSignatures() error {
	return fmt.Errorf("Storing signatures for docker-daemon: destinations is not supported")
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *daemonImageDestination) ShouldCompressLayers() bool {
	return false
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *daemonImageDestination) AcceptsForeignLayerURLs() bool {
	return false
}

// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *daemonImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
	if ok, size, err := d.HasBlob(inputInfo); err == nil && ok {
		return types.BlobInfo{Digest: inputInfo.Digest, Size: size}, nil
	}

	if inputInfo.Size == -1 { // Ouch, we need to stream the blob into a temporary file just to determine the size.
		logrus.Debugf("docker-daemon: input with unknown size, streaming to disk first…")
		streamCopy, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-blob")
		if err != nil {
			return types.BlobInfo{}, err
		}
		defer os.Remove(streamCopy.Name())
		defer streamCopy.Close()

		size, err := io.Copy(streamCopy, stream)
		if err != nil {
			return types.BlobInfo{}, err
		}
		_, err = streamCopy.Seek(0, os.SEEK_SET)
		if err != nil {
			return types.BlobInfo{}, err
		}
		inputInfo.Size = size // inputInfo is a struct, so we are only modifying our copy.
		stream = streamCopy
		logrus.Debugf("… streaming done")
	}

	digester := digest.Canonical.New()
	tee := io.TeeReader(stream, digester.Hash())
	if err := d.sendFile(inputInfo.Digest.String(), inputInfo.Size, tee); err != nil {
		return types.BlobInfo{}, err
	}
	d.blobs[inputInfo.Digest] = types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}
	return types.BlobInfo{Digest: digester.Digest(), Size: inputInfo.Size}, nil
}

func (d *daemonImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
	if info.Digest == "" {
		return false, -1, fmt.Errorf("Can not check for a blob with unknown digest")
	}
	if blob, ok := d.blobs[info.Digest]; ok {
		return true, blob.Size, nil
	}
	return false, -1, types.ErrBlobNotFound
}

func (d *daemonImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
	return info, nil
}

func (d *daemonImageDestination) PutManifest(m []byte) error {
	var man schema2Manifest
	if err := json.Unmarshal(m, &man); err != nil {
		return fmt.Errorf("Error parsing manifest: %v", err)
	}
	if man.SchemaVersion != 2 || man.MediaType != manifest.DockerV2Schema2MediaType {
		return fmt.Errorf("Unsupported manifest type, need a Docker schema 2 manifest")
	}

	layerPaths := []string{}
	for _, l := range man.Layers {
		layerPaths = append(layerPaths, l.Digest.String())
	}

	// For github.com/docker/docker consumers, this works just as well as
	//   refString := d.namedTaggedRef.String() [i.e. d.ref.ref.String()]
	// because when reading the RepoTags strings, github.com/docker/docker/reference
	// normalizes both of them to the same value.
	//
	// Doing it this way to include the normalized-out `docker.io[/library]` does make
	// a difference for github.com/projectatomic/docker consumers, with the
	// “Add --add-registry and --block-registry options to docker daemon” patch.
	// These consumers treat reference strings which include a hostname and reference
	// strings without a hostname differently.
	//
	// Using the host name here is more explicit about the intent, and it has the same
	// effect as (docker pull) in projectatomic/docker, which tags the result using
	// a hostname-qualified reference.
	// See https://github.com/containers/image/issues/72 for a more detailed
	// analysis and explanation.
	refString := fmt.Sprintf("%s:%s", d.namedTaggedRef.FullName(), d.namedTaggedRef.Tag())

	items := []manifestItem{{
		Config:       man.Config.Digest.String(),
		RepoTags:     []string{refString},
		Layers:       layerPaths,
		Parent:       "",
		LayerSources: nil,
	}}
	itemsBytes, err := json.Marshal(&items)
	if err != nil {
		return err
	}

	// FIXME? Do we also need to support the legacy format?
	return d.sendFile(manifestFileName, int64(len(itemsBytes)), bytes.NewReader(itemsBytes))
}

type tarFI struct {
	path string
	size int64
}

func (t *tarFI) Name() string {
	return t.path
}
func (t *tarFI) Size() int64 {
	return t.size
}
func (t *tarFI) Mode() os.FileMode {
	return 0444
}
func (t *tarFI) ModTime() time.Time {
	return time.Unix(0, 0)
}
func (t *tarFI) IsDir() bool {
	return false
}
func (t *tarFI) Sys() interface{} {
	return nil
}

// sendFile sends a file into the tar stream.
func (d *daemonImageDestination) sendFile(path string, expectedSize int64, stream io.Reader) error {
	hdr, err := tar.FileInfoHeader(&tarFI{path: path, size: expectedSize}, "")
	if err != nil {
		return err
	}
	logrus.Debugf("Sending as tar file %s", path)
	if err := d.tar.WriteHeader(hdr); err != nil {
		return err
	}
	size, err := io.Copy(d.tar, stream)
	if err != nil {
		return err
	}
	if size != expectedSize {
		return fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", path, expectedSize, size)
	}
	return nil
}

func (d *daemonImageDestination) PutSignatures(signatures [][]byte) error {
	if len(signatures) != 0 {
		return fmt.Errorf("Storing signatures for docker-daemon: destinations is not supported")
	}
	return nil
}

// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *daemonImageDestination) Commit() error {
	logrus.Debugf("docker-daemon: Closing tar stream")
	if err := d.tar.Close(); err != nil {
		return err
	}
	if err := d.writer.Close(); err != nil {
		return err
	}
	d.committed = true // We may still fail, but we are done sending to imageLoadGoroutine.

	logrus.Debugf("docker-daemon: Waiting for status")
	err := <-d.statusChannel
	return err
}
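
The destination above streams a tar archive to the daemon while it is still being written, using an io.Pipe plus a loader goroutine that reports its result on a buffered channel. A minimal standalone sketch of that same pattern; the names here are illustrative, not part of the vendored API, and consume stands in for client.ImageLoad:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
)

// consume reads the stream to completion and reports the outcome on status.
// The channel is buffered so the goroutine can exit even if the producer
// abandons the upload and never reads the status (compare Commit()/Close()).
func consume(r *io.PipeReader, status chan<- error) {
	_, err := io.Copy(ioutil.Discard, r)
	if err == nil {
		r.Close()
	} else {
		r.CloseWithError(err)
	}
	status <- err
}

func main() {
	reader, writer := io.Pipe()
	status := make(chan error, 1) // buffered: see comment in consume
	go consume(reader, status)

	// The producer writes the archive...
	fmt.Fprintln(writer, "tar bytes would go here")
	writer.Close()                   // ...and "commits" by closing the pipe,
	fmt.Println("status:", <-status) // then waits for the consumer's verdict.
}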
361 vendor/github.com/containers/image/docker/daemon/daemon_src.go generated vendored Normal file
@@ -0,0 +1,361 @@
package daemon

import (
	"archive/tar"
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
	"github.com/docker/engine-api/client"
	"golang.org/x/net/context"
)

const temporaryDirectoryForBigFiles = "/var/tmp" // Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.

type daemonImageSource struct {
	ref         daemonReference
	tarCopyPath string
	// The following data is only available after ensureCachedDataIsPresent() succeeds
	tarManifest       *manifestItem // nil if not available yet.
	configBytes       []byte
	configDigest      digest.Digest
	orderedDiffIDList []diffID
	knownLayers       map[diffID]*layerInfo
	// Other state
	generatedManifest []byte // Private cache for GetManifest(), nil if not set yet.
}

type layerInfo struct {
	path string
	size int64
}

// newImageSource returns a types.ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
//
// It would be great if we were able to stream the input tar as it is being
// sent; but Docker sends the top-level manifest, which determines which paths
// to look for, at the end, so we will need to seek back and re-read, several times.
// (We could, perhaps, expect an exact sequence, assume that the first plaintext file
// is the config, and that the following len(RootFS) files are the layers, but that feels
// way too brittle.)
func newImageSource(ctx *types.SystemContext, ref daemonReference) (types.ImageSource, error) {
	c, err := client.NewClient(client.DefaultDockerHost, "1.22", nil, nil) // FIXME: overridable host
	if err != nil {
		return nil, fmt.Errorf("Error initializing docker engine client: %v", err)
	}
	// Per NewReference(), ref.StringWithinTransport() is either an image ID (config digest), or a !reference.NameOnly() reference.
	// Either way ImageSave should create a tarball with exactly one image.
	inputStream, err := c.ImageSave(context.TODO(), []string{ref.StringWithinTransport()})
	if err != nil {
		return nil, fmt.Errorf("Error loading image from docker engine: %v", err)
	}
	defer inputStream.Close()

	// FIXME: use SystemContext here.
	tarCopyFile, err := ioutil.TempFile(temporaryDirectoryForBigFiles, "docker-daemon-tar")
	if err != nil {
		return nil, err
	}
	defer tarCopyFile.Close()

	succeeded := false
	defer func() {
		if !succeeded {
			os.Remove(tarCopyFile.Name())
		}
	}()

	if _, err := io.Copy(tarCopyFile, inputStream); err != nil {
		return nil, err
	}

	succeeded = true
	return &daemonImageSource{
		ref:         ref,
		tarCopyPath: tarCopyFile.Name(),
	}, nil
}

// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *daemonImageSource) Reference() types.ImageReference {
	return s.ref
}

// Close removes resources associated with an initialized ImageSource, if any.
func (s *daemonImageSource) Close() {
	_ = os.Remove(s.tarCopyPath)
}

// tarReadCloser is a way to close the backing file of a tar.Reader when the user no longer needs the tar component.
type tarReadCloser struct {
	*tar.Reader
	backingFile *os.File
}

func (t *tarReadCloser) Close() error {
	return t.backingFile.Close()
}

// openTarComponent returns a ReadCloser for the specific file within the archive.
// This is a linear scan; we assume that the tar file will have a fairly small number of files (~layers),
// and that filesystem caching will make the repeated seeking over the (uncompressed) tarCopyPath cheap enough.
// The caller should call .Close() on the returned stream.
func (s *daemonImageSource) openTarComponent(componentPath string) (io.ReadCloser, error) {
	f, err := os.Open(s.tarCopyPath)
	if err != nil {
		return nil, err
	}
	succeeded := false
	defer func() {
		if !succeeded {
			f.Close()
		}
	}()

	tarReader, header, err := findTarComponent(f, componentPath)
	if err != nil {
		return nil, err
	}
	if header == nil {
		return nil, os.ErrNotExist
	}
	if header.FileInfo().Mode()&os.ModeType == os.ModeSymlink { // FIXME: untested
		// We follow only one symlink; so no loops are possible.
		if _, err := f.Seek(0, os.SEEK_SET); err != nil {
			return nil, err
		}
		// The new path could easily point "outside" the archive, but we only compare it to existing tar headers without extracting the archive,
		// so we don't care.
		tarReader, header, err = findTarComponent(f, path.Join(path.Dir(componentPath), header.Linkname))
		if err != nil {
			return nil, err
		}
		if header == nil {
			return nil, os.ErrNotExist
		}
	}

	if !header.FileInfo().Mode().IsRegular() {
		return nil, fmt.Errorf("Error reading tar archive component %s: not a regular file", header.Name)
	}
	succeeded = true
	return &tarReadCloser{Reader: tarReader, backingFile: f}, nil
}

// findTarComponent returns a header and a reader matching path within inputFile,
// or (nil, nil, nil) if not found.
func findTarComponent(inputFile io.Reader, path string) (*tar.Reader, *tar.Header, error) {
	t := tar.NewReader(inputFile)
	for {
		h, err := t.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, nil, err
		}
		if h.Name == path {
			return t, h, nil
		}
	}
	return nil, nil, nil
}

// readTarComponent returns the full contents of componentPath.
func (s *daemonImageSource) readTarComponent(path string) ([]byte, error) {
	file, err := s.openTarComponent(path)
	if err != nil {
		return nil, fmt.Errorf("Error loading tar component %s: %v", path, err)
	}
	defer file.Close()
	bytes, err := ioutil.ReadAll(file)
	if err != nil {
		return nil, err
	}
	return bytes, nil
}

// ensureCachedDataIsPresent loads data necessary for any of the public accessors.
func (s *daemonImageSource) ensureCachedDataIsPresent() error {
	if s.tarManifest != nil {
		return nil
	}

	// Read and parse manifest.json
	tarManifest, err := s.loadTarManifest()
	if err != nil {
		return err
	}

	// Read and parse config.
	configBytes, err := s.readTarComponent(tarManifest.Config)
	if err != nil {
		return err
	}
	var parsedConfig dockerImage // Most fields omitted, we only care about layer DiffIDs.
	if err := json.Unmarshal(configBytes, &parsedConfig); err != nil {
		return fmt.Errorf("Error decoding tar config %s: %v", tarManifest.Config, err)
	}

	knownLayers, err := s.prepareLayerData(tarManifest, &parsedConfig)
	if err != nil {
		return err
	}

	// Success; commit.
	s.tarManifest = tarManifest
	s.configBytes = configBytes
	s.configDigest = digest.FromBytes(configBytes)
	s.orderedDiffIDList = parsedConfig.RootFS.DiffIDs
	s.knownLayers = knownLayers
	return nil
}

// loadTarManifest loads and decodes the manifest.json.
func (s *daemonImageSource) loadTarManifest() (*manifestItem, error) {
	// FIXME? Do we need to deal with the legacy format?
	bytes, err := s.readTarComponent(manifestFileName)
	if err != nil {
		return nil, err
	}
	var items []manifestItem
	if err := json.Unmarshal(bytes, &items); err != nil {
		return nil, fmt.Errorf("Error decoding tar manifest.json: %v", err)
	}
	if len(items) != 1 {
		return nil, fmt.Errorf("Unexpected tar manifest.json: expected 1 item, got %d", len(items))
	}
	return &items[0], nil
}

func (s *daemonImageSource) prepareLayerData(tarManifest *manifestItem, parsedConfig *dockerImage) (map[diffID]*layerInfo, error) {
	// Collect layer data available in manifest and config.
	if len(tarManifest.Layers) != len(parsedConfig.RootFS.DiffIDs) {
		return nil, fmt.Errorf("Inconsistent layer count: %d in manifest, %d in config", len(tarManifest.Layers), len(parsedConfig.RootFS.DiffIDs))
	}
	knownLayers := map[diffID]*layerInfo{}
	unknownLayerSizes := map[string]*layerInfo{} // Points into knownLayers, a "to do list" of items with unknown sizes.
	for i, diffID := range parsedConfig.RootFS.DiffIDs {
		if _, ok := knownLayers[diffID]; ok {
			// Apparently it really can happen that a single image contains the same layer diff more than once.
			// In that case, the diffID validation ensures that both layers truly are the same, and it should not matter
			// which of the tarManifest.Layers paths is used; (docker save) actually makes the duplicates symlinks to the original.
			continue
		}
		layerPath := tarManifest.Layers[i]
		if _, ok := unknownLayerSizes[layerPath]; ok {
			return nil, fmt.Errorf("Layer tarfile %s used for two different DiffID values", layerPath)
		}
		li := &layerInfo{ // A new element in each iteration
			path: layerPath,
			size: -1,
		}
		knownLayers[diffID] = li
		unknownLayerSizes[layerPath] = li
	}

	// Scan the tar file to collect layer sizes.
	file, err := os.Open(s.tarCopyPath)
	if err != nil {
		return nil, err
	}
	defer file.Close()
	t := tar.NewReader(file)
	for {
		h, err := t.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if li, ok := unknownLayerSizes[h.Name]; ok {
			li.size = h.Size
			delete(unknownLayerSizes, h.Name)
		}
	}
	if len(unknownLayerSizes) != 0 {
		return nil, fmt.Errorf("Some layer tarfiles are missing in the tarball") // This could do with better error reporting, if it ever happened in practice.
	}

	return knownLayers, nil
}

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *daemonImageSource) GetManifest() ([]byte, string, error) {
	if s.generatedManifest == nil {
		if err := s.ensureCachedDataIsPresent(); err != nil {
			return nil, "", err
		}
		m := schema2Manifest{
			SchemaVersion: 2,
			MediaType:     manifest.DockerV2Schema2MediaType,
			Config: distributionDescriptor{
				MediaType: manifest.DockerV2Schema2ConfigMediaType,
				Size:      int64(len(s.configBytes)),
				Digest:    s.configDigest,
			},
			Layers: []distributionDescriptor{},
		}
		for _, diffID := range s.orderedDiffIDList {
			li, ok := s.knownLayers[diffID]
			if !ok {
				return nil, "", fmt.Errorf("Internal inconsistency: Information about layer %s missing", diffID)
			}
			m.Layers = append(m.Layers, distributionDescriptor{
				Digest:    digest.Digest(diffID), // diffID is a digest of the uncompressed tarball
				MediaType: manifest.DockerV2Schema2LayerMediaType,
				Size:      li.size,
			})
		}
		manifestBytes, err := json.Marshal(&m)
		if err != nil {
			return nil, "", err
		}
		s.generatedManifest = manifestBytes
	}
	return s.generatedManifest, manifest.DockerV2Schema2MediaType, nil
}

// GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
// out of a manifest list.
func (s *daemonImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
	// How did we even get here? GetManifest() above has returned a manifest.DockerV2Schema2MediaType.
	return nil, "", fmt.Errorf(`Manifest lists are not supported by "docker-daemon:"`)
}

// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
func (s *daemonImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
	if err := s.ensureCachedDataIsPresent(); err != nil {
		return nil, 0, err
	}

	if info.Digest == s.configDigest { // FIXME? Implement a more general algorithm matching instead of assuming sha256.
		return ioutil.NopCloser(bytes.NewReader(s.configBytes)), int64(len(s.configBytes)), nil
	}

	if li, ok := s.knownLayers[diffID(info.Digest)]; ok { // diffID is a digest of the uncompressed tarball
		stream, err := s.openTarComponent(li.path)
		if err != nil {
			return nil, 0, err
		}
		return stream, li.size, nil
	}

	return nil, 0, fmt.Errorf("Unknown blob %s", info.Digest)
}

// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
func (s *daemonImageSource) GetSignatures() ([][]byte, error) {
	return [][]byte{}, nil
}
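
The linear scan that findTarComponent/readTarComponent perform over the (docker save) tarball is easy to reproduce standalone. A minimal sketch of the same pattern against an in-memory archive; all names and contents here are illustrative:

package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
)

// findInTar scans the archive linearly and returns the contents of name,
// like findTarComponent/readTarComponent above, but over a plain io.Reader.
func findInTar(r io.Reader, name string) ([]byte, error) {
	t := tar.NewReader(r)
	for {
		h, err := t.Next()
		if err == io.EOF {
			return nil, fmt.Errorf("%s not found in archive", name)
		}
		if err != nil {
			return nil, err
		}
		if h.Name == name {
			return ioutil.ReadAll(t) // the tar.Reader is positioned at this entry's data
		}
	}
}

func main() {
	// Build a tiny archive with a manifest.json entry, as (docker save) would.
	var buf bytes.Buffer
	w := tar.NewWriter(&buf)
	body := []byte(`[{"Config":"abc.json","RepoTags":["example:latest"],"Layers":[]}]`)
	w.WriteHeader(&tar.Header{Name: "manifest.json", Mode: 0444, Size: int64(len(body))})
	w.Write(body)
	w.Close()

	data, err := findInTar(&buf, "manifest.json")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", data)
}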
179 vendor/github.com/containers/image/docker/daemon/daemon_transport.go generated vendored Normal file
@@ -0,0 +1,179 @@
package daemon

import (
	"errors"
	"fmt"

	"github.com/containers/image/docker/reference"
	"github.com/containers/image/image"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
)

// Transport is an ImageTransport for images managed by a local Docker daemon.
var Transport = daemonTransport{}

type daemonTransport struct{}

// Name returns the name of the transport, which must be unique among other transports.
func (t daemonTransport) Name() string {
	return "docker-daemon"
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t daemonTransport) ParseReference(reference string) (types.ImageReference, error) {
	return ParseReference(reference)
}

// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes key
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t daemonTransport) ValidatePolicyConfigurationScope(scope string) error {
	// See the explanation in daemonReference.PolicyConfigurationIdentity.
	return errors.New(`docker-daemon: does not support any scopes except the default "" one`)
}

// daemonReference is an ImageReference for images managed by a local Docker daemon.
// Exactly one of id and ref can be set.
// For daemonImageSource, both id and ref are acceptable, ref must not be a NameOnly (interpreted as all tags in that repository by the daemon).
// For daemonImageDestination, it must be a ref, which is NamedTagged.
// (We could, in principle, also allow storing images without tagging them, and the user would have to refer to them using the docker image ID = config digest.
// Using the config digest requires the caller to parse the manifest themselves, which is very cumbersome; so, for now, we don’t bother.)
type daemonReference struct {
	id  digest.Digest
	ref reference.Named // !reference.IsNameOnly
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
	// This is intended to be compatible with reference.ParseIDOrReference, but more strict about refusing some of the ambiguous cases.
	// In particular, this rejects unprefixed digest values (64 hex chars), and sha256 digest prefixes (sha256:fewer-than-64-hex-chars).

	// digest:hexstring is structurally the same as a reponame:tag (meaning docker.io/library/reponame:tag).
	// reference.ParseIDOrReference interprets such strings as digests.
	if dgst, err := digest.ParseDigest(refString); err == nil {
		// The daemon explicitly refuses to tag images with a reponame equal to digest.Canonical - but _only_ this digest name.
		// Other digest references are ambiguous, so refuse them.
		if dgst.Algorithm() != digest.Canonical {
			return nil, fmt.Errorf("Invalid docker-daemon: reference %s: only digest algorithm %s accepted", refString, digest.Canonical)
		}
		return NewReference(dgst, nil)
	}

	ref, err := reference.ParseNamed(refString) // This also rejects unprefixed digest values
	if err != nil {
		return nil, err
	}
	if ref.Name() == digest.Canonical.String() {
		return nil, fmt.Errorf("Invalid docker-daemon: reference %s: The %s repository name is reserved for (non-shortened) digest references", refString, digest.Canonical)
	}
	return NewReference("", ref)
}

// NewReference returns a docker-daemon reference for either the supplied image ID (config digest) or the supplied reference (which must satisfy !reference.IsNameOnly)
func NewReference(id digest.Digest, ref reference.Named) (types.ImageReference, error) {
	if id != "" && ref != nil {
		return nil, errors.New("docker-daemon: reference must not have an image ID and a reference string specified at the same time")
	}
	if ref != nil {
		if reference.IsNameOnly(ref) {
			return nil, fmt.Errorf("docker-daemon: reference %s has neither a tag nor a digest", ref.String())
		}
		// A github.com/distribution/reference value can have a tag and a digest at the same time!
		// docker/reference does not handle that, so fail.
		_, isTagged := ref.(reference.NamedTagged)
		_, isDigested := ref.(reference.Canonical)
		if isTagged && isDigested {
			return nil, fmt.Errorf("docker-daemon: references with both a tag and digest are currently not supported")
		}
	}
	return daemonReference{
		id:  id,
		ref: ref,
	}, nil
}

func (ref daemonReference) Transport() types.ImageTransport {
	return Transport
}

// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix;
// instead, see transports.ImageName().
func (ref daemonReference) StringWithinTransport() string {
	switch {
	case ref.id != "":
		return ref.id.String()
	case ref.ref != nil:
		return ref.ref.String()
	default: // Coverage: Should never happen, NewReference above should refuse such values.
		panic("Internal inconsistency: daemonReference has empty id and nil ref")
	}
}

// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref daemonReference) DockerReference() reference.Named {
	return ref.ref // May be nil
}

// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref daemonReference) PolicyConfigurationIdentity() string {
	// We must allow referring to images in the daemon by image ID, otherwise untagged images would not be accessible.
	// But the existence of image IDs means that we can’t truly namespace the input; the untagged images would have to fall into the default policy,
	// which can be unexpected. So, punt.
	return "" // This still allows using the default "" scope to define a policy for this transport.
}

// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref daemonReference) PolicyConfigurationNamespaces() []string {
	// See the explanation in daemonReference.PolicyConfigurationIdentity.
	return []string{}
}

// NewImage returns a types.Image for this reference.
// The caller must call .Close() on the returned Image.
func (ref daemonReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
	src, err := newImageSource(ctx, ref)
	if err != nil {
		return nil, err
	}
	return image.FromSource(src)
}

// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref daemonReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
	return newImageSource(ctx, ref)
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref daemonReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ctx, ref)
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref daemonReference) DeleteImage(ctx *types.SystemContext) error {
	// Should this just untag the image? Should this stop running containers?
	// The semantics are not quite as clear as for remote repositories.
	// The user can run (docker rmi) directly anyway, so, for now(?), punt instead of trying to guess what the user meant.
	return fmt.Errorf("Deleting images not implemented for docker-daemon: images")
}
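
The parsing rules above are easiest to check against concrete inputs. A minimal sketch, assuming the vendored daemon package above is importable; the inputs are illustrative, and the exact set of accepted name forms depends on the vendored reference package:

package main

import (
	"fmt"

	"github.com/containers/image/docker/daemon"
)

func main() {
	for _, s := range []string{
		"busybox:latest", // expected to be accepted: a name:tag reference
		"sha256:0000000000000000000000000000000000000000000000000000000000000000", // expected to be accepted: a full canonical image ID
		"busybox", // expected to be rejected: name-only, neither a tag nor a digest
	} {
		_, err := daemon.Transport.ParseReference(s)
		fmt.Printf("%q -> err=%v\n", s, err)
	}
}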
53 vendor/github.com/containers/image/docker/daemon/daemon_types.go generated vendored Normal file
@@ -0,0 +1,53 @@
package daemon

import "github.com/docker/distribution/digest"

// Various data structures.

// Based on github.com/docker/docker/image/tarexport/tarexport.go
const (
	manifestFileName = "manifest.json"
	// legacyLayerFileName = "layer.tar"
	// legacyConfigFileName = "json"
	// legacyVersionFileName = "VERSION"
	// legacyRepositoriesFileName = "repositories"
)

type manifestItem struct {
	Config       string
	RepoTags     []string
	Layers       []string
	Parent       imageID                           `json:",omitempty"`
	LayerSources map[diffID]distributionDescriptor `json:",omitempty"`
}

type imageID string
type diffID digest.Digest

// Based on github.com/docker/distribution/blobs.go
type distributionDescriptor struct {
	MediaType string        `json:"mediaType,omitempty"`
	Size      int64         `json:"size,omitempty"`
	Digest    digest.Digest `json:"digest,omitempty"`
	URLs      []string      `json:"urls,omitempty"`
}

// Based on github.com/docker/distribution/manifest/schema2/manifest.go
// FIXME: We are repeating this all over the place; make a public copy?
type schema2Manifest struct {
	SchemaVersion int                      `json:"schemaVersion"`
	MediaType     string                   `json:"mediaType,omitempty"`
	Config        distributionDescriptor   `json:"config"`
	Layers        []distributionDescriptor `json:"layers"`
}

// Based on github.com/docker/docker/image/image.go
// MOST CONTENT OMITTED AS UNNECESSARY
type dockerImage struct {
	RootFS *rootFS `json:"rootfs,omitempty"`
}

type rootFS struct {
	Type    string   `json:"type"`
	DiffIDs []diffID `json:"diff_ids,omitempty"`
}
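
These types mirror the JSON that (docker save)/(docker load) exchange. A quick way to see the manifest.json shape is to marshal a local copy of the item struct; the struct below is re-declared for illustration only, since the vendored one is unexported, and all values are hypothetical:

package main

import (
	"encoding/json"
	"fmt"
)

// Local re-declaration of the vendored manifestItem shape, for illustration only.
type manifestItem struct {
	Config   string
	RepoTags []string
	Layers   []string
	Parent   string `json:",omitempty"`
}

func main() {
	// (docker save) writes manifest.json as a JSON array of such items;
	// the daemon destination above always writes exactly one.
	items := []manifestItem{{
		Config:   "sha256-abc.json", // hypothetical config file name
		RepoTags: []string{"docker.io/library/example:latest"},
		Layers:   []string{"0123.tar"}, // hypothetical layer path
	}}
	b, _ := json.MarshalIndent(items, "", "  ")
	fmt.Println(string(b))
}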
510 vendor/github.com/containers/image/docker/docker_client.go generated vendored Normal file
@@ -0,0 +1,510 @@
package docker

import (
	"crypto/tls"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/types"
	"github.com/containers/storage/pkg/homedir"
	"github.com/docker/go-connections/sockets"
	"github.com/docker/go-connections/tlsconfig"
)

const (
	dockerHostname     = "docker.io"
	dockerRegistry     = "registry-1.docker.io"
	dockerAuthRegistry = "https://index.docker.io/v1/"

	dockerCfg         = ".docker"
	dockerCfgFileName = "config.json"
	dockerCfgObsolete = ".dockercfg"

	baseURL       = "%s://%s/v2/"
	baseURLV1     = "%s://%s/v1/_ping"
	tagsURL       = "%s/tags/list"
	manifestURL   = "%s/manifests/%s"
	blobsURL      = "%s/blobs/%s"
	blobUploadURL = "%s/blobs/uploads/"
)

// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
var ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")

// dockerClient is configuration for dealing with a single Docker registry.
type dockerClient struct {
	ctx             *types.SystemContext
	registry        string
	username        string
	password        string
	wwwAuthenticate string // Cache of a value set by ping() if scheme is not empty
	scheme          string // Cache of a value returned by a successful ping() if not empty
	client          *http.Client
	signatureBase   signatureStorageBase
}

// this is cloned from docker/go-connections because upstream docker has changed
// it, and `make deps` here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
func serverDefault() *tls.Config {
	return &tls.Config{
		// Avoid fallback to SSL protocols < TLS1.0
		MinVersion:               tls.VersionTLS10,
		PreferServerCipherSuites: true,
		CipherSuites:             tlsconfig.DefaultServerAcceptedCiphers,
	}
}

func newTransport() *http.Transport {
	direct := &net.Dialer{
		Timeout:   30 * time.Second,
		KeepAlive: 30 * time.Second,
		DualStack: true,
	}
	tr := &http.Transport{
		Proxy:               http.ProxyFromEnvironment,
		Dial:                direct.Dial,
		TLSHandshakeTimeout: 10 * time.Second,
		// TODO(dmcgowan): Call close idle connections when complete and use keep alive
		DisableKeepAlives: true,
	}
	proxyDialer, err := sockets.DialerFromEnvironment(direct)
	if err == nil {
		tr.Dial = proxyDialer.Dial
	}
	return tr
}

func setupCertificates(dir string, tlsc *tls.Config) error {
	if dir == "" {
		return nil
	}
	fs, err := ioutil.ReadDir(dir)
	if err != nil && !os.IsNotExist(err) {
		return err
	}

	for _, f := range fs {
		fullPath := filepath.Join(dir, f.Name())
		if strings.HasSuffix(f.Name(), ".crt") {
			systemPool, err := tlsconfig.SystemCertPool()
			if err != nil {
				return fmt.Errorf("unable to get system cert pool: %v", err)
			}
			tlsc.RootCAs = systemPool
			logrus.Debugf("crt: %s", fullPath)
			data, err := ioutil.ReadFile(fullPath)
			if err != nil {
				return err
			}
			tlsc.RootCAs.AppendCertsFromPEM(data)
		}
		if strings.HasSuffix(f.Name(), ".cert") {
			certName := f.Name()
			keyName := certName[:len(certName)-5] + ".key"
			logrus.Debugf("cert: %s", fullPath)
			if !hasFile(fs, keyName) {
				return fmt.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
			}
			cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
			if err != nil {
				return err
			}
			tlsc.Certificates = append(tlsc.Certificates, cert)
		}
		if strings.HasSuffix(f.Name(), ".key") {
			keyName := f.Name()
			certName := keyName[:len(keyName)-4] + ".cert"
			logrus.Debugf("key: %s", fullPath)
			if !hasFile(fs, certName) {
				return fmt.Errorf("missing client certificate %s for key %s", certName, keyName)
			}
		}
	}
	return nil
}

func hasFile(files []os.FileInfo, name string) bool {
	for _, f := range files {
		if f.Name() == name {
			return true
		}
	}
	return false
}

// newDockerClient returns a new dockerClient instance for refHostname (a host as specified in the Docker image reference, not canonicalized to dockerRegistry)
// “write” specifies whether the client will be used for "write" access (in particular passed to lookaside.go:toplevelFromSection)
func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool) (*dockerClient, error) {
	registry := ref.ref.Hostname()
	if registry == dockerHostname {
		registry = dockerRegistry
	}
	username, password, err := getAuth(ctx, ref.ref.Hostname())
	if err != nil {
		return nil, err
	}
	tr := newTransport()
	if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
		tlsc := &tls.Config{}

		if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
			return nil, err
		}

		tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
		tr.TLSClientConfig = tlsc
	}
	if tr.TLSClientConfig == nil {
		tr.TLSClientConfig = serverDefault()
	}
	client := &http.Client{Transport: tr}

	sigBase, err := configuredSignatureStorageBase(ctx, ref, write)
	if err != nil {
		return nil, err
	}

	return &dockerClient{
		ctx:           ctx,
		registry:      registry,
		username:      username,
		password:      password,
		client:        client,
		signatureBase: sigBase,
	}, nil
}

// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// url is NOT an absolute URL, but a path relative to the /v2/ top-level API path. The host name and scheme are taken from the client or autodetected.
func (c *dockerClient) makeRequest(method, url string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
	if c.scheme == "" {
		pr, err := c.ping()
		if err != nil {
			return nil, err
		}
		c.wwwAuthenticate = pr.WWWAuthenticate
		c.scheme = pr.scheme
	}

	url = fmt.Sprintf(baseURL, c.scheme, c.registry) + url
	return c.makeRequestToResolvedURL(method, url, headers, stream, -1, true)
}

// makeRequestToResolvedURL creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// streamLen, if not -1, specifies the length of the data expected on stream.
// makeRequest should generally be preferred.
// TODO(runcom): too many arguments here, use a struct
func (c *dockerClient) makeRequestToResolvedURL(method, url string, headers map[string][]string, stream io.Reader, streamLen int64, sendAuth bool) (*http.Response, error) {
	req, err := http.NewRequest(method, url, stream)
	if err != nil {
		return nil, err
	}
	if streamLen != -1 { // Do not blindly overwrite if streamLen == -1, http.NewRequest above can figure out the length of bytes.Reader and similar objects without us having to compute it.
		req.ContentLength = streamLen
	}
	req.Header.Set("Docker-Distribution-API-Version", "registry/2.0")
	for n, h := range headers {
		for _, hh := range h {
			req.Header.Add(n, hh)
		}
	}
	if c.ctx != nil && c.ctx.DockerRegistryUserAgent != "" {
		req.Header.Add("User-Agent", c.ctx.DockerRegistryUserAgent)
	}
	if c.wwwAuthenticate != "" && sendAuth {
		if err := c.setupRequestAuth(req); err != nil {
			return nil, err
		}
	}
	logrus.Debugf("%s %s", method, url)
	res, err := c.client.Do(req)
	if err != nil {
		return nil, err
	}
	return res, nil
}

func (c *dockerClient) setupRequestAuth(req *http.Request) error {
	tokens := strings.SplitN(strings.TrimSpace(c.wwwAuthenticate), " ", 2)
	if len(tokens) != 2 {
		return fmt.Errorf("expected 2 tokens in WWW-Authenticate: %d, %s", len(tokens), c.wwwAuthenticate)
	}
	switch tokens[0] {
	case "Basic":
		req.SetBasicAuth(c.username, c.password)
		return nil
	case "Bearer":
		// FIXME? This gets a new token for every API request;
		// we may be easily able to reuse a previous token, e.g.
		// for OpenShift the token only identifies the user and does not vary
		// across operations. Should we just try the request first, and
		// only get a new token on failure?
		// OTOH what to do with the single-use body stream in that case?

		// Try performing the request, expecting it to fail.
		testReq := *req
		// Do not use the body stream, or we couldn't reuse it for the "real" call later.
		testReq.Body = nil
		testReq.ContentLength = 0
		res, err := c.client.Do(&testReq)
		if err != nil {
			return err
		}
		chs := parseAuthHeader(res.Header)
		if res.StatusCode != http.StatusUnauthorized || len(chs) == 0 {
			// no need for bearer? wtf?
			return nil
		}
		// Arbitrarily use the first challenge, there is no reason to expect more than one.
		challenge := chs[0]
		if challenge.Scheme != "bearer" { // Another artifact of trying to handle WWW-Authenticate before it actually happens.
			return fmt.Errorf("Unimplemented: WWW-Authenticate Bearer replaced by %#v", challenge.Scheme)
		}
		realm, ok := challenge.Parameters["realm"]
		if !ok {
			return fmt.Errorf("missing realm in bearer auth challenge")
		}
		service, _ := challenge.Parameters["service"] // Will be "" if not present
		scope, _ := challenge.Parameters["scope"]     // Will be "" if not present
		token, err := c.getBearerToken(realm, service, scope)
		if err != nil {
			return err
		}
		req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
		return nil
	}
	return fmt.Errorf("no handler for %s authentication", tokens[0])
	// support docker bearer with authconfig's Auth string? see docker2aci
}

func (c *dockerClient) getBearerToken(realm, service, scope string) (string, error) {
	authReq, err := http.NewRequest("GET", realm, nil)
	if err != nil {
		return "", err
	}
	getParams := authReq.URL.Query()
	if service != "" {
		getParams.Add("service", service)
	}
	if scope != "" {
		getParams.Add("scope", scope)
	}
	authReq.URL.RawQuery = getParams.Encode()
	if c.username != "" && c.password != "" {
		authReq.SetBasicAuth(c.username, c.password)
	}
	tr := newTransport()
	// TODO(runcom): insecure for now to contact the external token service
	tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
	client := &http.Client{Transport: tr}
	res, err := client.Do(authReq)
	if err != nil {
		return "", err
	}
	defer res.Body.Close()
	switch res.StatusCode {
	case http.StatusUnauthorized:
		return "", fmt.Errorf("unable to retrieve auth token: 401 unauthorized")
	case http.StatusOK:
		break
	default:
		return "", fmt.Errorf("unexpected http code: %d, URL: %s", res.StatusCode, authReq.URL)
	}
	tokenBlob, err := ioutil.ReadAll(res.Body)
	if err != nil {
		return "", err
	}
	tokenStruct := struct {
		Token string `json:"token"`
	}{}
	if err := json.Unmarshal(tokenBlob, &tokenStruct); err != nil {
		return "", err
	}
	// TODO(runcom): reuse tokens?
	//hostAuthTokens, ok = rb.hostsV2AuthTokens[req.URL.Host]
	//if !ok {
	//hostAuthTokens = make(map[string]string)
	//rb.hostsV2AuthTokens[req.URL.Host] = hostAuthTokens
	//}
	//hostAuthTokens[repo] = tokenStruct.Token
	return tokenStruct.Token, nil
}

func getAuth(ctx *types.SystemContext, registry string) (string, string, error) {
	if ctx != nil && ctx.DockerAuthConfig != nil {
		return ctx.DockerAuthConfig.Username, ctx.DockerAuthConfig.Password, nil
	}
	var dockerAuth dockerConfigFile
	dockerCfgPath := filepath.Join(getDefaultConfigDir(".docker"), dockerCfgFileName)
	if _, err := os.Stat(dockerCfgPath); err == nil {
		j, err := ioutil.ReadFile(dockerCfgPath)
		if err != nil {
			return "", "", err
		}
		if err := json.Unmarshal(j, &dockerAuth); err != nil {
			return "", "", err
		}

	} else if os.IsNotExist(err) {
		// try old config path
		oldDockerCfgPath := filepath.Join(getDefaultConfigDir(dockerCfgObsolete))
		if _, err := os.Stat(oldDockerCfgPath); err != nil {
			if os.IsNotExist(err) {
				return "", "", nil
			}
			return "", "", fmt.Errorf("%s - %v", oldDockerCfgPath, err)
		}

		j, err := ioutil.ReadFile(oldDockerCfgPath)
		if err != nil {
			return "", "", err
		}
		if err := json.Unmarshal(j, &dockerAuth.AuthConfigs); err != nil {
			return "", "", err
|
||||
}
|
||||
|
||||
} else if err != nil {
|
||||
return "", "", fmt.Errorf("%s - %v", dockerCfgPath, err)
|
||||
}
|
||||
|
||||
// I'm feeling lucky
|
||||
if c, exists := dockerAuth.AuthConfigs[registry]; exists {
|
||||
return decodeDockerAuth(c.Auth)
|
||||
}
|
||||
|
||||
// bad luck; let's normalize the entries first
|
||||
registry = normalizeRegistry(registry)
|
||||
normalizedAuths := map[string]dockerAuthConfig{}
|
||||
for k, v := range dockerAuth.AuthConfigs {
|
||||
normalizedAuths[normalizeRegistry(k)] = v
|
||||
}
|
||||
if c, exists := normalizedAuths[registry]; exists {
|
||||
return decodeDockerAuth(c.Auth)
|
||||
}
|
||||
return "", "", nil
|
||||
}
|
||||
|
||||
type pingResponse struct {
|
||||
WWWAuthenticate string
|
||||
APIVersion string
|
||||
scheme string
|
||||
}
|
||||
|
||||
func (c *dockerClient) ping() (*pingResponse, error) {
|
||||
ping := func(scheme string) (*pingResponse, error) {
|
||||
url := fmt.Sprintf(baseURL, scheme, c.registry)
|
||||
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
|
||||
logrus.Debugf("Ping %s err %#v", url, err)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v2/", resp.StatusCode)
|
||||
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
|
||||
return nil, fmt.Errorf("error pinging repository, response code %d", resp.StatusCode)
|
||||
}
|
||||
pr := &pingResponse{}
|
||||
pr.WWWAuthenticate = resp.Header.Get("WWW-Authenticate")
|
||||
pr.APIVersion = resp.Header.Get("Docker-Distribution-Api-Version")
|
||||
pr.scheme = scheme
|
||||
return pr, nil
|
||||
}
|
||||
pr, err := ping("https")
|
||||
if err != nil && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
|
||||
pr, err = ping("http")
|
||||
}
|
||||
if err != nil {
|
||||
err = fmt.Errorf("pinging docker registry returned %+v", err)
|
||||
if c.ctx != nil && c.ctx.DockerDisableV1Ping {
|
||||
return nil, err
|
||||
}
|
||||
// best effort to understand if we're talking to a V1 registry
|
||||
pingV1 := func(scheme string) bool {
|
||||
url := fmt.Sprintf(baseURLV1, scheme, c.registry)
|
||||
resp, err := c.makeRequestToResolvedURL("GET", url, nil, nil, -1, true)
|
||||
logrus.Debugf("Ping %s err %#v", url, err)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
logrus.Debugf("Ping %s status %d", scheme+"://"+c.registry+"/v1/_ping", resp.StatusCode)
|
||||
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
isV1 := pingV1("https")
|
||||
if !isV1 && c.ctx != nil && c.ctx.DockerInsecureSkipTLSVerify {
|
||||
isV1 = pingV1("http")
|
||||
}
|
||||
if isV1 {
|
||||
err = ErrV1NotSupported
|
||||
}
|
||||
}
|
||||
return pr, err
|
||||
}
|
||||
|
||||
func getDefaultConfigDir(confPath string) string {
|
||||
return filepath.Join(homedir.Get(), confPath)
|
||||
}
|
||||
|
||||
type dockerAuthConfig struct {
|
||||
Auth string `json:"auth,omitempty"`
|
||||
}
|
||||
|
||||
type dockerConfigFile struct {
|
||||
AuthConfigs map[string]dockerAuthConfig `json:"auths"`
|
||||
}
|
||||
|
||||
func decodeDockerAuth(s string) (string, string, error) {
|
||||
decoded, err := base64.StdEncoding.DecodeString(s)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
parts := strings.SplitN(string(decoded), ":", 2)
|
||||
if len(parts) != 2 {
|
||||
// if it's invalid just skip, as docker does
|
||||
return "", "", nil
|
||||
}
|
||||
user := parts[0]
|
||||
password := strings.Trim(parts[1], "\x00")
|
||||
return user, password, nil
|
||||
}
|
||||
|
||||
// convertToHostname converts a registry url which has http|https prepended
|
||||
// to just an hostname.
|
||||
// Copied from github.com/docker/docker/registry/auth.go
|
||||
func convertToHostname(url string) string {
|
||||
stripped := url
|
||||
if strings.HasPrefix(url, "http://") {
|
||||
stripped = strings.TrimPrefix(url, "http://")
|
||||
} else if strings.HasPrefix(url, "https://") {
|
||||
stripped = strings.TrimPrefix(url, "https://")
|
||||
}
|
||||
|
||||
nameParts := strings.SplitN(stripped, "/", 2)
|
||||
|
||||
return nameParts[0]
|
||||
}
|
||||
|
||||
func normalizeRegistry(registry string) string {
|
||||
normalized := convertToHostname(registry)
|
||||
switch normalized {
|
||||
case "registry-1.docker.io", "docker.io":
|
||||
return "index.docker.io"
|
||||
}
|
||||
return normalized
|
||||
}
|
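The credential lookup above depends on collapsing the many ways a registry can be written in ~/.docker/config.json down to one key. As an illustration, here is a minimal standalone sketch (a condensed copy of the convertToHostname/normalizeRegistry logic above, not an import of this package; the registry spellings are made-up examples):

package main

import (
	"fmt"
	"strings"
)

// convertToHostname strips an optional http:// or https:// prefix and any
// path component, keeping only the host (and port, if present).
func convertToHostname(url string) string {
	stripped := strings.TrimPrefix(strings.TrimPrefix(url, "https://"), "http://")
	return strings.SplitN(stripped, "/", 2)[0]
}

// normalizeRegistry maps the Docker Hub aliases to the canonical config key.
func normalizeRegistry(registry string) string {
	normalized := convertToHostname(registry)
	switch normalized {
	case "registry-1.docker.io", "docker.io":
		return "index.docker.io"
	}
	return normalized
}

func main() {
	for _, r := range []string{
		"https://docker.io/v1/",
		"registry-1.docker.io",
		"http://myregistry.example.com:5000/foo",
	} {
		fmt.Printf("%-42s -> %s\n", r, normalizeRegistry(r))
	}
	// https://docker.io/v1/                      -> index.docker.io
	// registry-1.docker.io                       -> index.docker.io
	// http://myregistry.example.com:5000/foo     -> myregistry.example.com:5000
}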
59 vendor/github.com/containers/image/docker/docker_image.go generated vendored Normal file
@@ -0,0 +1,59 @@
package docker

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

// Image is a Docker-specific implementation of types.Image with a few extra methods
// which are specific to Docker.
type Image struct {
	types.Image
	src *dockerImageSource
}

// newImage returns a new Image interface type after setting up
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx *types.SystemContext, ref dockerReference) (types.Image, error) {
	s, err := newImageSource(ctx, ref, nil)
	if err != nil {
		return nil, err
	}
	img, err := image.FromSource(s)
	if err != nil {
		return nil, err
	}
	return &Image{Image: img, src: s}, nil
}

// SourceRefFullName returns a fully expanded name for the repository this image is in.
func (i *Image) SourceRefFullName() string {
	return i.src.ref.ref.FullName()
}

// GetRepositoryTags lists all tags available in the repository. Note that this has no connection with the tag(s) used for this specific image, if any.
func (i *Image) GetRepositoryTags() ([]string, error) {
	url := fmt.Sprintf(tagsURL, i.src.ref.ref.RemoteName())
	res, err := i.src.c.makeRequest("GET", url, nil, nil)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusOK {
		// TODO: include url in the error message
		return nil, fmt.Errorf("Invalid status code returned when fetching tags list %d", res.StatusCode)
	}
	type tagsRes struct {
		Tags []string
	}
	tags := &tagsRes{}
	if err := json.NewDecoder(res.Body).Decode(tags); err != nil {
		return nil, err
	}
	return tags.Tags, nil
}
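GetRepositoryTags above decodes the Docker Registry HTTP API V2 tags/list payload. A minimal self-contained sketch of that decoding step, with an inlined example response body (the repository name and tags are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Example body as returned by GET /v2/<name>/tags/list.
	body := []byte(`{"name":"library/busybox","tags":["1.36","latest"]}`)
	var res struct {
		Tags []string `json:"tags"`
	}
	if err := json.Unmarshal(body, &res); err != nil {
		panic(err)
	}
	fmt.Println(res.Tags) // [1.36 latest]
}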
329 vendor/github.com/containers/image/docker/docker_image_dest.go generated vendored Normal file
@@ -0,0 +1,329 @@
package docker

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"net/url"
	"os"
	"path/filepath"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
)

type dockerImageDestination struct {
	ref dockerReference
	c   *dockerClient
	// State
	manifestDigest digest.Digest // or "" if not yet known.
}

// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(ctx *types.SystemContext, ref dockerReference) (types.ImageDestination, error) {
	c, err := newDockerClient(ctx, ref, true)
	if err != nil {
		return nil, err
	}
	return &dockerImageDestination{
		ref: ref,
		c:   c,
	}, nil
}

// Reference returns the reference used to set up this destination.  Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *dockerImageDestination) Reference() types.ImageReference {
	return d.ref
}

// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dockerImageDestination) Close() {
}

func (d *dockerImageDestination) SupportedManifestMIMETypes() []string {
	return []string{
		// TODO(runcom): we'll add OCI as part of another PR here
		manifest.DockerV2Schema2MediaType,
		manifest.DockerV2Schema1SignedMediaType,
		manifest.DockerV2Schema1MediaType,
	}
}

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures() error {
	return fmt.Errorf("Pushing signatures to a Docker Registry is not supported")
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dockerImageDestination) ShouldCompressLayers() bool {
	return true
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
	return true
}

// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }

func (c *sizeCounter) Write(p []byte) (n int, err error) {
	c.size += int64(len(p))
	return len(p), nil
}

// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly.  Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
	if inputInfo.Digest.String() != "" {
		checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), inputInfo.Digest.String())

		logrus.Debugf("Checking %s", checkURL)
		res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
		if err != nil {
			return types.BlobInfo{}, err
		}
		defer res.Body.Close()
		switch res.StatusCode {
		case http.StatusOK:
			logrus.Debugf("... already exists, not uploading")
			return types.BlobInfo{Digest: inputInfo.Digest, Size: getBlobSize(res)}, nil
		case http.StatusUnauthorized:
			logrus.Debugf("... not authorized")
			return types.BlobInfo{}, fmt.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
		case http.StatusNotFound:
			// noop
		default:
			return types.BlobInfo{}, fmt.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
		}
		logrus.Debugf("... failed, status %d", res.StatusCode)
	}

	// FIXME? Chunked upload, progress reporting, etc.
	uploadURL := fmt.Sprintf(blobUploadURL, d.ref.ref.RemoteName())
	logrus.Debugf("Uploading %s", uploadURL)
	res, err := d.c.makeRequest("POST", uploadURL, nil, nil)
	if err != nil {
		return types.BlobInfo{}, err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusAccepted {
		logrus.Debugf("Error initiating layer upload, response %#v", *res)
		return types.BlobInfo{}, fmt.Errorf("Error initiating layer upload to %s, status %d", uploadURL, res.StatusCode)
	}
	uploadLocation, err := res.Location()
	if err != nil {
		return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
	}

	digester := digest.Canonical.New()
	sizeCounter := &sizeCounter{}
	tee := io.TeeReader(stream, io.MultiWriter(digester.Hash(), sizeCounter))
	res, err = d.c.makeRequestToResolvedURL("PATCH", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, tee, inputInfo.Size, true)
	if err != nil {
		// Note: res may be nil here, so it must not be dereferenced.
		logrus.Debugf("Error uploading layer chunked, response %#v", res)
		return types.BlobInfo{}, err
	}
	defer res.Body.Close()
	computedDigest := digester.Digest()

	uploadLocation, err = res.Location()
	if err != nil {
		return types.BlobInfo{}, fmt.Errorf("Error determining upload URL: %s", err.Error())
	}

	// FIXME: DELETE uploadLocation on failure

	locationQuery := uploadLocation.Query()
	// TODO: check inputInfo.Digest == computedDigest https://github.com/containers/image/pull/70#discussion_r77646717
	locationQuery.Set("digest", computedDigest.String())
	uploadLocation.RawQuery = locationQuery.Encode()
	res, err = d.c.makeRequestToResolvedURL("PUT", uploadLocation.String(), map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, true)
	if err != nil {
		return types.BlobInfo{}, err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusCreated {
		logrus.Debugf("Error uploading layer, response %#v", *res)
		return types.BlobInfo{}, fmt.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
	}

	logrus.Debugf("Upload of layer %s complete", computedDigest)
	return types.BlobInfo{Digest: computedDigest, Size: sizeCounter.size}, nil
}

func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
	if info.Digest == "" {
		return false, -1, fmt.Errorf("Can not check for a blob with unknown digest")
	}
	checkURL := fmt.Sprintf(blobsURL, d.ref.ref.RemoteName(), info.Digest.String())

	logrus.Debugf("Checking %s", checkURL)
	res, err := d.c.makeRequest("HEAD", checkURL, nil, nil)
	if err != nil {
		return false, -1, err
	}
	defer res.Body.Close()
	switch res.StatusCode {
	case http.StatusOK:
		logrus.Debugf("... already exists")
		return true, getBlobSize(res), nil
	case http.StatusUnauthorized:
		logrus.Debugf("... not authorized")
		return false, -1, fmt.Errorf("not authorized to read from destination repository %s", d.ref.ref.RemoteName())
	case http.StatusNotFound:
		logrus.Debugf("... not present")
		return false, -1, types.ErrBlobNotFound
	default:
		logrus.Errorf("failed to read from destination repository %s: %v", d.ref.ref.RemoteName(), http.StatusText(res.StatusCode))
	}
	logrus.Debugf("... failed, status %d, ignoring", res.StatusCode)
	return false, -1, types.ErrBlobNotFound
}

func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
	return info, nil
}

func (d *dockerImageDestination) PutManifest(m []byte) error {
	digest, err := manifest.Digest(m)
	if err != nil {
		return err
	}
	d.manifestDigest = digest

	reference, err := d.ref.tagOrDigest()
	if err != nil {
		return err
	}
	url := fmt.Sprintf(manifestURL, d.ref.ref.RemoteName(), reference)

	headers := map[string][]string{}
	mimeType := manifest.GuessMIMEType(m)
	if mimeType != "" {
		headers["Content-Type"] = []string{mimeType}
	}
	res, err := d.c.makeRequest("PUT", url, headers, bytes.NewReader(m))
	if err != nil {
		return err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusCreated {
		body, err := ioutil.ReadAll(res.Body)
		if err == nil {
			logrus.Debugf("Error body %s", string(body))
		}
		logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
		return fmt.Errorf("Error uploading manifest to %s, status %d", url, res.StatusCode)
	}
	return nil
}

func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
	// FIXME? This overwrites files one at a time, definitely not atomic.
	// A failure when updating signatures with a reordered copy could lose some of them.

	// Skip dealing with the manifest digest if not necessary.
	if len(signatures) == 0 {
		return nil
	}
	if d.c.signatureBase == nil {
		return fmt.Errorf("Pushing signatures to a Docker Registry is not supported, and there is no applicable signature storage configured")
	}

	if d.manifestDigest.String() == "" {
		// This shouldn’t happen, ImageDestination users are required to call PutManifest before PutSignatures
		return fmt.Errorf("Unknown manifest digest, can't add signatures")
	}

	for i, signature := range signatures {
		url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
		if url == nil {
			return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
		}
		err := d.putOneSignature(url, signature)
		if err != nil {
			return err
		}
	}
	// Remove any other signatures, if present.
	// We stop at the first missing signature; if a previous deleting loop aborted
	// prematurely, this may not clean up all of them, but one missing signature
	// is enough for dockerImageSource to stop looking for other signatures, so that
	// is sufficient.
	for i := len(signatures); ; i++ {
		url := signatureStorageURL(d.c.signatureBase, d.manifestDigest, i)
		if url == nil {
			return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
		}
		missing, err := d.c.deleteOneSignature(url)
		if err != nil {
			return err
		}
		if missing {
			break
		}
	}

	return nil
}

// putOneSignature stores one signature to url.
func (d *dockerImageDestination) putOneSignature(url *url.URL, signature []byte) error {
	switch url.Scheme {
	case "file":
		logrus.Debugf("Writing to %s", url.Path)
		err := os.MkdirAll(filepath.Dir(url.Path), 0755)
		if err != nil {
			return err
		}
		err = ioutil.WriteFile(url.Path, signature, 0644)
		if err != nil {
			return err
		}
		return nil

	case "http", "https":
		return fmt.Errorf("Writing directly to a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
	default:
		return fmt.Errorf("Unsupported scheme when writing signature to %s", url.String())
	}
}

// deleteOneSignature deletes a signature from url, if it exists.
// If it successfully determines that the signature does not exist, returns (true, nil)
func (c *dockerClient) deleteOneSignature(url *url.URL) (missing bool, err error) {
	switch url.Scheme {
	case "file":
		logrus.Debugf("Deleting %s", url.Path)
		err := os.Remove(url.Path)
		if err != nil && os.IsNotExist(err) {
			return true, nil
		}
		return false, err

	case "http", "https":
		return false, fmt.Errorf("Deleting directly from a %s sigstore %s is not supported. Configure a sigstore-staging: location", url.Scheme, url.String())
	default:
		return false, fmt.Errorf("Unsupported scheme when deleting signature from %s", url.String())
	}
}

// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *dockerImageDestination) Commit() error {
	return nil
}
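PutBlob above computes the layer digest and size as a side effect of streaming the PATCH body: the upload stream is teed into a digester and a byte counter, so both values are known exactly when the body has been consumed. A minimal standalone sketch of that tee pattern over an in-memory "layer" (using crypto/sha256 directly in place of the digest package):

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"io/ioutil"
)

// sizeCounter counts the total size of its input, as in the file above.
type sizeCounter struct{ size int64 }

func (c *sizeCounter) Write(p []byte) (int, error) {
	c.size += int64(len(p))
	return len(p), nil
}

func main() {
	layer := bytes.NewReader([]byte("example layer bytes"))
	h := sha256.New()
	sc := &sizeCounter{}
	tee := io.TeeReader(layer, io.MultiWriter(h, sc))

	// Stand-in for the HTTP PATCH request body that actually consumes the stream.
	if _, err := io.Copy(ioutil.Discard, tee); err != nil {
		panic(err)
	}
	fmt.Printf("digest: sha256:%x size: %d\n", h.Sum(nil), sc.size)
}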
325 vendor/github.com/containers/image/docker/docker_image_src.go generated vendored Normal file
@@ -0,0 +1,325 @@
package docker

import (
	"fmt"
	"io"
	"io/ioutil"
	"mime"
	"net/http"
	"net/url"
	"os"
	"strconv"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
	"github.com/docker/distribution/registry/client"
)

type dockerImageSource struct {
	ref                        dockerReference
	requestedManifestMIMETypes []string
	c                          *dockerClient
	// State
	cachedManifest         []byte // nil if not loaded yet
	cachedManifestMIMEType string // Only valid if cachedManifest != nil
}

// newImageSource creates a new ImageSource for the specified image reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref dockerReference, requestedManifestMIMETypes []string) (*dockerImageSource, error) {
	c, err := newDockerClient(ctx, ref, false)
	if err != nil {
		return nil, err
	}
	if requestedManifestMIMETypes == nil {
		requestedManifestMIMETypes = manifest.DefaultRequestedManifestMIMETypes
	}
	return &dockerImageSource{
		ref: ref,
		requestedManifestMIMETypes: requestedManifestMIMETypes,
		c: c,
	}, nil
}

// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims).  This can be used e.g. to determine which public keys are trusted for this image.
func (s *dockerImageSource) Reference() types.ImageReference {
	return s.ref
}

// Close removes resources associated with an initialized ImageSource, if any.
func (s *dockerImageSource) Close() {
}

// simplifyContentType drops parameters from a HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
// Alternatively, an empty string is returned unchanged, and invalid values are "simplified" to an empty string.
func simplifyContentType(contentType string) string {
	if contentType == "" {
		return contentType
	}
	mimeType, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return ""
	}
	return mimeType
}

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *dockerImageSource) GetManifest() ([]byte, string, error) {
	err := s.ensureManifestIsLoaded()
	if err != nil {
		return nil, "", err
	}
	return s.cachedManifest, s.cachedManifestMIMEType, nil
}

func (s *dockerImageSource) fetchManifest(tagOrDigest string) ([]byte, string, error) {
	url := fmt.Sprintf(manifestURL, s.ref.ref.RemoteName(), tagOrDigest)
	headers := make(map[string][]string)
	headers["Accept"] = s.requestedManifestMIMETypes
	res, err := s.c.makeRequest("GET", url, headers, nil)
	if err != nil {
		return nil, "", err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusOK {
		return nil, "", client.HandleErrorResponse(res)
	}
	manblob, err := ioutil.ReadAll(res.Body)
	if err != nil {
		return nil, "", err
	}
	return manblob, simplifyContentType(res.Header.Get("Content-Type")), nil
}

// GetTargetManifest returns an image's manifest given a digest.
// This is mainly used to retrieve a single image's manifest out of a manifest list.
func (s *dockerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
	return s.fetchManifest(digest.String())
}

// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
// but because our signatures are “attached” to the manifest digest,
// we need to ensure that the digest of the manifest returned by GetManifest
// and used by GetSignatures are consistent, otherwise we would get spurious
// signature verification failures when pulling while a tag is being updated.
func (s *dockerImageSource) ensureManifestIsLoaded() error {
	if s.cachedManifest != nil {
		return nil
	}

	reference, err := s.ref.tagOrDigest()
	if err != nil {
		return err
	}

	manblob, mt, err := s.fetchManifest(reference)
	if err != nil {
		return err
	}
	// We might validate manblob against the Docker-Content-Digest header here to protect against transport errors.
	s.cachedManifest = manblob
	s.cachedManifestMIMEType = mt
	return nil
}

func (s *dockerImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, error) {
	var (
		resp *http.Response
		err  error
	)
	for _, url := range urls {
		resp, err = s.c.makeRequestToResolvedURL("GET", url, nil, nil, -1, false)
		if err == nil {
			if resp.StatusCode != http.StatusOK {
				err = fmt.Errorf("error fetching external blob from %q: %d", url, resp.StatusCode)
				logrus.Debug(err)
				continue
			}
			break // Success; use this response.
		}
	}
	if err != nil {
		return nil, 0, err
	}
	return resp.Body, getBlobSize(resp), nil
}

func getBlobSize(resp *http.Response) int64 {
	size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
	if err != nil {
		size = -1
	}
	return size
}

// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
func (s *dockerImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
	if len(info.URLs) != 0 {
		return s.getExternalBlob(info.URLs)
	}

	url := fmt.Sprintf(blobsURL, s.ref.ref.RemoteName(), info.Digest.String())
	logrus.Debugf("Downloading %s", url)
	res, err := s.c.makeRequest("GET", url, nil, nil)
	if err != nil {
		return nil, 0, err
	}
	if res.StatusCode != http.StatusOK {
		// TODO: include url in the error message
		return nil, 0, fmt.Errorf("Invalid status code returned when fetching blob %d", res.StatusCode)
	}
	return res.Body, getBlobSize(res), nil
}

func (s *dockerImageSource) GetSignatures() ([][]byte, error) {
	if s.c.signatureBase == nil { // Skip dealing with the manifest digest if not necessary.
		return [][]byte{}, nil
	}

	if err := s.ensureManifestIsLoaded(); err != nil {
		return nil, err
	}
	manifestDigest, err := manifest.Digest(s.cachedManifest)
	if err != nil {
		return nil, err
	}

	signatures := [][]byte{}
	for i := 0; ; i++ {
		url := signatureStorageURL(s.c.signatureBase, manifestDigest, i)
		if url == nil {
			return nil, fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
		}
		signature, missing, err := s.getOneSignature(url)
		if err != nil {
			return nil, err
		}
		if missing {
			break
		}
		signatures = append(signatures, signature)
	}
	return signatures, nil
}

// getOneSignature downloads one signature from url.
// If it successfully determines that the signature does not exist, returns with missing set to true and error set to nil.
func (s *dockerImageSource) getOneSignature(url *url.URL) (signature []byte, missing bool, err error) {
	switch url.Scheme {
	case "file":
		logrus.Debugf("Reading %s", url.Path)
		sig, err := ioutil.ReadFile(url.Path)
		if err != nil {
			if os.IsNotExist(err) {
				return nil, true, nil
			}
			return nil, false, err
		}
		return sig, false, nil

	case "http", "https":
		logrus.Debugf("GET %s", url)
		res, err := s.c.client.Get(url.String())
		if err != nil {
			return nil, false, err
		}
		defer res.Body.Close()
		if res.StatusCode == http.StatusNotFound {
			return nil, true, nil
		} else if res.StatusCode != http.StatusOK {
			return nil, false, fmt.Errorf("Error reading signature from %s: status %d", url.String(), res.StatusCode)
		}
		sig, err := ioutil.ReadAll(res.Body)
		if err != nil {
			return nil, false, err
		}
		return sig, false, nil

	default:
		return nil, false, fmt.Errorf("Unsupported scheme when reading signature from %s", url.String())
	}
}

// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx *types.SystemContext, ref dockerReference) error {
	c, err := newDockerClient(ctx, ref, true)
	if err != nil {
		return err
	}

	// When retrieving the digest from a registry >= 2.3 use the following header:
	//   "Accept": "application/vnd.docker.distribution.manifest.v2+json"
	headers := make(map[string][]string)
	headers["Accept"] = []string{manifest.DockerV2Schema2MediaType}

	reference, err := ref.tagOrDigest()
	if err != nil {
		return err
	}
	getURL := fmt.Sprintf(manifestURL, ref.ref.RemoteName(), reference)
	get, err := c.makeRequest("GET", getURL, headers, nil)
	if err != nil {
		return err
	}
	defer get.Body.Close()
	manifestBody, err := ioutil.ReadAll(get.Body)
	if err != nil {
		return err
	}
	switch get.StatusCode {
	case http.StatusOK:
	case http.StatusNotFound:
		return fmt.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry", ref.ref)
	default:
		return fmt.Errorf("Failed to delete %v: %s (%v)", ref.ref, manifestBody, get.Status)
	}

	digest := get.Header.Get("Docker-Content-Digest")
	deleteURL := fmt.Sprintf(manifestURL, ref.ref.RemoteName(), digest)

	// When retrieving the digest from a registry >= 2.3 use the following header:
	//   "Accept": "application/vnd.docker.distribution.manifest.v2+json"
	delete, err := c.makeRequest("DELETE", deleteURL, headers, nil)
	if err != nil {
		return err
	}
	defer delete.Body.Close()

	body, err := ioutil.ReadAll(delete.Body)
	if err != nil {
		return err
	}
	if delete.StatusCode != http.StatusAccepted {
		return fmt.Errorf("Failed to delete %v: %s (%v)", deleteURL, string(body), delete.Status)
	}

	if c.signatureBase != nil {
		manifestDigest, err := manifest.Digest(manifestBody)
		if err != nil {
			return err
		}

		for i := 0; ; i++ {
			url := signatureStorageURL(c.signatureBase, manifestDigest, i)
			if url == nil {
				return fmt.Errorf("Internal error: signatureStorageURL with non-nil base returned nil")
			}
			missing, err := c.deleteOneSignature(url)
			if err != nil {
				return err
			}
			if missing {
				break
			}
		}
	}

	return nil
}
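simplifyContentType above leans on mime.ParseMediaType from the standard library, which splits a Content-Type header into the media type and its parameters; only the media type is kept. A standalone sketch of what gets stripped:

package main

import (
	"fmt"
	"mime"
)

func main() {
	ct := "application/vnd.docker.distribution.manifest.v2+json; charset=utf-8"
	mediaType, params, err := mime.ParseMediaType(ct)
	if err != nil {
		panic(err)
	}
	fmt.Println(mediaType) // application/vnd.docker.distribution.manifest.v2+json
	fmt.Println(params)    // map[charset:utf-8]
}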
155 vendor/github.com/containers/image/docker/docker_transport.go generated vendored Normal file
@@ -0,0 +1,155 @@
package docker

import (
	"fmt"
	"strings"

	"github.com/containers/image/docker/policyconfiguration"
	"github.com/containers/image/docker/reference"
	"github.com/containers/image/types"
)

// Transport is an ImageTransport for Docker registry-hosted images.
var Transport = dockerTransport{}

type dockerTransport struct{}

func (t dockerTransport) Name() string {
	return "docker"
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t dockerTransport) ParseReference(reference string) (types.ImageReference, error) {
	return ParseReference(reference)
}

// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t dockerTransport) ValidatePolicyConfigurationScope(scope string) error {
	// FIXME? We could be verifying the various character set and length restrictions
	// from docker/distribution/reference.regexp.go, but other than that there
	// are few semantically invalid strings.
	return nil
}

// dockerReference is an ImageReference for Docker images.
type dockerReference struct {
	ref reference.Named // By construction we know that !reference.IsNameOnly(ref)
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into a Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
	if !strings.HasPrefix(refString, "//") {
		return nil, fmt.Errorf("docker: image reference %s does not start with //", refString)
	}
	ref, err := reference.ParseNamed(strings.TrimPrefix(refString, "//"))
	if err != nil {
		return nil, err
	}
	ref = reference.WithDefaultTag(ref)
	return NewReference(ref)
}

// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
	if reference.IsNameOnly(ref) {
		return nil, fmt.Errorf("Docker reference %s has neither a tag nor a digest", ref.String())
	}
	// A github.com/docker/distribution/reference value can have a tag and a digest at the same time!
	// docker/reference does not handle that, so fail.
	// (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
	// the tag or the digest first?)
	_, isTagged := ref.(reference.NamedTagged)
	_, isDigested := ref.(reference.Canonical)
	if isTagged && isDigested {
		return nil, fmt.Errorf("Docker references with both a tag and digest are currently not supported")
	}
	return dockerReference{
		ref: ref,
	}, nil
}

func (ref dockerReference) Transport() types.ImageTransport {
	return Transport
}

// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref dockerReference) StringWithinTransport() string {
	return "//" + ref.ref.String()
}

// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref dockerReference) DockerReference() reference.Named {
	return ref.ref
}

// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref dockerReference) PolicyConfigurationIdentity() string {
	res, err := policyconfiguration.DockerReferenceIdentity(ref.ref)
	if res == "" || err != nil { // Coverage: Should never happen, NewReference above should refuse values which could cause a failure.
		panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
	}
	return res
}

// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref dockerReference) PolicyConfigurationNamespaces() []string {
	return policyconfiguration.DockerReferenceNamespaces(ref.ref)
}

// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref dockerReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
	return newImage(ctx, ref)
}

// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
	return newImageSource(ctx, ref, requestedManifestMIMETypes)
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dockerReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ctx, ref)
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref dockerReference) DeleteImage(ctx *types.SystemContext) error {
	return deleteImage(ctx, ref)
}

// tagOrDigest returns a tag or digest from the reference.
func (ref dockerReference) tagOrDigest() (string, error) {
	if ref, ok := ref.ref.(reference.Canonical); ok {
		return ref.Digest().String(), nil
	}
	if ref, ok := ref.ref.(reference.NamedTagged); ok {
		return ref.Tag(), nil
	}
	// This should not happen, NewReference above refuses reference.IsNameOnly values.
	return "", fmt.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", ref.ref.String())
}
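A minimal usage sketch of the transport above, assuming the vendored github.com/containers/image/docker package is importable from GOPATH; the exact strings printed depend on the normalization performed by the reference package further below:

package main

import (
	"fmt"

	"github.com/containers/image/docker"
)

func main() {
	// The leading "//" is required by ParseReference; the default tag
	// "latest" is filled in by reference.WithDefaultTag.
	ref, err := docker.ParseReference("//busybox")
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.StringWithinTransport())
	fmt.Println(ref.PolicyConfigurationIdentity())   // docker.io/library/busybox:latest
	fmt.Println(ref.PolicyConfigurationNamespaces()) // [docker.io/library/busybox docker.io/library docker.io]
}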
199 vendor/github.com/containers/image/docker/lookaside.go generated vendored Normal file
@@ -0,0 +1,199 @@
package docker

import (
	"fmt"
	"io/ioutil"
	"net/url"
	"os"
	"path"
	"path/filepath"
	"strings"

	"github.com/docker/distribution/digest"
	"github.com/ghodss/yaml"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/types"
)

// systemRegistriesDirPath is the path to registries.d, used for locating lookaside Docker signature storage.
// You can override this at build time with
// -ldflags '-X github.com/containers/image/docker.systemRegistriesDirPath=$your_path'
var systemRegistriesDirPath = builtinRegistriesDirPath

// builtinRegistriesDirPath is the path to registries.d.
// DO NOT change this, instead see systemRegistriesDirPath above.
const builtinRegistriesDirPath = "/etc/containers/registries.d"

// registryConfiguration is one of the files in registriesDirPath configuring lookaside locations, or the result of merging them all.
// NOTE: Keep this in sync with docs/registries.d.md!
type registryConfiguration struct {
	DefaultDocker *registryNamespace `json:"default-docker"`
	// The key is a namespace, using fully-expanded Docker reference format or parent namespaces (per dockerReference.PolicyConfiguration*),
	Docker map[string]registryNamespace `json:"docker"`
}

// registryNamespace defines lookaside locations for a single namespace.
type registryNamespace struct {
	SigStore        string `json:"sigstore"`         // For reading, and if SigStoreStaging is not present, for writing.
	SigStoreStaging string `json:"sigstore-staging"` // For writing only.
}

// signatureStorageBase is an "opaque" type representing a lookaside Docker signature storage.
// Users outside of this file should use configuredSignatureStorageBase and signatureStorageURL below.
type signatureStorageBase *url.URL // The only documented value is nil, meaning storage is not supported.

// configuredSignatureStorageBase reads configuration to find an appropriate signature storage URL for ref, for write access if “write”.
func configuredSignatureStorageBase(ctx *types.SystemContext, ref dockerReference, write bool) (signatureStorageBase, error) {
	// FIXME? Loading and parsing the config could be cached across calls.
	dirPath := registriesDirPath(ctx)
	logrus.Debugf(`Using registries.d directory %s for sigstore configuration`, dirPath)
	config, err := loadAndMergeConfig(dirPath)
	if err != nil {
		return nil, err
	}

	topLevel := config.signatureTopLevel(ref, write)
	if topLevel == "" {
		return nil, nil
	}

	url, err := url.Parse(topLevel)
	if err != nil {
		return nil, fmt.Errorf("Invalid signature storage URL %s: %v", topLevel, err)
	}
	// FIXME? Restrict to explicitly supported schemes?
	repo := ref.ref.FullName() // Note that this is without a tag or digest.
	if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
		return nil, fmt.Errorf("Unexpected path elements in Docker reference %s for signature storage", ref.ref.String())
	}
	url.Path = url.Path + "/" + repo
	return url, nil
}

// registriesDirPath returns a path to registries.d
func registriesDirPath(ctx *types.SystemContext) string {
	if ctx != nil {
		if ctx.RegistriesDirPath != "" {
			return ctx.RegistriesDirPath
		}
		if ctx.RootForImplicitAbsolutePaths != "" {
			return filepath.Join(ctx.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
		}
	}
	return systemRegistriesDirPath
}

// loadAndMergeConfig loads configuration files in dirPath
func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
	mergedConfig := registryConfiguration{Docker: map[string]registryNamespace{}}
	dockerDefaultMergedFrom := ""
	nsMergedFrom := map[string]string{}

	dir, err := os.Open(dirPath)
	if err != nil {
		if os.IsNotExist(err) {
			return &mergedConfig, nil
		}
		return nil, err
	}
	configNames, err := dir.Readdirnames(0)
	if err != nil {
		return nil, err
	}
	for _, configName := range configNames {
		if !strings.HasSuffix(configName, ".yaml") {
			continue
		}
		configPath := filepath.Join(dirPath, configName)
		configBytes, err := ioutil.ReadFile(configPath)
		if err != nil {
			return nil, err
		}

		var config registryConfiguration
		err = yaml.Unmarshal(configBytes, &config)
		if err != nil {
			return nil, fmt.Errorf("Error parsing %s: %v", configPath, err)
		}

		if config.DefaultDocker != nil {
			if mergedConfig.DefaultDocker != nil {
				return nil, fmt.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in "%s" and "%s"`,
					dockerDefaultMergedFrom, configPath)
			}
			mergedConfig.DefaultDocker = config.DefaultDocker
			dockerDefaultMergedFrom = configPath
		}

		for nsName, nsConfig := range config.Docker { // includes config.Docker == nil
			if _, ok := mergedConfig.Docker[nsName]; ok {
				return nil, fmt.Errorf(`Error parsing signature storage configuration: "docker" namespace "%s" defined both in "%s" and "%s"`,
					nsName, nsMergedFrom[nsName], configPath)
			}
			mergedConfig.Docker[nsName] = nsConfig
			nsMergedFrom[nsName] = configPath
		}
	}

	return &mergedConfig, nil
}

// config.signatureTopLevel returns a URL string configured in config for ref, for write access if “write”
// (the top level of the storage, namespaced by repo.FullName etc.), or "" if no signature storage should be used.
func (config *registryConfiguration) signatureTopLevel(ref dockerReference, write bool) string {
	if config.Docker != nil {
		// Look for a full match.
		identity := ref.PolicyConfigurationIdentity()
		if ns, ok := config.Docker[identity]; ok {
			logrus.Debugf(` Using "docker" namespace %s`, identity)
			if url := ns.signatureTopLevel(write); url != "" {
				return url
			}
		}

		// Look for a match of the possible parent namespaces.
		for _, name := range ref.PolicyConfigurationNamespaces() {
			if ns, ok := config.Docker[name]; ok {
				logrus.Debugf(` Using "docker" namespace %s`, name)
				if url := ns.signatureTopLevel(write); url != "" {
					return url
				}
			}
		}
	}
	// Look for a default location
	if config.DefaultDocker != nil {
		logrus.Debugf(` Using "default-docker" configuration`)
		if url := config.DefaultDocker.signatureTopLevel(write); url != "" {
			return url
		}
	}
	logrus.Debugf(" No signature storage configuration found for %s", ref.PolicyConfigurationIdentity())
	return ""
}

// ns.signatureTopLevel returns a URL string configured in ns, for write access if “write”,
// or "" if nothing has been configured.
func (ns registryNamespace) signatureTopLevel(write bool) string {
	if write && ns.SigStoreStaging != "" {
		logrus.Debugf(` Using %s`, ns.SigStoreStaging)
		return ns.SigStoreStaging
	}
	if ns.SigStore != "" {
		logrus.Debugf(` Using %s`, ns.SigStore)
		return ns.SigStore
	}
	return ""
}

// signatureStorageURL returns a URL usable for accessing signature index in base with known manifestDigest, or nil if not applicable.
// Returns nil iff base == nil.
func signatureStorageURL(base signatureStorageBase, manifestDigest digest.Digest, index int) *url.URL {
	if base == nil {
		return nil
	}
	url := *base
	url.Path = fmt.Sprintf("%s@%s/signature-%d", url.Path, manifestDigest.String(), index+1)
	return &url
}
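For reference, a minimal sketch of the file shape loadAndMergeConfig parses; the hostnames and paths in the inlined YAML are illustrative examples, not values taken from the repository docs. ghodss/yaml converts YAML to JSON internally, which is why the json struct tags above apply:

package main

import (
	"fmt"

	"github.com/ghodss/yaml"
)

// Local copies of the configuration types above, for a self-contained example.
type registryNamespace struct {
	SigStore        string `json:"sigstore"`
	SigStoreStaging string `json:"sigstore-staging"`
}

type registryConfiguration struct {
	DefaultDocker *registryNamespace           `json:"default-docker"`
	Docker        map[string]registryNamespace `json:"docker"`
}

func main() {
	data := []byte(`
default-docker:
  sigstore: https://sigstore.example.com
docker:
  docker.io/library:
    sigstore: https://sigstore.example.com/library
    sigstore-staging: file:///var/lib/example/sigstore
`)
	var config registryConfiguration
	if err := yaml.Unmarshal(data, &config); err != nil {
		panic(err)
	}
	fmt.Println(config.DefaultDocker.SigStore)                    // https://sigstore.example.com
	fmt.Println(config.Docker["docker.io/library"].SigStoreStaging) // file:///var/lib/example/sigstore
}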
57 vendor/github.com/containers/image/docker/policyconfiguration/naming.go generated vendored Normal file
@@ -0,0 +1,57 @@
package policyconfiguration

import (
	"errors"
	"fmt"
	"strings"

	"github.com/containers/image/docker/reference"
)

// DockerReferenceIdentity returns a string representation of the reference, suitable for policy lookup,
// as a backend for ImageReference.PolicyConfigurationIdentity.
// The reference must satisfy !reference.IsNameOnly().
func DockerReferenceIdentity(ref reference.Named) (string, error) {
	res := ref.FullName()
	tagged, isTagged := ref.(reference.NamedTagged)
	digested, isDigested := ref.(reference.Canonical)
	switch {
	case isTagged && isDigested: // This should not happen, docker/reference.ParseNamed drops the tag.
		return "", fmt.Errorf("Unexpected Docker reference %s with both a tag and a digest", ref.String())
	case !isTagged && !isDigested: // This should not happen, the caller is expected to ensure !reference.IsNameOnly()
		return "", fmt.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", ref.String())
	case isTagged:
		res = res + ":" + tagged.Tag()
	case isDigested:
		res = res + "@" + digested.Digest().String()
	default: // Coverage: The above was supposed to be exhaustive.
		return "", errors.New("Internal inconsistency, unexpected default branch")
	}
	return res, nil
}

// DockerReferenceNamespaces returns a list of other policy configuration namespaces to search,
// as a backend for ImageReference.PolicyConfigurationNamespaces.
// The reference must satisfy !reference.IsNameOnly().
func DockerReferenceNamespaces(ref reference.Named) []string {
	// Look for a match of the repository, and then of the possible parent
	// namespaces. Note that this only happens on the expanded host names
	// and repository names, i.e. "busybox" is looked up as "docker.io/library/busybox",
	// then in its parent "docker.io/library"; in none of "busybox",
	// un-namespaced "library" nor in "" supposedly implicitly representing "library/".
	//
	// ref.FullName() == ref.Hostname() + "/" + ref.RemoteName(), so the last
	// iteration matches the host name (for any namespace).
	res := []string{}
	name := ref.FullName()
	for {
		res = append(res, name)

		lastSlash := strings.LastIndex(name, "/")
		if lastSlash == -1 {
			break
		}
		name = name[:lastSlash]
	}
	return res
}
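A minimal standalone sketch of the parent-namespace walk above, applied to an already fully-expanded repository name:

package main

import (
	"fmt"
	"strings"
)

// namespaces repeatedly trims the last "/" component, mirroring the loop
// in DockerReferenceNamespaces.
func namespaces(name string) []string {
	res := []string{}
	for {
		res = append(res, name)
		lastSlash := strings.LastIndex(name, "/")
		if lastSlash == -1 {
			break
		}
		name = name[:lastSlash]
	}
	return res
}

func main() {
	fmt.Println(namespaces("docker.io/library/busybox"))
	// [docker.io/library/busybox docker.io/library docker.io]
}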
|
6
vendor/github.com/containers/image/docker/reference/doc.go
generated
vendored
Normal file
6
vendor/github.com/containers/image/docker/reference/doc.go
generated
vendored
Normal file
|
@ -0,0 +1,6 @@
|
|||
// Package reference is a fork of the upstream docker/docker/reference package.
// The package is forked because we need consistency especially when storing and
// checking signatures (RH patches break this consistency because they modify
// docker/docker/reference as part of a patch carried in projectatomic/docker).
// The version of this package is v1.12.1 from upstream, update as necessary.
package reference
220 vendor/github.com/containers/image/docker/reference/reference.go generated vendored Normal file
@@ -0,0 +1,220 @@
package reference

import (
    "errors"
    "fmt"
    "regexp"
    "strings"

    "github.com/docker/distribution/digest"
    distreference "github.com/docker/distribution/reference"
)

const (
    // DefaultTag defines the default tag used when performing images related actions and no tag or digest is specified
    DefaultTag = "latest"
    // DefaultHostname is the default built-in hostname
    DefaultHostname = "docker.io"
    // LegacyDefaultHostname is automatically converted to DefaultHostname
    LegacyDefaultHostname = "index.docker.io"
    // DefaultRepoPrefix is the prefix used for default repositories in default host
    DefaultRepoPrefix = "library/"
)

// Named is an object with a full name
type Named interface {
    // Name returns normalized repository name, like "ubuntu".
    Name() string
    // String returns full reference, like "ubuntu@sha256:abcdef..."
    String() string
    // FullName returns full repository name with hostname, like "docker.io/library/ubuntu"
    FullName() string
    // Hostname returns hostname for the reference, like "docker.io"
    Hostname() string
    // RemoteName returns the repository component of the full name, like "library/ubuntu"
    RemoteName() string
}

// NamedTagged is an object including a name and tag.
type NamedTagged interface {
    Named
    Tag() string
}

// Canonical reference is an object with a fully unique
// name including a name with hostname and digest
type Canonical interface {
    Named
    Digest() digest.Digest
}

// ParseNamed parses s and returns a syntactically valid reference implementing
// the Named interface. The reference must have a name, otherwise an error is
// returned.
// If an error was encountered it is returned, along with a nil Reference.
func ParseNamed(s string) (Named, error) {
    named, err := distreference.ParseNamed(s)
    if err != nil {
        return nil, fmt.Errorf("Error parsing reference: %q is not a valid repository/tag", s)
    }
    r, err := WithName(named.Name())
    if err != nil {
        return nil, err
    }
    if canonical, isCanonical := named.(distreference.Canonical); isCanonical {
        return WithDigest(r, canonical.Digest())
    }
    if tagged, isTagged := named.(distreference.NamedTagged); isTagged {
        return WithTag(r, tagged.Tag())
    }
    return r, nil
}

// WithName returns a named object representing the given string. If the input
// is invalid ErrReferenceInvalidFormat will be returned.
func WithName(name string) (Named, error) {
    name, err := normalize(name)
    if err != nil {
        return nil, err
    }
    if err := validateName(name); err != nil {
        return nil, err
    }
    r, err := distreference.WithName(name)
    if err != nil {
        return nil, err
    }
    return &namedRef{r}, nil
}

// WithTag combines the name from "name" and the tag from "tag" to form a
// reference incorporating both the name and the tag.
func WithTag(name Named, tag string) (NamedTagged, error) {
    r, err := distreference.WithTag(name, tag)
    if err != nil {
        return nil, err
    }
    return &taggedRef{namedRef{r}}, nil
}

// WithDigest combines the name from "name" and the digest from "digest" to form
// a reference incorporating both the name and the digest.
func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
    r, err := distreference.WithDigest(name, digest)
    if err != nil {
        return nil, err
    }
    return &canonicalRef{namedRef{r}}, nil
}

type namedRef struct {
    distreference.Named
}
type taggedRef struct {
    namedRef
}
type canonicalRef struct {
    namedRef
}

func (r *namedRef) FullName() string {
    hostname, remoteName := splitHostname(r.Name())
    return hostname + "/" + remoteName
}
func (r *namedRef) Hostname() string {
    hostname, _ := splitHostname(r.Name())
    return hostname
}
func (r *namedRef) RemoteName() string {
    _, remoteName := splitHostname(r.Name())
    return remoteName
}
func (r *taggedRef) Tag() string {
    return r.namedRef.Named.(distreference.NamedTagged).Tag()
}
func (r *canonicalRef) Digest() digest.Digest {
    return r.namedRef.Named.(distreference.Canonical).Digest()
}

// WithDefaultTag adds a default tag to a reference if it only has a repo name.
func WithDefaultTag(ref Named) Named {
    if IsNameOnly(ref) {
        ref, _ = WithTag(ref, DefaultTag)
    }
    return ref
}

// IsNameOnly returns true if reference only contains a repo name.
func IsNameOnly(ref Named) bool {
    if _, ok := ref.(NamedTagged); ok {
        return false
    }
    if _, ok := ref.(Canonical); ok {
        return false
    }
    return true
}

// ParseIDOrReference parses string for an image ID or a reference. ID can be
// without a default prefix.
func ParseIDOrReference(idOrRef string) (digest.Digest, Named, error) {
    if err := validateID(idOrRef); err == nil {
        idOrRef = "sha256:" + idOrRef
    }
    if dgst, err := digest.ParseDigest(idOrRef); err == nil {
        return dgst, nil, nil
    }
    ref, err := ParseNamed(idOrRef)
    return "", ref, err
}

// splitHostname splits a repository name to hostname and remotename string.
// If no valid hostname is found, the default hostname is used. Repository name
// needs to be already validated before.
func splitHostname(name string) (hostname, remoteName string) {
    i := strings.IndexRune(name, '/')
    if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
        hostname, remoteName = DefaultHostname, name
    } else {
        hostname, remoteName = name[:i], name[i+1:]
    }
    if hostname == LegacyDefaultHostname {
        hostname = DefaultHostname
    }
    if hostname == DefaultHostname && !strings.ContainsRune(remoteName, '/') {
        remoteName = DefaultRepoPrefix + remoteName
    }
    return
}

// normalize returns a repository name in its normalized form, meaning it
// will not contain default hostname nor library/ prefix for official images.
func normalize(name string) (string, error) {
    host, remoteName := splitHostname(name)
    if strings.ToLower(remoteName) != remoteName {
        return "", errors.New("invalid reference format: repository name must be lowercase")
    }
    if host == DefaultHostname {
        if strings.HasPrefix(remoteName, DefaultRepoPrefix) {
            return strings.TrimPrefix(remoteName, DefaultRepoPrefix), nil
        }
        return remoteName, nil
    }
    return name, nil
}

var validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)

func validateID(id string) error {
    if ok := validHex.MatchString(id); !ok {
        return fmt.Errorf("image ID %q is invalid", id)
    }
    return nil
}

func validateName(name string) error {
    if err := validateID(name); err == nil {
        return fmt.Errorf("Invalid repository name (%s), cannot specify 64-byte hexadecimal strings", name)
    }
    return nil
}
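The host-detection rule in splitHostname is worth seeing with concrete inputs: a first path component only counts as a registry host if it contains a '.' or ':' or equals "localhost". This sketch re-implements just that core rule for illustration (constants inlined; it deliberately omits the library/ prefixing and index.docker.io legacy steps that the real helper applies afterwards):

package main

import (
    "fmt"
    "strings"
)

// splitRepoName mirrors the core rule of the unexported splitHostname above.
func splitRepoName(name string) (string, string) {
    i := strings.IndexRune(name, '/')
    if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
        return "docker.io", name
    }
    return name[:i], name[i+1:]
}

func main() {
    for _, name := range []string{"busybox", "quay.io/coreos/etcd", "localhost/test", "myorg/app"} {
        host, remote := splitRepoName(name)
        fmt.Printf("%-22s -> host=%s remote=%s\n", name, host, remote)
    }
}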
159 vendor/github.com/containers/image/docker/wwwauthenticate.go generated vendored Normal file
@@ -0,0 +1,159 @@
package docker

// Based on github.com/docker/distribution/registry/client/auth/authchallenge.go, primarily stripping unnecessary dependencies.

import (
    "net/http"
    "strings"
)

// challenge carries information from a WWW-Authenticate response header.
// See RFC 7235.
type challenge struct {
    // Scheme is the auth-scheme according to RFC 7235
    Scheme string

    // Parameters are the auth-params according to RFC 7235
    Parameters map[string]string
}

// Octet types from RFC 7230.
type octetType byte

var octetTypes [256]octetType

const (
    isToken octetType = 1 << iota
    isSpace
)

func init() {
    // OCTET      = <any 8-bit sequence of data>
    // CHAR       = <any US-ASCII character (octets 0 - 127)>
    // CTL        = <any US-ASCII control character (octets 0 - 31) and DEL (127)>
    // CR         = <US-ASCII CR, carriage return (13)>
    // LF         = <US-ASCII LF, linefeed (10)>
    // SP         = <US-ASCII SP, space (32)>
    // HT         = <US-ASCII HT, horizontal-tab (9)>
    // <">        = <US-ASCII double-quote mark (34)>
    // CRLF       = CR LF
    // LWS        = [CRLF] 1*( SP | HT )
    // TEXT       = <any OCTET except CTLs, but including LWS>
    // separators = "(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\" | <">
    //            | "/" | "[" | "]" | "?" | "=" | "{" | "}" | SP | HT
    // token      = 1*<any CHAR except CTLs or separators>
    // qdtext     = <any TEXT except <">>

    for c := 0; c < 256; c++ {
        var t octetType
        isCtl := c <= 31 || c == 127
        isChar := 0 <= c && c <= 127
        isSeparator := strings.IndexRune(" \t\"(),/:;<=>?@[]\\{}", rune(c)) >= 0
        if strings.IndexRune(" \t\r\n", rune(c)) >= 0 {
            t |= isSpace
        }
        if isChar && !isCtl && !isSeparator {
            t |= isToken
        }
        octetTypes[c] = t
    }
}

func parseAuthHeader(header http.Header) []challenge {
    challenges := []challenge{}
    for _, h := range header[http.CanonicalHeaderKey("WWW-Authenticate")] {
        v, p := parseValueAndParams(h)
        if v != "" {
            challenges = append(challenges, challenge{Scheme: v, Parameters: p})
        }
    }
    return challenges
}

// NOTE: This is not a fully compliant parser per RFC 7235:
// Most notably it does not support more than one challenge within a single header
// Some of the whitespace parsing also seems noncompliant.
// But it is clearly better than what we used to have…
func parseValueAndParams(header string) (value string, params map[string]string) {
    params = make(map[string]string)
    value, s := expectToken(header)
    if value == "" {
        return
    }
    value = strings.ToLower(value)
    s = "," + skipSpace(s)
    for strings.HasPrefix(s, ",") {
        var pkey string
        pkey, s = expectToken(skipSpace(s[1:]))
        if pkey == "" {
            return
        }
        if !strings.HasPrefix(s, "=") {
            return
        }
        var pvalue string
        pvalue, s = expectTokenOrQuoted(s[1:])
        if pvalue == "" {
            return
        }
        pkey = strings.ToLower(pkey)
        params[pkey] = pvalue
        s = skipSpace(s)
    }
    return
}

func skipSpace(s string) (rest string) {
    i := 0
    for ; i < len(s); i++ {
        if octetTypes[s[i]]&isSpace == 0 {
            break
        }
    }
    return s[i:]
}

func expectToken(s string) (token, rest string) {
    i := 0
    for ; i < len(s); i++ {
        if octetTypes[s[i]]&isToken == 0 {
            break
        }
    }
    return s[:i], s[i:]
}

func expectTokenOrQuoted(s string) (value string, rest string) {
    if !strings.HasPrefix(s, "\"") {
        return expectToken(s)
    }
    s = s[1:]
    for i := 0; i < len(s); i++ {
        switch s[i] {
        case '"':
            return s[:i], s[i+1:]
        case '\\':
            p := make([]byte, len(s)-1)
            j := copy(p, s[:i])
            escape := true
            for i = i + 1; i < len(s); i++ {
                b := s[i]
                switch {
                case escape:
                    escape = false
                    p[j] = b
                    j++
                case b == '\\':
                    escape = true
                case b == '"':
                    return string(p[:j]), s[i+1:]
                default:
                    p[j] = b
                    j++
                }
            }
            return "", ""
        }
    }
    return "", ""
}
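To make the parser's output shape concrete: for a typical registry token challenge, parseAuthHeader lowercases the scheme and collects the unquoted auth-params. The expected decomposition, written out by hand (the header value is a representative made-up example, not captured from a real registry):

package main

import "fmt"

// parsedChallenge mirrors the unexported challenge struct, for illustration only.
type parsedChallenge struct {
    Scheme     string
    Parameters map[string]string
}

func main() {
    header := `Bearer realm="https://auth.example.com/token",service="registry.example.com",scope="repository:library/busybox:pull"`

    // parseAuthHeader would reduce the header above to:
    want := parsedChallenge{
        Scheme: "bearer",
        Parameters: map[string]string{
            "realm":   "https://auth.example.com/token",
            "service": "registry.example.com",
            "scope":   "repository:library/busybox:pull",
        },
    }
    fmt.Printf("input:  %s\nparsed: %+v\n", header, want)
}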
64 vendor/github.com/containers/image/image/docker_list.go generated vendored Normal file
@@ -0,0 +1,64 @@
package image

import (
    "encoding/json"
    "errors"
    "fmt"
    "runtime"

    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
)

type platformSpec struct {
    Architecture string   `json:"architecture"`
    OS           string   `json:"os"`
    OSVersion    string   `json:"os.version,omitempty"`
    OSFeatures   []string `json:"os.features,omitempty"`
    Variant      string   `json:"variant,omitempty"`
    Features     []string `json:"features,omitempty"`
}

// A manifestDescriptor references a platform-specific manifest.
type manifestDescriptor struct {
    descriptor
    Platform platformSpec `json:"platform"`
}

type manifestList struct {
    SchemaVersion int                  `json:"schemaVersion"`
    MediaType     string               `json:"mediaType"`
    Manifests     []manifestDescriptor `json:"manifests"`
}

func manifestSchema2FromManifestList(src types.ImageSource, manblob []byte) (genericManifest, error) {
    list := manifestList{}
    if err := json.Unmarshal(manblob, &list); err != nil {
        return nil, err
    }
    var targetManifestDigest digest.Digest
    for _, d := range list.Manifests {
        if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
            targetManifestDigest = d.Digest
            break
        }
    }
    if targetManifestDigest == "" {
        return nil, errors.New("no supported platform found in manifest list")
    }
    manblob, mt, err := src.GetTargetManifest(targetManifestDigest)
    if err != nil {
        return nil, err
    }

    matches, err := manifest.MatchesDigest(manblob, targetManifestDigest)
    if err != nil {
        return nil, fmt.Errorf("Error computing manifest digest: %v", err)
    }
    if !matches {
        return nil, fmt.Errorf("Manifest image does not match selected manifest digest %s", targetManifestDigest)
    }

    return manifestInstanceFromBlob(src, manblob, mt)
}
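The platform selection above is a simple first-match scan against the running program's GOARCH/GOOS. A reduced sketch with mock list entries (the digests are placeholders, not real values):

package main

import (
    "fmt"
    "runtime"
)

type platform struct{ Architecture, OS string }

type listEntry struct {
    Digest   string
    Platform platform
}

func main() {
    // Mock manifest-list entries; real ones come from the registry.
    manifests := []listEntry{
        {"sha256:placeholder-arm64", platform{"arm64", "linux"}},
        {"sha256:placeholder-amd64", platform{"amd64", "linux"}},
    }
    for _, d := range manifests {
        if d.Platform.Architecture == runtime.GOARCH && d.Platform.OS == runtime.GOOS {
            fmt.Println("selected:", d.Digest)
            return
        }
    }
    fmt.Println("no supported platform found in manifest list")
}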
327 vendor/github.com/containers/image/image/docker_schema1.go generated vendored Normal file
@@ -0,0 +1,327 @@
package image

import (
    "encoding/json"
    "errors"
    "fmt"
    "regexp"
    "strings"
    "time"

    "github.com/containers/image/docker/reference"
    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
)

var (
    validHex = regexp.MustCompile(`^([a-f0-9]{64})$`)
)

type fsLayersSchema1 struct {
    BlobSum digest.Digest `json:"blobSum"`
}

type historySchema1 struct {
    V1Compatibility string `json:"v1Compatibility"`
}

// v1Compatibility is the JSON structure stored as a string in historySchema1.V1Compatibility. It is similar to v1Image but not the same, in particular note the ThrowAway field.
type v1Compatibility struct {
    ID      string    `json:"id"`
    Parent  string    `json:"parent,omitempty"`
    Comment string    `json:"comment,omitempty"`
    Created time.Time `json:"created"`
    ContainerConfig struct {
        Cmd []string
    } `json:"container_config,omitempty"`
    Author    string `json:"author,omitempty"`
    ThrowAway bool   `json:"throwaway,omitempty"`
}

type manifestSchema1 struct {
    Name          string            `json:"name"`
    Tag           string            `json:"tag"`
    Architecture  string            `json:"architecture"`
    FSLayers      []fsLayersSchema1 `json:"fsLayers"`
    History       []historySchema1  `json:"history"`
    SchemaVersion int               `json:"schemaVersion"`
}

func manifestSchema1FromManifest(manifest []byte) (genericManifest, error) {
    mschema1 := &manifestSchema1{}
    if err := json.Unmarshal(manifest, mschema1); err != nil {
        return nil, err
    }
    if mschema1.SchemaVersion != 1 {
        return nil, fmt.Errorf("unsupported schema version %d", mschema1.SchemaVersion)
    }
    if len(mschema1.FSLayers) != len(mschema1.History) {
        return nil, errors.New("length of history not equal to number of layers")
    }
    if len(mschema1.FSLayers) == 0 {
        return nil, errors.New("no FSLayers in manifest")
    }

    if err := fixManifestLayers(mschema1); err != nil {
        return nil, err
    }
    return mschema1, nil
}

// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []fsLayersSchema1, history []historySchema1, architecture string) genericManifest {
    var name, tag string
    if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
        name = ref.RemoteName()
        if tagged, ok := ref.(reference.NamedTagged); ok {
            tag = tagged.Tag()
        }
    }
    return &manifestSchema1{
        Name:          name,
        Tag:           tag,
        Architecture:  architecture,
        FSLayers:      fsLayers,
        History:       history,
        SchemaVersion: 1,
    }
}

func (m *manifestSchema1) serialize() ([]byte, error) {
    // docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
    unsigned, err := json.Marshal(*m)
    if err != nil {
        return nil, err
    }
    return manifest.AddDummyV2S1Signature(unsigned)
}

func (m *manifestSchema1) manifestMIMEType() string {
    return manifest.DockerV2Schema1SignedMediaType
}

// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
    return types.BlobInfo{}
}

// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob() ([]byte, error) {
    return nil, nil
}

// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
    layers := make([]types.BlobInfo, len(m.FSLayers))
    for i, layer := range m.FSLayers { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
        layers[(len(m.FSLayers)-1)-i] = types.BlobInfo{Digest: layer.BlobSum, Size: -1}
    }
    return layers
}

func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
    v1 := &v1Image{}
    if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
        return nil, err
    }
    return &types.ImageInspectInfo{
        Tag:           m.Tag,
        DockerVersion: v1.DockerVersion,
        Created:       v1.Created,
        Labels:        v1.Config.Labels,
        Architecture:  v1.Architecture,
        Os:            v1.OS,
    }, nil
}

// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
    return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
}

// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
    copy := *m
    if options.LayerInfos != nil {
        // Our LayerInfos includes empty layers (where m.History.V1Compatibility->ThrowAway), so expect them to be included here as well.
        if len(copy.FSLayers) != len(options.LayerInfos) {
            return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.FSLayers), len(options.LayerInfos))
        }
        for i, info := range options.LayerInfos {
            // (docker push) sets up m.History.V1Compatibility->{Id,Parent} based on values of info.Digest,
            // but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
            // So, we don't bother recomputing the IDs in m.History.V1Compatibility.
            copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
        }
    }

    switch options.ManifestMIMEType {
    case "": // No conversion, OK
    case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
        // We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn’t a signature); so,
        // handle conversions between them by doing nothing.
    case manifest.DockerV2Schema2MediaType:
        return copy.convertToManifestSchema2(options.InformationOnly.LayerInfos, options.InformationOnly.LayerDiffIDs)
    default:
        return nil, fmt.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema1SignedMediaType, options.ManifestMIMEType)
    }

    return memoryImageFromManifest(&copy), nil
}

// fixManifestLayers, after validating the supplied manifest
// (to use correctly-formatted IDs, and to not have non-consecutive ID collisions in manifest.History),
// modifies manifest to only have one entry for each layer ID in manifest.History (deleting the older duplicates,
// both from manifest.History and manifest.FSLayers).
// Note that even after this succeeds, manifest.FSLayers may contain duplicate entries
// (for Dockerfile operations which change the configuration but not the filesystem).
func fixManifestLayers(manifest *manifestSchema1) error {
    type imageV1 struct {
        ID     string
        Parent string
    }
    // Per the specification, we can assume that len(manifest.FSLayers) == len(manifest.History)
    imgs := make([]*imageV1, len(manifest.FSLayers))
    for i := range manifest.FSLayers {
        img := &imageV1{}

        if err := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), img); err != nil {
            return err
        }

        imgs[i] = img
        if err := validateV1ID(img.ID); err != nil {
            return err
        }
    }
    if imgs[len(imgs)-1].Parent != "" {
        return errors.New("Invalid parent ID in the base layer of the image")
    }
    // check general duplicates to error instead of a deadlock
    idmap := make(map[string]struct{})
    var lastID string
    for _, img := range imgs {
        // skip IDs that appear after each other, we handle those later
        if _, exists := idmap[img.ID]; img.ID != lastID && exists {
            return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
        }
        lastID = img.ID
        idmap[lastID] = struct{}{}
    }
    // backwards loop so that we keep the remaining indexes after removing items
    for i := len(imgs) - 2; i >= 0; i-- {
        if imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue
            manifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)
            manifest.History = append(manifest.History[:i], manifest.History[i+1:]...)
        } else if imgs[i].Parent != imgs[i+1].ID {
            return fmt.Errorf("Invalid parent ID. Expected %v, got %v", imgs[i+1].ID, imgs[i].Parent)
        }
    }
    return nil
}

func validateV1ID(id string) error {
    if ok := validHex.MatchString(id); !ok {
        return fmt.Errorf("image ID %q is invalid", id)
    }
    return nil
}

// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(uploadedLayerInfos []types.BlobInfo, layerDiffIDs []digest.Digest) (types.Image, error) {
    if len(m.History) == 0 {
        // What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
        return nil, fmt.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
    }
    if len(m.History) != len(m.FSLayers) {
        return nil, fmt.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.History), len(m.FSLayers))
    }
    if len(uploadedLayerInfos) != len(m.FSLayers) {
        return nil, fmt.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.FSLayers))
    }
    if len(layerDiffIDs) != len(m.FSLayers) {
        return nil, fmt.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.FSLayers))
    }

    rootFS := rootFS{
        Type:      "layers",
        DiffIDs:   []digest.Digest{},
        BaseLayer: "",
    }
    var layers []descriptor
    history := make([]imageHistory, len(m.History))
    for v1Index := len(m.History) - 1; v1Index >= 0; v1Index-- {
        v2Index := (len(m.History) - 1) - v1Index

        var v1compat v1Compatibility
        if err := json.Unmarshal([]byte(m.History[v1Index].V1Compatibility), &v1compat); err != nil {
            return nil, fmt.Errorf("Error decoding history entry %d: %v", v1Index, err)
        }
        history[v2Index] = imageHistory{
            Created:    v1compat.Created,
            Author:     v1compat.Author,
            CreatedBy:  strings.Join(v1compat.ContainerConfig.Cmd, " "),
            Comment:    v1compat.Comment,
            EmptyLayer: v1compat.ThrowAway,
        }

        if !v1compat.ThrowAway {
            layers = append(layers, descriptor{
                MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
                Size:      uploadedLayerInfos[v2Index].Size,
                Digest:    m.FSLayers[v1Index].BlobSum,
            })
            rootFS.DiffIDs = append(rootFS.DiffIDs, layerDiffIDs[v2Index])
        }
    }
    configJSON, err := configJSONFromV1Config([]byte(m.History[0].V1Compatibility), rootFS, history)
    if err != nil {
        return nil, err
    }
    configDescriptor := descriptor{
        MediaType: "application/vnd.docker.container.image.v1+json",
        Size:      int64(len(configJSON)),
        Digest:    digest.FromBytes(configJSON),
    }

    m2 := manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers)
    return memoryImageFromManifest(m2), nil
}

func configJSONFromV1Config(v1ConfigJSON []byte, rootFS rootFS, history []imageHistory) ([]byte, error) {
    // github.com/docker/docker/image/v1/imagev1.go:MakeConfigFromV1Config unmarshals and re-marshals the input if docker_version is < 1.8.3 to remove blank fields;
    // we don't do that here. FIXME? Should we? AFAICT it would only affect the digest value of the schema2 manifest, and we don't particularly need that to be
    // a consistently reproducible value.

    // Preserve everything we don't specifically know about.
    // (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
    rawContents := map[string]*json.RawMessage{}
    if err := json.Unmarshal(v1ConfigJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
        return nil, err
    }

    delete(rawContents, "id")
    delete(rawContents, "parent")
    delete(rawContents, "Size")
    delete(rawContents, "parent_id")
    delete(rawContents, "layer_id")
    delete(rawContents, "throwaway")

    updates := map[string]interface{}{"rootfs": rootFS, "history": history}
    for field, value := range updates {
        encoded, err := json.Marshal(value)
        if err != nil {
            return nil, err
        }
        rawContents[field] = (*json.RawMessage)(&encoded)
    }
    return json.Marshal(rawContents)
}
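A detail that is easy to miss in both LayerInfos and UpdatedImage above: schema 1 stores fsLayers newest-first, so every translation to the root-first order used elsewhere goes through the (len-1)-i index flip. In miniature:

package main

import "fmt"

func main() {
    // Schema 1 fsLayers are listed newest-first; the BlobInfo list wants the
    // root layer first, hence the (len-1)-i flip used in LayerInfos above.
    fsLayers := []string{"layer-top", "layer-middle", "layer-root"}
    ordered := make([]string, len(fsLayers))
    for i, l := range fsLayers {
        ordered[(len(fsLayers)-1)-i] = l
    }
    fmt.Println(ordered) // [layer-root layer-middle layer-top]
}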
299 vendor/github.com/containers/image/image/docker_schema2.go generated vendored Normal file
@@ -0,0 +1,299 @@
package image

import (
    "bytes"
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "strings"

    "github.com/Sirupsen/logrus"
    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
)

// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes)
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so let’s use the same values.
var gzippedEmptyLayer = []byte{
    31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
    0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}

// gzippedEmptyLayerDigest is a digest of gzippedEmptyLayer
const gzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")

type descriptor struct {
    MediaType string        `json:"mediaType"`
    Size      int64         `json:"size"`
    Digest    digest.Digest `json:"digest"`
    URLs      []string      `json:"urls,omitempty"`
}

type manifestSchema2 struct {
    src               types.ImageSource // May be nil if configBlob is not nil
    configBlob        []byte            // If set, corresponds to contents of ConfigDescriptor.
    SchemaVersion     int               `json:"schemaVersion"`
    MediaType         string            `json:"mediaType"`
    ConfigDescriptor  descriptor        `json:"config"`
    LayersDescriptors []descriptor      `json:"layers"`
}

func manifestSchema2FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
    v2s2 := manifestSchema2{src: src}
    if err := json.Unmarshal(manifest, &v2s2); err != nil {
        return nil, err
    }
    return &v2s2, nil
}

// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config descriptor, src types.ImageSource, configBlob []byte, layers []descriptor) genericManifest {
    return &manifestSchema2{
        src:               src,
        configBlob:        configBlob,
        SchemaVersion:     2,
        MediaType:         manifest.DockerV2Schema2MediaType,
        ConfigDescriptor:  config,
        LayersDescriptors: layers,
    }
}

func (m *manifestSchema2) serialize() ([]byte, error) {
    return json.Marshal(*m)
}

func (m *manifestSchema2) manifestMIMEType() string {
    return m.MediaType
}

// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
    return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}

// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob() ([]byte, error) {
    if m.configBlob == nil {
        if m.src == nil {
            return nil, fmt.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
        }
        stream, _, err := m.src.GetBlob(types.BlobInfo{
            Digest: m.ConfigDescriptor.Digest,
            Size:   m.ConfigDescriptor.Size,
            URLs:   m.ConfigDescriptor.URLs,
        })
        if err != nil {
            return nil, err
        }
        defer stream.Close()
        blob, err := ioutil.ReadAll(stream)
        if err != nil {
            return nil, err
        }
        computedDigest := digest.FromBytes(blob)
        if computedDigest != m.ConfigDescriptor.Digest {
            return nil, fmt.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
        }
        m.configBlob = blob
    }
    return m.configBlob, nil
}

// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
    blobs := []types.BlobInfo{}
    for _, layer := range m.LayersDescriptors {
        blobs = append(blobs, types.BlobInfo{
            Digest: layer.Digest,
            Size:   layer.Size,
            URLs:   layer.URLs,
        })
    }
    return blobs
}

func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
    config, err := m.ConfigBlob()
    if err != nil {
        return nil, err
    }
    v1 := &v1Image{}
    if err := json.Unmarshal(config, v1); err != nil {
        return nil, err
    }
    return &types.ImageInspectInfo{
        DockerVersion: v1.DockerVersion,
        Created:       v1.Created,
        Labels:        v1.Config.Labels,
        Architecture:  v1.Architecture,
        Os:            v1.OS,
    }, nil
}

// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
    return false
}

// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
    copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
    if options.LayerInfos != nil {
        if len(copy.LayersDescriptors) != len(options.LayerInfos) {
            return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
        }
        copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
        for i, info := range options.LayerInfos {
            copy.LayersDescriptors[i].Digest = info.Digest
            copy.LayersDescriptors[i].Size = info.Size
            copy.LayersDescriptors[i].URLs = info.URLs
        }
    }

    switch options.ManifestMIMEType {
    case "": // No conversion, OK
    case manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType:
        return copy.convertToManifestSchema1(options.InformationOnly.Destination)
    default:
        return nil, fmt.Errorf("Conversion of image manifest from %s to %s is not implemented", manifest.DockerV2Schema2MediaType, options.ManifestMIMEType)
    }

    return memoryImageFromManifest(&copy), nil
}

// Based on docker/distribution/manifest/schema1/config_builder.go
func (m *manifestSchema2) convertToManifestSchema1(dest types.ImageDestination) (types.Image, error) {
    configBytes, err := m.ConfigBlob()
    if err != nil {
        return nil, err
    }
    imageConfig := &image{}
    if err := json.Unmarshal(configBytes, imageConfig); err != nil {
        return nil, err
    }

    // Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
    fsLayers := make([]fsLayersSchema1, len(imageConfig.History))
    history := make([]historySchema1, len(imageConfig.History))
    nonemptyLayerIndex := 0
    var parentV1ID string // Set in the loop
    v1ID := ""
    haveGzippedEmptyLayer := false
    if len(imageConfig.History) == 0 {
        // What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
        return nil, fmt.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema1SignedMediaType)
    }
    for v2Index, historyEntry := range imageConfig.History {
        parentV1ID = v1ID
        v1Index := len(imageConfig.History) - 1 - v2Index

        var blobDigest digest.Digest
        if historyEntry.EmptyLayer {
            if !haveGzippedEmptyLayer {
                logrus.Debugf("Uploading empty layer during conversion to schema 1")
                info, err := dest.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Digest: gzippedEmptyLayerDigest, Size: int64(len(gzippedEmptyLayer))})
                if err != nil {
                    return nil, fmt.Errorf("Error uploading empty layer: %v", err)
                }
                if info.Digest != gzippedEmptyLayerDigest {
                    return nil, fmt.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, gzippedEmptyLayerDigest)
                }
                haveGzippedEmptyLayer = true
            }
            blobDigest = gzippedEmptyLayerDigest
        } else {
            if nonemptyLayerIndex >= len(m.LayersDescriptors) {
                return nil, fmt.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.LayersDescriptors))
            }
            blobDigest = m.LayersDescriptors[nonemptyLayerIndex].Digest
            nonemptyLayerIndex++
        }

        // AFAICT pull ignores these ID values, at least nowadays, so we could use anything unique, including a simple counter. Use what Docker uses for cargo-cult consistency.
        v, err := v1IDFromBlobDigestAndComponents(blobDigest, parentV1ID)
        if err != nil {
            return nil, err
        }
        v1ID = v

        fakeImage := v1Compatibility{
            ID:        v1ID,
            Parent:    parentV1ID,
            Comment:   historyEntry.Comment,
            Created:   historyEntry.Created,
            Author:    historyEntry.Author,
            ThrowAway: historyEntry.EmptyLayer,
        }
        fakeImage.ContainerConfig.Cmd = []string{historyEntry.CreatedBy}
        v1CompatibilityBytes, err := json.Marshal(&fakeImage)
        if err != nil {
            return nil, fmt.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
        }

        fsLayers[v1Index] = fsLayersSchema1{BlobSum: blobDigest}
        history[v1Index] = historySchema1{V1Compatibility: string(v1CompatibilityBytes)}
        // Note that parentV1ID of the top layer is preserved when exiting this loop
    }

    // Now patch in real configuration for the top layer (v1Index == 0)
    v1ID, err = v1IDFromBlobDigestAndComponents(fsLayers[0].BlobSum, parentV1ID, string(configBytes)) // See above WRT v1ID value generation and cargo-cult consistency.
    if err != nil {
        return nil, err
    }
    v1Config, err := v1ConfigFromConfigJSON(configBytes, v1ID, parentV1ID, imageConfig.History[len(imageConfig.History)-1].EmptyLayer)
    if err != nil {
        return nil, err
    }
    history[0].V1Compatibility = string(v1Config)

    m1 := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
    return memoryImageFromManifest(m1), nil
}

func v1IDFromBlobDigestAndComponents(blobDigest digest.Digest, others ...string) (string, error) {
    if err := blobDigest.Validate(); err != nil {
        return "", err
    }
    parts := append([]string{blobDigest.Hex()}, others...)
    v1IDHash := sha256.Sum256([]byte(strings.Join(parts, " ")))
    return hex.EncodeToString(v1IDHash[:]), nil
}

func v1ConfigFromConfigJSON(configJSON []byte, v1ID, parentV1ID string, throwaway bool) ([]byte, error) {
    // Preserve everything we don't specifically know about.
    // (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
    rawContents := map[string]*json.RawMessage{}
    if err := json.Unmarshal(configJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
        return nil, err
    }
    delete(rawContents, "rootfs")
    delete(rawContents, "history")

    updates := map[string]interface{}{"id": v1ID}
    if parentV1ID != "" {
        updates["parent"] = parentV1ID
    }
    if throwaway {
        updates["throwaway"] = throwaway
    }
    for field, value := range updates {
        encoded, err := json.Marshal(value)
        if err != nil {
            return nil, err
        }
        rawContents[field] = (*json.RawMessage)(&encoded)
    }
    return json.Marshal(rawContents)
}
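The v1 ID scheme used during conversion is reproducible with nothing but the standard library: the blob digest's hex part and any extra components are joined with single spaces and hashed with SHA-256. A sketch (the digest is the empty-layer digest defined above; note that an empty parent ID still contributes a separator, matching v1IDFromBlobDigestAndComponents):

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "strings"
)

func main() {
    blobHex := "a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
    parentV1ID := "" // base layer: no parent, but still joined in

    parts := append([]string{blobHex}, parentV1ID)
    sum := sha256.Sum256([]byte(strings.Join(parts, " ")))
    fmt.Println(hex.EncodeToString(sum[:]))
}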
121 vendor/github.com/containers/image/image/manifest.go generated vendored Normal file
@@ -0,0 +1,121 @@
package image

import (
    "time"

    "github.com/docker/distribution/digest"
    "github.com/docker/engine-api/types/strslice"

    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

type config struct {
    Cmd    strslice.StrSlice
    Labels map[string]string
}

type v1Image struct {
    ID              string    `json:"id,omitempty"`
    Parent          string    `json:"parent,omitempty"`
    Comment         string    `json:"comment,omitempty"`
    Created         time.Time `json:"created"`
    ContainerConfig *config   `json:"container_config,omitempty"`
    DockerVersion   string    `json:"docker_version,omitempty"`
    Author          string    `json:"author,omitempty"`
    // Config is the configuration of the container received from the client
    Config *config `json:"config,omitempty"`
    // Architecture is the hardware that the image is built and runs on
    Architecture string `json:"architecture,omitempty"`
    // OS is the operating system used to build and run the image
    OS string `json:"os,omitempty"`
}

type image struct {
    v1Image
    History []imageHistory `json:"history,omitempty"`
    RootFS  *rootFS        `json:"rootfs,omitempty"`
}

type imageHistory struct {
    Created    time.Time `json:"created"`
    Author     string    `json:"author,omitempty"`
    CreatedBy  string    `json:"created_by,omitempty"`
    Comment    string    `json:"comment,omitempty"`
    EmptyLayer bool      `json:"empty_layer,omitempty"`
}

type rootFS struct {
    Type      string          `json:"type"`
    DiffIDs   []digest.Digest `json:"diff_ids,omitempty"`
    BaseLayer string          `json:"base_layer,omitempty"`
}

// genericManifest is an interface for parsing, modifying image manifests and related data.
// Note that the public methods are intended to be a subset of types.Image
// so that embedding a genericManifest into structs works.
// will support v1 one day...
type genericManifest interface {
    serialize() ([]byte, error)
    manifestMIMEType() string
    // ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
    // Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
    ConfigInfo() types.BlobInfo
    // ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
    // The result is cached; it is OK to call this however often you need.
    ConfigBlob() ([]byte, error)
    // LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
    // The Digest field is guaranteed to be provided; Size may be -1.
    // WARNING: The list may contain duplicates, and they are semantically relevant.
    LayerInfos() []types.BlobInfo
    imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
    // UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
    // This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
    // (most importantly it forces us to download the full layers even if they are already present at the destination).
    UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
    // UpdatedImage returns a types.Image modified according to options.
    // This does not change the state of the original Image object.
    UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error)
}

func manifestInstanceFromBlob(src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
    switch mt {
    // "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
    // This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
    // need to happen within the ImageSource.
    case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType, "application/json":
        return manifestSchema1FromManifest(manblob)
    case imgspecv1.MediaTypeImageManifest:
        return manifestOCI1FromManifest(src, manblob)
    case manifest.DockerV2Schema2MediaType:
        return manifestSchema2FromManifest(src, manblob)
    case manifest.DockerV2ListMediaType:
        return manifestSchema2FromManifestList(src, manblob)
    default:
        // If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
        // to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
        // and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
        //
        // Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
        // This makes no real sense, but it happens because requests for manifests are redirected
        // to a content distribution network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
        return manifestSchema1FromManifest(manblob)
    }
}

// inspectManifest is an implementation of types.Image.Inspect
func inspectManifest(m genericManifest) (*types.ImageInspectInfo, error) {
    info, err := m.imageInspectInfo()
    if err != nil {
        return nil, err
    }
    layers := m.LayerInfos()
    info.Layers = make([]string, len(layers))
    for i, layer := range layers {
        info.Layers[i] = layer.Digest.String()
    }
    return info, nil
}
74 vendor/github.com/containers/image/image/memory.go generated vendored Normal file
@@ -0,0 +1,74 @@
package image

import (
    "errors"

    "github.com/containers/image/types"
)

// memoryImage is a mostly-complete implementation of types.Image assembled from data
// created in memory, used primarily as a return value of types.Image.UpdatedImage
// as a way to carry various structured information in a type-safe and easy-to-use way.
// Note that this _only_ carries the immediate metadata; it is _not_ a stand-alone
// collection of all related information, e.g. there is no way to get layer blobs
// from a memoryImage.
type memoryImage struct {
    genericManifest
    serializedManifest []byte // A private cache for Manifest()
}

func memoryImageFromManifest(m genericManifest) types.Image {
    return &memoryImage{
        genericManifest:    m,
        serializedManifest: nil,
    }
}

// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *memoryImage) Reference() types.ImageReference {
    // It would really be inappropriate to return the ImageReference of the image this was based on.
    return nil
}

// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *memoryImage) Close() {
}

// Size returns the size of the image as stored, if known, or -1 if not.
func (i *memoryImage) Size() (int64, error) {
    s, err := i.serialize()
    if err != nil {
        return -1, err
    }
    return int64(len(s)), nil
}

// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Manifest() ([]byte, string, error) {
    if i.serializedManifest == nil {
        m, err := i.genericManifest.serialize()
        if err != nil {
            return nil, "", err
        }
        i.serializedManifest = m
    }
    return i.serializedManifest, i.genericManifest.manifestMIMEType(), nil
}

// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures() ([][]byte, error) {
    // Modifying an image invalidates signatures; a caller asking the updated image for signatures
    // is probably confused.
    return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
}

// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
    return inspectManifest(i.genericManifest)
}

// IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
func (i *memoryImage) IsMultiImage() bool {
    return false
}
167 vendor/github.com/containers/image/image/oci.go generated vendored Normal file
@@ -0,0 +1,167 @@
package image

import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    "github.com/containers/image/manifest"
    "github.com/containers/image/types"
    "github.com/docker/distribution/digest"
    imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

type manifestOCI1 struct {
    src               types.ImageSource // May be nil if configBlob is not nil
    configBlob        []byte            // If set, corresponds to contents of ConfigDescriptor.
    SchemaVersion     int               `json:"schemaVersion"`
    MediaType         string            `json:"mediaType"`
    ConfigDescriptor  descriptor        `json:"config"`
    LayersDescriptors []descriptor      `json:"layers"`
}

func manifestOCI1FromManifest(src types.ImageSource, manifest []byte) (genericManifest, error) {
    oci := manifestOCI1{src: src}
    if err := json.Unmarshal(manifest, &oci); err != nil {
        return nil, err
    }
    return &oci, nil
}

// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config descriptor, configBlob []byte, layers []descriptor) genericManifest {
    return &manifestOCI1{
        src:               nil,
        configBlob:        configBlob,
        SchemaVersion:     2,
        MediaType:         imgspecv1.MediaTypeImageManifest,
        ConfigDescriptor:  config,
        LayersDescriptors: layers,
    }
}

func (m *manifestOCI1) serialize() ([]byte, error) {
    return json.Marshal(*m)
}

func (m *manifestOCI1) manifestMIMEType() string {
    return m.MediaType
}

// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
    return types.BlobInfo{Digest: m.ConfigDescriptor.Digest, Size: m.ConfigDescriptor.Size}
}

// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestOCI1) ConfigBlob() ([]byte, error) {
    if m.configBlob == nil {
        if m.src == nil {
            return nil, fmt.Errorf("Internal error: neither src nor configBlob set in manifestOCI1")
        }
        stream, _, err := m.src.GetBlob(types.BlobInfo{
            Digest: m.ConfigDescriptor.Digest,
            Size:   m.ConfigDescriptor.Size,
            URLs:   m.ConfigDescriptor.URLs,
        })
        if err != nil {
            return nil, err
        }
        defer stream.Close()
        blob, err := ioutil.ReadAll(stream)
        if err != nil {
            return nil, err
        }
        computedDigest := digest.FromBytes(blob)
        if computedDigest != m.ConfigDescriptor.Digest {
            return nil, fmt.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.ConfigDescriptor.Digest)
        }
        m.configBlob = blob
    }
    return m.configBlob, nil
}

// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
    blobs := []types.BlobInfo{}
    for _, layer := range m.LayersDescriptors {
        blobs = append(blobs, types.BlobInfo{Digest: layer.Digest, Size: layer.Size})
    }
    return blobs
}

func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
    config, err := m.ConfigBlob()
    if err != nil {
        return nil, err
    }
    v1 := &v1Image{}
    if err := json.Unmarshal(config, v1); err != nil {
        return nil, err
    }
    return &types.ImageInspectInfo{
        DockerVersion: v1.DockerVersion,
        Created:       v1.Created,
        Labels:        v1.Config.Labels,
        Architecture:  v1.Architecture,
        Os:            v1.OS,
    }, nil
}

// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
    return false
}

// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.Image, error) {
    copy := *m // NOTE: This is not a deep copy, it still shares slices etc.
    if options.LayerInfos != nil {
        if len(copy.LayersDescriptors) != len(options.LayerInfos) {
            return nil, fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(copy.LayersDescriptors), len(options.LayerInfos))
        }
        copy.LayersDescriptors = make([]descriptor, len(options.LayerInfos))
        for i, info := range options.LayerInfos {
            copy.LayersDescriptors[i].Digest = info.Digest
            copy.LayersDescriptors[i].Size = info.Size
        }
    }

    switch options.ManifestMIMEType {
    case "": // No conversion, OK
    case manifest.DockerV2Schema2MediaType:
        return copy.convertToManifestSchema2()
    default:
        return nil, fmt.Errorf("Conversion of image manifest from %s to %s is not implemented", imgspecv1.MediaTypeImageManifest, options.ManifestMIMEType)
    }

    return memoryImageFromManifest(&copy), nil
}

func (m *manifestOCI1) convertToManifestSchema2() (types.Image, error) {
    // Create a copy of the descriptor.
    config := m.ConfigDescriptor

    // The only difference between OCI and DockerSchema2 is the mediatypes. The
    // media type of the manifest is handled by manifestSchema2FromComponents.
    config.MediaType = manifest.DockerV2Schema2ConfigMediaType

    layers := make([]descriptor, len(m.LayersDescriptors))
    for idx := range layers {
        layers[idx] = m.LayersDescriptors[idx]
        layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
    }

// Rather than copying the ConfigBlob now, we just pass m.src to the
|
||||
// translated manifest, since the only difference is the mediatype of
|
||||
// descriptors there is no change to any blob stored in m.src.
|
||||
m1 := manifestSchema2FromComponents(config, m.src, nil, layers)
|
||||
return memoryImageFromManifest(m1), nil
|
||||
}
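The conversion above only rewrites descriptor media types; no blob content changes. A minimal sketch of driving it from outside the package through the exported types.Image interface, assuming img was obtained elsewhere (e.g. via a transport's NewImage) and assuming UpdatedImage is available on types.Image in this vintage of the library; the helper name toSchema2 is illustrative, not library API:

package example

import (
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// toSchema2 asks an image to re-render its manifest as Docker schema 2.
// For an OCI image this routes through convertToManifestSchema2 above.
func toSchema2(img types.Image) (types.Image, error) {
	return img.UpdatedImage(types.ManifestUpdateOptions{
		ManifestMIMEType: manifest.DockerV2Schema2MediaType,
	})
}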
90  vendor/github.com/containers/image/image/sourced.go generated vendored Normal file
@@ -0,0 +1,90 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image

import (
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// FromSource returns a types.Image implementation for source.
// The caller must call .Close() on the returned Image.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
func FromSource(src types.ImageSource) (types.Image, error) {
	return FromUnparsedImage(UnparsedFromSource(src))
}

// sourcedImage is a general set of utilities for working with container images,
// whatever their underlying location (i.e. dockerImageSource-independent).
// Note the existence of skopeo/docker.Image: some instances of a `types.Image`
// may not be a `sourcedImage` directly. However, most users of `types.Image`
// do not care, and those who care about `skopeo/docker.Image` know they do.
type sourcedImage struct {
	*UnparsedImage
	manifestBlob     []byte
	manifestMIMEType string
	// genericManifest contains data corresponding to manifestBlob.
	// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store genericManifest
	// if you want to preserve the original manifest; use manifestBlob directly.
	genericManifest
}

// FromUnparsedImage returns a types.Image implementation for unparsed.
// The caller must call .Close() on the returned Image.
//
// FromUnparsedImage “takes ownership” of the input UnparsedImage and will call unparsed.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the Image.)
func FromUnparsedImage(unparsed *UnparsedImage) (types.Image, error) {
	// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
	// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
	// this is the only UnparsedImage implementation around, anyway.

	// Also, we do not explicitly implement types.Image.Close; we let the implementation fall through to
	// unparsed.Close.

	// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
	manifestBlob, manifestMIMEType, err := unparsed.Manifest()
	if err != nil {
		return nil, err
	}

	parsedManifest, err := manifestInstanceFromBlob(unparsed.src, manifestBlob, manifestMIMEType)
	if err != nil {
		return nil, err
	}

	return &sourcedImage{
		UnparsedImage:    unparsed,
		manifestBlob:     manifestBlob,
		manifestMIMEType: manifestMIMEType,
		genericManifest:  parsedManifest,
	}, nil
}

// Size returns the size of the image as stored, if it's known, or -1 if it isn't.
func (i *sourcedImage) Size() (int64, error) {
	return -1, nil
}

// Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
func (i *sourcedImage) Manifest() ([]byte, string, error) {
	return i.manifestBlob, i.manifestMIMEType, nil
}

func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
	return inspectManifest(i.genericManifest)
}

func (i *sourcedImage) IsMultiImage() bool {
	return i.manifestMIMEType == manifest.DockerV2ListMediaType
}
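A minimal sketch of the ownership rule documented above: wrap an ImageSource and close only the resulting Image, which in turn closes the source. The helper name imageInfo is illustrative, not library API:

package example

import (
	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

// imageInfo wraps src into an Image and inspects it.
// FromSource takes ownership of src, so img.Close() also closes src.
func imageInfo(src types.ImageSource) (*types.ImageInspectInfo, error) {
	img, err := image.FromSource(src)
	if err != nil {
		return nil, err
	}
	defer img.Close()
	return img.Inspect()
}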
83  vendor/github.com/containers/image/image/unparsed.go generated vendored Normal file
@@ -0,0 +1,83 @@
package image

import (
	"fmt"

	"github.com/containers/image/docker/reference"
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
)

// UnparsedImage implements types.UnparsedImage.
type UnparsedImage struct {
	src            types.ImageSource
	cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
	// A private cache for Manifest(), may be the empty string if guessing failed.
	// Valid iff cachedManifest is not nil.
	cachedManifestMIMEType string
	cachedSignatures       [][]byte // A private cache for Signatures(); nil if not yet known.
}

// UnparsedFromSource returns a types.UnparsedImage implementation for source.
// The caller must call .Close() on the returned UnparsedImage.
//
// UnparsedFromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// UnparsedImage and ImageSource objects simultaneously, but it means that they only need to
// keep a reference to the UnparsedImage.)
func UnparsedFromSource(src types.ImageSource) *UnparsedImage {
	return &UnparsedImage{src: src}
}

// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *UnparsedImage) Reference() types.ImageReference {
	return i.src.Reference()
}

// Close removes resources associated with an initialized UnparsedImage, if any.
func (i *UnparsedImage) Close() {
	i.src.Close()
}

// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Manifest() ([]byte, string, error) {
	if i.cachedManifest == nil {
		m, mt, err := i.src.GetManifest()
		if err != nil {
			return nil, "", err
		}

		// ImageSource.GetManifest does not do digest verification, but we do;
		// this immediately protects also any user of types.Image.
		ref := i.Reference().DockerReference()
		if ref != nil {
			if canonical, ok := ref.(reference.Canonical); ok {
				digest := canonical.Digest()
				matches, err := manifest.MatchesDigest(m, digest)
				if err != nil {
					return nil, "", fmt.Errorf("Error computing manifest digest: %v", err)
				}
				if !matches {
					return nil, "", fmt.Errorf("Manifest does not match provided manifest digest %s", digest)
				}
			}
		}

		i.cachedManifest = m
		i.cachedManifestMIMEType = mt
	}
	return i.cachedManifest, i.cachedManifestMIMEType, nil
}

// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures() ([][]byte, error) {
	if i.cachedSignatures == nil {
		sigs, err := i.src.GetSignatures()
		if err != nil {
			return nil, err
		}
		i.cachedSignatures = sigs
	}
	return i.cachedSignatures, nil
}
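A sketch of the verification-friendly flow the NOTE in sourced.go recommends: inspect the UnparsedImage's manifest and signatures first, and only then promote it to a full Image. The actual signature policy check is elided; checkPolicy here stands in for whatever verification the caller performs (e.g. via the signature package), and verifiedImage is an illustrative name, not library API:

package example

import (
	"errors"

	"github.com/containers/image/image"
	"github.com/containers/image/types"
)

// verifiedImage parses src only after the caller-supplied checkPolicy
// has approved the exact manifest bytes and signatures.
func verifiedImage(src types.ImageSource, checkPolicy func(manifest []byte, sigs [][]byte) bool) (types.Image, error) {
	unparsed := image.UnparsedFromSource(src)
	m, _, err := unparsed.Manifest() // cached; also digest-checked against a canonical reference
	if err != nil {
		return nil, err
	}
	sigs, err := unparsed.Signatures()
	if err != nil {
		return nil, err
	}
	if !checkPolicy(m, sigs) {
		return nil, errors.New("image rejected by policy")
	}
	return image.FromUnparsedImage(unparsed)
}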
120  vendor/github.com/containers/image/manifest/manifest.go generated vendored Normal file
@@ -0,0 +1,120 @@
package manifest

import (
	"encoding/json"

	"github.com/docker/distribution/digest"
	"github.com/docker/libtrust"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// FIXME: Should we just use docker/distribution and docker/docker implementations directly?

// FIXME(runcom, mitr): should we have a mediatype pkg?
const (
	// DockerV2Schema1MediaType MIME type represents Docker manifest schema 1
	DockerV2Schema1MediaType = "application/vnd.docker.distribution.manifest.v1+json"
	// DockerV2Schema1SignedMediaType MIME type represents Docker manifest schema 1 with a JWS signature
	DockerV2Schema1SignedMediaType = "application/vnd.docker.distribution.manifest.v1+prettyjws"
	// DockerV2Schema2MediaType MIME type represents Docker manifest schema 2
	DockerV2Schema2MediaType = "application/vnd.docker.distribution.manifest.v2+json"
	// DockerV2Schema2ConfigMediaType is the MIME type used for schema 2 config blobs.
	DockerV2Schema2ConfigMediaType = "application/vnd.docker.container.image.v1+json"
	// DockerV2Schema2LayerMediaType is the MIME type used for schema 2 layers.
	DockerV2Schema2LayerMediaType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
	// DockerV2ListMediaType MIME type represents Docker manifest schema 2 list
	DockerV2ListMediaType = "application/vnd.docker.distribution.manifest.list.v2+json"
	// DockerV2Schema2ForeignLayerMediaType is the MIME type used for schema 2 foreign layers.
	DockerV2Schema2ForeignLayerMediaType = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
)

// DefaultRequestedManifestMIMETypes is a list of MIME types a types.ImageSource
// should request from the backend unless directed otherwise.
var DefaultRequestedManifestMIMETypes = []string{
	imgspecv1.MediaTypeImageManifest,
	DockerV2Schema2MediaType,
	DockerV2Schema1SignedMediaType,
	DockerV2Schema1MediaType,
	DockerV2ListMediaType,
}

// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
// FIXME? We should, in general, prefer out-of-band MIME type instead of blindly parsing the manifest,
// but we may not have such metadata available (e.g. when the manifest is a local file).
func GuessMIMEType(manifest []byte) string {
	// A subset of manifest fields; the rest is silently ignored by json.Unmarshal.
	// Also docker/distribution/manifest.Versioned.
	meta := struct {
		MediaType     string      `json:"mediaType"`
		SchemaVersion int         `json:"schemaVersion"`
		Signatures    interface{} `json:"signatures"`
	}{}
	if err := json.Unmarshal(manifest, &meta); err != nil {
		return ""
	}

	switch meta.MediaType {
	case DockerV2Schema2MediaType, DockerV2ListMediaType, imgspecv1.MediaTypeImageManifest, imgspecv1.MediaTypeImageManifestList: // A recognized type.
		return meta.MediaType
	}
	// This is the only way the function can return DockerV2Schema1MediaType, and recognizing that is essential for stripping the JWS signatures = computing the correct manifest digest.
	switch meta.SchemaVersion {
	case 1:
		if meta.Signatures != nil {
			return DockerV2Schema1SignedMediaType
		}
		return DockerV2Schema1MediaType
	case 2: // Really should not happen, meta.MediaType should have been set. But given the data, this is our best guess.
		return DockerV2Schema2MediaType
	}
	return ""
}

// Digest returns the digest of a docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
func Digest(manifest []byte) (digest.Digest, error) {
	if GuessMIMEType(manifest) == DockerV2Schema1SignedMediaType {
		sig, err := libtrust.ParsePrettySignature(manifest, "signatures")
		if err != nil {
			return "", err
		}
		manifest, err = sig.Payload()
		if err != nil {
			// Coverage: This should never happen, libtrust's Payload() can fail only if joseBase64UrlDecode() fails, on a string
			// that libtrust itself has josebase64UrlEncode()d
			return "", err
		}
	}

	return digest.FromBytes(manifest), nil
}

// MatchesDigest returns true iff the manifest matches expectedDigest.
// Error may be set if this returns false.
// Note that this is not doing ConstantTimeCompare; by the time we get here, the cryptographic signature must already have been verified,
// or we are not using a cryptographic channel and the attacker can modify the digest along with the manifest blob.
func MatchesDigest(manifest []byte, expectedDigest digest.Digest) (bool, error) {
	// This should eventually support various digest types.
	actualDigest, err := Digest(manifest)
	if err != nil {
		return false, err
	}
	return expectedDigest == actualDigest, nil
}

// AddDummyV2S1Signature adds a JWS signature with a temporary key (i.e. useless) to a v2s1 manifest.
// This is useful to make the manifest acceptable to a Docker Registry (even though nothing needs or wants the JWS signature).
func AddDummyV2S1Signature(manifest []byte) ([]byte, error) {
	key, err := libtrust.GenerateECP256PrivateKey()
	if err != nil {
		return nil, err // Coverage: This can fail only if rand.Reader fails.
	}

	js, err := libtrust.NewJSONSignature(manifest)
	if err != nil {
		return nil, err
	}
	if err := js.Sign(key); err != nil { // Coverage: This can fail basically only if rand.Reader fails.
		return nil, err
	}
	return js.PrettySignature("signatures")
}
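The helpers above compose naturally: GuessMIMEType classifies raw bytes, and Digest hashes them after stripping any v2s1 JWS envelope, so a signed and an unsigned copy of the same schema 1 manifest digest identically. A small sketch; the manifest.json input path is illustrative:

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/containers/image/manifest"
)

func main() {
	raw, err := ioutil.ReadFile("manifest.json") // illustrative input path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("MIME type:", manifest.GuessMIMEType(raw)) // "" if unrecognized
	d, err := manifest.Digest(raw)
	if err != nil {
		log.Fatal(err)
	}
	ok, err := manifest.MatchesDigest(raw, d)
	fmt.Println("digest:", d, "matches itself:", ok, err)
}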
241  vendor/github.com/containers/image/oci/layout/oci_dest.go generated vendored Normal file
@@ -0,0 +1,241 @@
package layout

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path/filepath"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

type ociImageDestination struct {
	ref ociReference
}

// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
	return &ociImageDestination{ref: ref}
}

// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *ociImageDestination) Reference() types.ImageReference {
	return d.ref
}

// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ociImageDestination) Close() {
}

func (d *ociImageDestination) SupportedManifestMIMETypes() []string {
	return []string{
		imgspecv1.MediaTypeImageManifest,
		manifest.DockerV2Schema2MediaType,
	}
}

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ociImageDestination) SupportsSignatures() error {
	return fmt.Errorf("Pushing signatures for OCI images is not supported")
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ociImageDestination) ShouldCompressLayers() bool {
	return true
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
	return false
}

// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *ociImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
	if err := ensureDirectoryExists(d.ref.dir); err != nil {
		return types.BlobInfo{}, err
	}
	blobFile, err := ioutil.TempFile(d.ref.dir, "oci-put-blob")
	if err != nil {
		return types.BlobInfo{}, err
	}
	succeeded := false
	defer func() {
		blobFile.Close()
		if !succeeded {
			os.Remove(blobFile.Name())
		}
	}()

	digester := digest.Canonical.New()
	tee := io.TeeReader(stream, digester.Hash())

	size, err := io.Copy(blobFile, tee)
	if err != nil {
		return types.BlobInfo{}, err
	}
	computedDigest := digester.Digest()
	if inputInfo.Size != -1 && size != inputInfo.Size {
		return types.BlobInfo{}, fmt.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
	}
	if err := blobFile.Sync(); err != nil {
		return types.BlobInfo{}, err
	}
	if err := blobFile.Chmod(0644); err != nil {
		return types.BlobInfo{}, err
	}

	blobPath, err := d.ref.blobPath(computedDigest)
	if err != nil {
		return types.BlobInfo{}, err
	}
	if err := ensureParentDirectoryExists(blobPath); err != nil {
		return types.BlobInfo{}, err
	}
	if err := os.Rename(blobFile.Name(), blobPath); err != nil {
		return types.BlobInfo{}, err
	}
	succeeded = true
	return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}

func (d *ociImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
	if info.Digest == "" {
		return false, -1, fmt.Errorf("Can not check for a blob with unknown digest")
	}
	blobPath, err := d.ref.blobPath(info.Digest)
	if err != nil {
		return false, -1, err
	}
	finfo, err := os.Stat(blobPath)
	if err != nil && os.IsNotExist(err) {
		return false, -1, types.ErrBlobNotFound
	}
	if err != nil {
		return false, -1, err
	}
	return true, finfo.Size(), nil
}

func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
	return info, nil
}

func createManifest(m []byte) ([]byte, string, error) {
	om := imgspecv1.Manifest{}
	mt := manifest.GuessMIMEType(m)
	switch mt {
	case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
		// There is a simple reason for not implementing this yet:
		// the OCI image-spec guarantees backward compatibility with Docker v2s2, but not with v2s1.
		// Generating a v2s2 manifest is a migration Docker performs when upgrading to 1.10.3,
		// and we should not bother with that now (we don't want migration code here in skopeo).
		return nil, "", errors.New("can't create an OCI manifest from Docker V2 schema 1 manifest")
	case manifest.DockerV2Schema2MediaType:
		if err := json.Unmarshal(m, &om); err != nil {
			return nil, "", err
		}
		om.MediaType = imgspecv1.MediaTypeImageManifest
		for i, l := range om.Layers {
			if l.MediaType == manifest.DockerV2Schema2ForeignLayerMediaType {
				om.Layers[i].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable
			} else {
				om.Layers[i].MediaType = imgspecv1.MediaTypeImageLayer
			}
		}
		om.Config.MediaType = imgspecv1.MediaTypeImageConfig
		b, err := json.Marshal(om)
		if err != nil {
			return nil, "", err
		}
		return b, om.MediaType, nil
	case manifest.DockerV2ListMediaType:
		return nil, "", errors.New("can't create an OCI manifest from Docker V2 schema 2 manifest list")
	case imgspecv1.MediaTypeImageManifestList:
		return nil, "", errors.New("can't create an OCI manifest from OCI manifest list")
	case imgspecv1.MediaTypeImageManifest:
		return m, mt, nil
	}
	return nil, "", fmt.Errorf("unrecognized manifest media type %q", mt)
}

func (d *ociImageDestination) PutManifest(m []byte) error {
	// TODO(mitr, runcom): this breaks signatures entirely since at this point we're creating a new manifest
	// and signatures don't apply anymore. Will fix.
	ociMan, mt, err := createManifest(m)
	if err != nil {
		return err
	}
	digest, err := manifest.Digest(ociMan)
	if err != nil {
		return err
	}
	desc := imgspecv1.Descriptor{}
	desc.Digest = digest.String()
	// TODO(runcom): be aware of, and add support for, OCI manifest lists
	desc.MediaType = mt
	desc.Size = int64(len(ociMan))
	data, err := json.Marshal(desc)
	if err != nil {
		return err
	}

	blobPath, err := d.ref.blobPath(digest)
	if err != nil {
		return err
	}
	if err := ioutil.WriteFile(blobPath, ociMan, 0644); err != nil {
		return err
	}
	// TODO(runcom): ugly here?
	if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
		return err
	}
	descriptorPath := d.ref.descriptorPath(d.ref.tag)
	if err := ensureParentDirectoryExists(descriptorPath); err != nil {
		return err
	}
	return ioutil.WriteFile(descriptorPath, data, 0644)
}

func ensureDirectoryExists(path string) error {
	if _, err := os.Stat(path); err != nil && os.IsNotExist(err) {
		if err := os.MkdirAll(path, 0755); err != nil {
			return err
		}
	}
	return nil
}

// ensureParentDirectoryExists ensures the parent of the supplied path exists.
func ensureParentDirectoryExists(path string) error {
	return ensureDirectoryExists(filepath.Dir(path))
}

func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
	if len(signatures) != 0 {
		return fmt.Errorf("Pushing signatures for OCI images is not supported")
	}
	return nil
}

// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
	return nil
}
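A sketch of the write path from the outside, via the exported reference API. The /tmp/oci-demo path is illustrative, and ParseReference requires the parent directory to exist because it fully resolves the path:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/containers/image/oci/layout"
	"github.com/containers/image/types"
)

func main() {
	ref, err := layout.ParseReference("/tmp/oci-demo:latest") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	dest, err := ref.NewImageDestination(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer dest.Close()

	payload := []byte("example layer contents")
	// The digest is computed while copying; the size is checked against the stream.
	info, err := dest.PutBlob(bytes.NewReader(payload), types.BlobInfo{Size: int64(len(payload))})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("wrote blob", info.Digest, "size", info.Size)
	if err := dest.Commit(); err != nil {
		log.Fatal(err)
	}
}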
94  vendor/github.com/containers/image/oci/layout/oci_src.go generated vendored Normal file
@@ -0,0 +1,94 @@
package layout

import (
	"encoding/json"
	"io"
	"io/ioutil"
	"os"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

type ociImageSource struct {
	ref ociReference
}

// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) types.ImageSource {
	return &ociImageSource{ref: ref}
}

// Reference returns the reference used to set up this source.
func (s *ociImageSource) Reference() types.ImageReference {
	return s.ref
}

// Close removes resources associated with an initialized ImageSource, if any.
func (s *ociImageSource) Close() {
}

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
	descriptorPath := s.ref.descriptorPath(s.ref.tag)
	data, err := ioutil.ReadFile(descriptorPath)
	if err != nil {
		return nil, "", err
	}

	desc := imgspecv1.Descriptor{}
	err = json.Unmarshal(data, &desc)
	if err != nil {
		return nil, "", err
	}

	manifestPath, err := s.ref.blobPath(digest.Digest(desc.Digest))
	if err != nil {
		return nil, "", err
	}
	m, err := ioutil.ReadFile(manifestPath)
	if err != nil {
		return nil, "", err
	}

	return m, manifest.GuessMIMEType(m), nil
}

func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
	manifestPath, err := s.ref.blobPath(digest)
	if err != nil {
		return nil, "", err
	}

	m, err := ioutil.ReadFile(manifestPath)
	if err != nil {
		return nil, "", err
	}

	return m, manifest.GuessMIMEType(m), nil
}

// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ociImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
	path, err := s.ref.blobPath(info.Digest)
	if err != nil {
		return nil, 0, err
	}

	r, err := os.Open(path)
	if err != nil {
		return nil, 0, err
	}
	fi, err := r.Stat()
	if err != nil {
		return nil, 0, err
	}
	return r, fi.Size(), nil
}

func (s *ociImageSource) GetSignatures() ([][]byte, error) {
	return [][]byte{}, nil
}
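And the corresponding read path: GetManifest resolves the manifest blob through the refs/<tag> descriptor shown above. Again, the directory path is illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/containers/image/oci/layout"
)

func main() {
	ref, err := layout.ParseReference("/tmp/oci-demo:latest") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	src, err := ref.NewImageSource(nil, nil) // nil = default manifest MIME types
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	m, mt, err := src.GetManifest()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d-byte manifest, guessed MIME type %q\n", len(m), mt)
}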
213  vendor/github.com/containers/image/oci/layout/oci_transport.go generated vendored Normal file
@@ -0,0 +1,213 @@
package layout

import (
	"errors"
	"fmt"
	"path/filepath"
	"regexp"
	"strings"

	"github.com/containers/image/directory/explicitfilepath"
	"github.com/containers/image/docker/reference"
	"github.com/containers/image/image"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
)

// Transport is an ImageTransport for OCI directories.
var Transport = ociTransport{}

type ociTransport struct{}

func (t ociTransport) Name() string {
	return "oci"
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t ociTransport) ParseReference(reference string) (types.ImageReference, error) {
	return ParseReference(reference)
}

var refRegexp = regexp.MustCompile(`^([A-Za-z0-9._-]+)+$`)

// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t ociTransport) ValidatePolicyConfigurationScope(scope string) error {
	var dir string
	sep := strings.LastIndex(scope, ":")
	if sep == -1 {
		dir = scope
	} else {
		dir = scope[:sep]
		tag := scope[sep+1:]
		if !refRegexp.MatchString(tag) {
			return fmt.Errorf("Invalid tag %s", tag)
		}
	}

	if strings.Contains(dir, ":") {
		return fmt.Errorf("Invalid OCI reference %s: path contains a colon", scope)
	}

	if !strings.HasPrefix(dir, "/") {
		return fmt.Errorf("Invalid scope %s: must be an absolute path", scope)
	}
	// Refuse also "/", otherwise "/" and "" would have the same semantics,
	// and "" could be unexpectedly shadowed by the "/" entry.
	// (Note: we do allow "/:sometag", a bit ridiculous but why refuse it?)
	if scope == "/" {
		return errors.New(`Invalid scope "/": Use the generic default scope ""`)
	}
	cleaned := filepath.Clean(dir)
	if cleaned != dir {
		return fmt.Errorf(`Invalid scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
	}
	return nil
}

// ociReference is an ImageReference for OCI directory paths.
type ociReference struct {
	// Note that the interpretation of paths below depends on the underlying filesystem state, which may change under us at any time!
	// Either of the paths may point to a different, or no, inode over time. resolvedDir may contain symbolic links, and so on.

	// Generally we follow the intent of the user, and use the "dir" member for filesystem operations (e.g. the user can use a relative path to avoid
	// being exposed to symlinks and renames in the parent directories to the working directory).
	// (But in general, we make no attempt to be completely safe against concurrent hostile filesystem modifications.)
	dir         string // As specified by the user. May be relative, contain symlinks, etc.
	resolvedDir string // Absolute path with no symlinks, at least at the time of its creation. Primarily used for policy namespaces.
	tag         string
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OCI ImageReference.
func ParseReference(reference string) (types.ImageReference, error) {
	var dir, tag string
	sep := strings.LastIndex(reference, ":")
	if sep == -1 {
		dir = reference
		tag = "latest"
	} else {
		dir = reference[:sep]
		tag = reference[sep+1:]
	}
	return NewReference(dir, tag)
}

// NewReference returns an OCI reference for a directory and a tag.
//
// We do not expose an API supplying the resolvedDir; we could, but recomputing it
// is generally cheap enough that we prefer being confident about the properties of resolvedDir.
func NewReference(dir, tag string) (types.ImageReference, error) {
	resolved, err := explicitfilepath.ResolvePathToFullyExplicit(dir)
	if err != nil {
		return nil, err
	}
	// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
	// from being ambiguous with values of PolicyConfigurationIdentity.
	if strings.Contains(resolved, ":") {
		return nil, fmt.Errorf("Invalid OCI reference %s:%s: path %s contains a colon", dir, tag, resolved)
	}
	if !refRegexp.MatchString(tag) {
		return nil, fmt.Errorf("Invalid tag %s", tag)
	}
	return ociReference{dir: dir, resolvedDir: resolved, tag: tag}, nil
}

func (ref ociReference) Transport() types.ImageTransport {
	return Transport
}

// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref ociReference) StringWithinTransport() string {
	return fmt.Sprintf("%s:%s", ref.dir, ref.tag)
}

// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref ociReference) DockerReference() reference.Named {
	return nil
}

// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref ociReference) PolicyConfigurationIdentity() string {
	return fmt.Sprintf("%s:%s", ref.resolvedDir, ref.tag)
}

// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref ociReference) PolicyConfigurationNamespaces() []string {
	res := []string{}
	path := ref.resolvedDir
	for {
		lastSlash := strings.LastIndex(path, "/")
		// Note that we do not include "/"; it is redundant with the default "" global default,
		// and rejected by ociTransport.ValidatePolicyConfigurationScope above.
		if lastSlash == -1 || path == "/" {
			break
		}
		res = append(res, path)
		path = path[:lastSlash]
	}
	return res
}

// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
	src := newImageSource(ref)
	return image.FromSource(src)
}

// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
	return newImageSource(ref), nil
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ociReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ref), nil
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
	return fmt.Errorf("Deleting images not implemented for oci: images")
}

// ociLayoutPath returns a path for the oci-layout file within a directory using OCI conventions.
func (ref ociReference) ociLayoutPath() string {
	return filepath.Join(ref.dir, "oci-layout")
}

// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
	if err := digest.Validate(); err != nil {
		return "", fmt.Errorf("unexpected digest reference %s: %v", digest, err)
	}
	return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}

// descriptorPath returns a path for the manifest within a directory using OCI conventions.
func (ref ociReference) descriptorPath(digest string) string {
	return filepath.Join(ref.dir, "refs", digest)
}
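To illustrate the reference syntax and policy scopes defined above (the directory must exist for ParseReference to resolve it; the path is illustrative):

package main

import (
	"fmt"
	"log"

	"github.com/containers/image/oci/layout"
)

func main() {
	// The last ":" splits directory from tag; with no ":", the tag defaults to "latest".
	ref, err := layout.ParseReference("/tmp/oci-demo:1.0") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("within transport:", ref.StringWithinTransport())
	fmt.Println("policy identity: ", ref.PolicyConfigurationIdentity())
	// Namespaces walk up the resolved directory tree, most specific first.
	for _, ns := range ref.PolicyConfigurationNamespaces() {
		fmt.Println("policy namespace:", ns)
	}
}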
1071  vendor/github.com/containers/image/openshift/openshift-copies.go generated vendored Normal file
(File diff suppressed because it is too large)
522  vendor/github.com/containers/image/openshift/openshift.go generated vendored Normal file
@@ -0,0 +1,522 @@
package openshift

import (
	"bytes"
	"crypto/rand"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"net/url"
	"strings"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/docker"
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/containers/image/version"
	"github.com/docker/distribution/digest"
)

// openshiftClient is configuration for dealing with a single image stream, for reading or writing.
type openshiftClient struct {
	ref     openshiftReference
	baseURL *url.URL
	// Values from Kubernetes configuration
	httpClient  *http.Client
	bearerToken string // "" if not used
	username    string // "" if not used
	password    string // if username != ""
}

// newOpenshiftClient creates a new openshiftClient for the specified reference.
func newOpenshiftClient(ref openshiftReference) (*openshiftClient, error) {
	// We have already done this parsing in ParseReference, but thrown away
	// httpClient. So, parse again.
	// (We could also rework/split restClientFor to "get base URL" to be done
	// in ParseReference, and "get httpClient" to be done here. But until/unless
	// we support non-default clusters, this is good enough.)

	// Overall, this is modelled on openshift/origin/pkg/cmd/util/clientcmd.New().ClientConfig() and openshift/origin/pkg/client.
	cmdConfig := defaultClientConfig()
	logrus.Debugf("cmdConfig: %#v", cmdConfig)
	restConfig, err := cmdConfig.ClientConfig()
	if err != nil {
		return nil, err
	}
	// REMOVED: SetOpenShiftDefaults (values are not overridable in config files, so hard-coded these defaults.)
	logrus.Debugf("restConfig: %#v", restConfig)
	baseURL, httpClient, err := restClientFor(restConfig)
	if err != nil {
		return nil, err
	}
	logrus.Debugf("URL: %#v", *baseURL)

	if httpClient == nil {
		httpClient = http.DefaultClient
	}

	return &openshiftClient{
		ref:         ref,
		baseURL:     baseURL,
		httpClient:  httpClient,
		bearerToken: restConfig.BearerToken,
		username:    restConfig.Username,
		password:    restConfig.Password,
	}, nil
}

// doRequest performs a correctly authenticated request to a specified path, and returns response body or an error object.
func (c *openshiftClient) doRequest(method, path string, requestBody []byte) ([]byte, error) {
	url := *c.baseURL
	url.Path = path
	var requestBodyReader io.Reader
	if requestBody != nil {
		logrus.Debugf("Will send body: %s", requestBody)
		requestBodyReader = bytes.NewReader(requestBody)
	}
	req, err := http.NewRequest(method, url.String(), requestBodyReader)
	if err != nil {
		return nil, err
	}

	if len(c.bearerToken) != 0 {
		req.Header.Set("Authorization", "Bearer "+c.bearerToken)
	} else if len(c.username) != 0 {
		req.SetBasicAuth(c.username, c.password)
	}
	req.Header.Set("Accept", "application/json, */*")
	req.Header.Set("User-Agent", fmt.Sprintf("skopeo/%s", version.Version))
	if requestBody != nil {
		req.Header.Set("Content-Type", "application/json")
	}

	logrus.Debugf("%s %s", method, url.String())
	res, err := c.httpClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		return nil, err
	}
	logrus.Debugf("Got body: %s", body)
	// FIXME: Just throwing this useful information away only to try to guess later...
	logrus.Debugf("Got content-type: %s", res.Header.Get("Content-Type"))

	var status status
	statusValid := false
	if err := json.Unmarshal(body, &status); err == nil && len(status.Status) > 0 {
		statusValid = true
	}

	switch {
	case res.StatusCode == http.StatusSwitchingProtocols: // FIXME?! No idea why this weird case exists in k8s.io/kubernetes/pkg/client/restclient.
		if statusValid && status.Status != "Success" {
			return nil, errors.New(status.Message)
		}
	case res.StatusCode >= http.StatusOK && res.StatusCode <= http.StatusPartialContent:
		// OK.
	default:
		if statusValid {
			return nil, errors.New(status.Message)
		}
		return nil, fmt.Errorf("HTTP error: status code: %d, body: %s", res.StatusCode, string(body))
	}

	return body, nil
}

// getImage loads the specified image object.
func (c *openshiftClient) getImage(imageStreamImageName string) (*image, error) {
	// FIXME: validate components per validation.IsValidPathSegmentName?
	path := fmt.Sprintf("/oapi/v1/namespaces/%s/imagestreamimages/%s@%s", c.ref.namespace, c.ref.stream, imageStreamImageName)
	body, err := c.doRequest("GET", path, nil)
	if err != nil {
		return nil, err
	}
	// Note: This does absolutely no kind/version checking or conversions.
	var isi imageStreamImage
	if err := json.Unmarshal(body, &isi); err != nil {
		return nil, err
	}
	return &isi.Image, nil
}

// convertDockerImageReference takes an image API DockerImageReference value and returns a reference we can actually use;
// currently OpenShift stores the cluster-internal service IPs here, which are unusable from the outside.
func (c *openshiftClient) convertDockerImageReference(ref string) (string, error) {
	parts := strings.SplitN(ref, "/", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("Invalid format of docker reference %s: missing '/'", ref)
	}
	return c.ref.dockerReference.Hostname() + "/" + parts[1], nil
}

type openshiftImageSource struct {
	client *openshiftClient
	// Values specific to this image
	ctx                        *types.SystemContext
	requestedManifestMIMETypes []string
	// State
	docker               types.ImageSource // The Docker Registry endpoint, or nil if not resolved yet
	imageStreamImageName string            // Resolved image identifier, or "" if not known yet
}

// newImageSource creates a new ImageSource for the specified reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func newImageSource(ctx *types.SystemContext, ref openshiftReference, requestedManifestMIMETypes []string) (types.ImageSource, error) {
	client, err := newOpenshiftClient(ref)
	if err != nil {
		return nil, err
	}

	return &openshiftImageSource{
		client: client,
		ctx:    ctx,
		requestedManifestMIMETypes: requestedManifestMIMETypes,
	}, nil
}

// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *openshiftImageSource) Reference() types.ImageReference {
	return s.client.ref
}

// Close removes resources associated with an initialized ImageSource, if any.
func (s *openshiftImageSource) Close() {
	if s.docker != nil {
		s.docker.Close()
		s.docker = nil
	}
}

func (s *openshiftImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
	if err := s.ensureImageIsResolved(); err != nil {
		return nil, "", err
	}
	return s.docker.GetTargetManifest(digest)
}

// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *openshiftImageSource) GetManifest() ([]byte, string, error) {
	if err := s.ensureImageIsResolved(); err != nil {
		return nil, "", err
	}
	return s.docker.GetManifest()
}

// GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
func (s *openshiftImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
	if err := s.ensureImageIsResolved(); err != nil {
		return nil, 0, err
	}
	return s.docker.GetBlob(info)
}

func (s *openshiftImageSource) GetSignatures() ([][]byte, error) {
	if err := s.ensureImageIsResolved(); err != nil {
		return nil, err
	}

	image, err := s.client.getImage(s.imageStreamImageName)
	if err != nil {
		return nil, err
	}
	var sigs [][]byte
	for _, sig := range image.Signatures {
		if sig.Type == imageSignatureTypeAtomic {
			sigs = append(sigs, sig.Content)
		}
	}
	return sigs, nil
}

// ensureImageIsResolved sets up s.docker and s.imageStreamImageName
func (s *openshiftImageSource) ensureImageIsResolved() error {
	if s.docker != nil {
		return nil
	}

	// FIXME: validate components per validation.IsValidPathSegmentName?
	path := fmt.Sprintf("/oapi/v1/namespaces/%s/imagestreams/%s", s.client.ref.namespace, s.client.ref.stream)
	body, err := s.client.doRequest("GET", path, nil)
	if err != nil {
		return err
	}
	// Note: This does absolutely no kind/version checking or conversions.
	var is imageStream
	if err := json.Unmarshal(body, &is); err != nil {
		return err
	}
	var te *tagEvent
	for _, tag := range is.Status.Tags {
		if tag.Tag != s.client.ref.dockerReference.Tag() {
			continue
		}
		if len(tag.Items) > 0 {
			te = &tag.Items[0]
			break
		}
	}
	if te == nil {
		return fmt.Errorf("No matching tag found")
	}
	logrus.Debugf("tag event %#v", te)
	dockerRefString, err := s.client.convertDockerImageReference(te.DockerImageReference)
	if err != nil {
		return err
	}
	logrus.Debugf("Resolved reference %#v", dockerRefString)
	dockerRef, err := docker.ParseReference("//" + dockerRefString)
	if err != nil {
		return err
	}
	d, err := dockerRef.NewImageSource(s.ctx, s.requestedManifestMIMETypes)
	if err != nil {
		return err
	}
	s.docker = d
	s.imageStreamImageName = te.Image
	return nil
}

type openshiftImageDestination struct {
	client *openshiftClient
	docker types.ImageDestination // The Docker Registry endpoint
	// State
	imageStreamImageName string // "" if not yet known
}

// newImageDestination creates a new ImageDestination for the specified reference.
func newImageDestination(ctx *types.SystemContext, ref openshiftReference) (types.ImageDestination, error) {
	client, err := newOpenshiftClient(ref)
	if err != nil {
		return nil, err
	}

	// FIXME: Should this always use a digest, not a tag? Uploading to Docker by tag requires the tag _inside_ the manifest to match,
	// i.e. a single signed image cannot be available under multiple tags. But with types.ImageDestination, we don't know
	// the manifest digest at this point.
	dockerRefString := fmt.Sprintf("//%s/%s/%s:%s", client.ref.dockerReference.Hostname(), client.ref.namespace, client.ref.stream, client.ref.dockerReference.Tag())
	dockerRef, err := docker.ParseReference(dockerRefString)
	if err != nil {
		return nil, err
	}
	docker, err := dockerRef.NewImageDestination(ctx)
	if err != nil {
		return nil, err
	}

	return &openshiftImageDestination{
		client: client,
		docker: docker,
	}, nil
}

// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *openshiftImageDestination) Reference() types.ImageReference {
	return d.client.ref
}

// Close removes resources associated with an initialized ImageDestination, if any.
func (d *openshiftImageDestination) Close() {
	d.docker.Close()
}

func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
	return []string{
		manifest.DockerV2Schema1SignedMediaType,
		manifest.DockerV2Schema1MediaType,
	}
}

// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *openshiftImageDestination) SupportsSignatures() error {
	return nil
}

// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *openshiftImageDestination) ShouldCompressLayers() bool {
	return true
}

// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
	return true
}

// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (d *openshiftImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
	return d.docker.PutBlob(stream, inputInfo)
}

func (d *openshiftImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
	return d.docker.HasBlob(info)
}

func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
	return d.docker.ReapplyBlob(info)
}

func (d *openshiftImageDestination) PutManifest(m []byte) error {
	manifestDigest, err := manifest.Digest(m)
	if err != nil {
		return err
	}
	d.imageStreamImageName = manifestDigest.String()

	return d.docker.PutManifest(m)
}

func (d *openshiftImageDestination) PutSignatures(signatures [][]byte) error {
	if d.imageStreamImageName == "" {
		return fmt.Errorf("Internal error: Unknown manifest digest, can't add signatures")
	}
	// Because image signatures are a shared resource in Atomic Registry, the default upload
	// always adds signatures. Eventually we should also allow removing signatures.

	if len(signatures) == 0 {
		return nil // No need to even read the old state.
	}

	image, err := d.client.getImage(d.imageStreamImageName)
	if err != nil {
		return err
	}
	existingSigNames := map[string]struct{}{}
	for _, sig := range image.Signatures {
		existingSigNames[sig.objectMeta.Name] = struct{}{}
	}

sigExists:
	for _, newSig := range signatures {
		for _, existingSig := range image.Signatures {
			if existingSig.Type == imageSignatureTypeAtomic && bytes.Equal(existingSig.Content, newSig) {
				continue sigExists
			}
		}

		// The API expects us to invent a new unique name. This is racy, but hopefully good enough.
		var signatureName string
		for {
			randBytes := make([]byte, 16)
			n, err := rand.Read(randBytes)
			if err != nil || n != 16 {
				return fmt.Errorf("Error generating random signature ID: %v, len %d", err, n)
			}
			signatureName = fmt.Sprintf("%s@%032x", d.imageStreamImageName, randBytes)
			if _, ok := existingSigNames[signatureName]; !ok {
				break
			}
		}
		// Note: This does absolutely no kind/version checking or conversions.
		sig := imageSignature{
			typeMeta: typeMeta{
				Kind:       "ImageSignature",
				APIVersion: "v1",
			},
			objectMeta: objectMeta{Name: signatureName},
			Type:       imageSignatureTypeAtomic,
			Content:    newSig,
		}
		body, err := json.Marshal(sig)
		if err != nil {
			return err
		}
		if _, err := d.client.doRequest("POST", "/oapi/v1/imagesignatures", body); err != nil {
			return err
		}
	}

	return nil
}

// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *openshiftImageDestination) Commit() error {
	return d.docker.Commit()
}

// These structs are subsets of github.com/openshift/origin/pkg/image/api/v1 and its dependencies.
type imageStream struct {
	Status imageStreamStatus `json:"status,omitempty"`
}
type imageStreamStatus struct {
	DockerImageRepository string `json:"dockerImageRepository"`
|
||||
Tags []namedTagEventList `json:"tags,omitempty"`
|
||||
}
|
||||
type namedTagEventList struct {
|
||||
Tag string `json:"tag"`
|
||||
Items []tagEvent `json:"items"`
|
||||
}
|
||||
type tagEvent struct {
|
||||
DockerImageReference string `json:"dockerImageReference"`
|
||||
Image string `json:"image"`
|
||||
}
|
||||
type imageStreamImage struct {
|
||||
Image image `json:"image"`
|
||||
}
|
||||
type image struct {
|
||||
objectMeta `json:"metadata,omitempty"`
|
||||
DockerImageReference string `json:"dockerImageReference,omitempty"`
|
||||
// DockerImageMetadata runtime.RawExtension `json:"dockerImageMetadata,omitempty"`
|
||||
DockerImageMetadataVersion string `json:"dockerImageMetadataVersion,omitempty"`
|
||||
DockerImageManifest string `json:"dockerImageManifest,omitempty"`
|
||||
// DockerImageLayers []ImageLayer `json:"dockerImageLayers"`
|
||||
Signatures []imageSignature `json:"signatures,omitempty"`
|
||||
}
|
||||
|
||||
const imageSignatureTypeAtomic string = "atomic"
|
||||
|
||||
type imageSignature struct {
|
||||
typeMeta `json:",inline"`
|
||||
objectMeta `json:"metadata,omitempty"`
|
||||
Type string `json:"type"`
|
||||
Content []byte `json:"content"`
|
||||
// Conditions []SignatureCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`
|
||||
// ImageIdentity string `json:"imageIdentity,omitempty"`
|
||||
// SignedClaims map[string]string `json:"signedClaims,omitempty"`
|
||||
// Created *unversioned.Time `json:"created,omitempty"`
|
||||
// IssuedBy SignatureIssuer `json:"issuedBy,omitempty"`
|
||||
// IssuedTo SignatureSubject `json:"issuedTo,omitempty"`
|
||||
}
|
||||
type typeMeta struct {
|
||||
Kind string `json:"kind,omitempty"`
|
||||
APIVersion string `json:"apiVersion,omitempty"`
|
||||
}
|
||||
type objectMeta struct {
|
||||
Name string `json:"name,omitempty"`
|
||||
GenerateName string `json:"generateName,omitempty"`
|
||||
Namespace string `json:"namespace,omitempty"`
|
||||
SelfLink string `json:"selfLink,omitempty"`
|
||||
ResourceVersion string `json:"resourceVersion,omitempty"`
|
||||
Generation int64 `json:"generation,omitempty"`
|
||||
DeletionGracePeriodSeconds *int64 `json:"deletionGracePeriodSeconds,omitempty"`
|
||||
Labels map[string]string `json:"labels,omitempty"`
|
||||
Annotations map[string]string `json:"annotations,omitempty"`
|
||||
}
|
||||
|
||||
// A subset of k8s.io/kubernetes/pkg/api/unversioned/Status
|
||||
type status struct {
|
||||
Status string `json:"status,omitempty"`
|
||||
Message string `json:"message,omitempty"`
|
||||
// Reason StatusReason `json:"reason,omitempty"`
|
||||
// Details *StatusDetails `json:"details,omitempty"`
|
||||
Code int32 `json:"code,omitempty"`
|
||||
}
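
For orientation, this is roughly the JSON body that PutSignatures POSTs to /oapi/v1/imagesignatures. The following is a minimal standalone sketch, not part of the vendored code; the digest and signature bytes are placeholders:

package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the subset of the OpenShift v1 API used by imageSignature/typeMeta/objectMeta
// above; all values here are illustrative only.
func main() {
	payload := struct {
		Kind       string            `json:"kind"`
		APIVersion string            `json:"apiVersion"`
		Metadata   map[string]string `json:"metadata"`
		Type       string            `json:"type"`
		Content    []byte            `json:"content"` // encoding/json emits []byte as base64
	}{
		Kind:       "ImageSignature",
		APIVersion: "v1",
		Metadata:   map[string]string{"name": "sha256:<manifest digest>@000102030405060708090a0b0c0d0e0f"},
		Type:       "atomic",
		Content:    []byte("<binary GPG signature>"),
	}
	body, err := json.MarshalIndent(payload, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}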
150 vendor/github.com/containers/image/openshift/openshift_transport.go generated vendored Normal file
@@ -0,0 +1,150 @@
package openshift

import (
	"fmt"
	"regexp"
	"strings"

	"github.com/containers/image/docker/policyconfiguration"
	"github.com/containers/image/docker/reference"
	genericImage "github.com/containers/image/image"
	"github.com/containers/image/types"
)

// Transport is an ImageTransport for OpenShift registry-hosted images.
var Transport = openshiftTransport{}

type openshiftTransport struct{}

func (t openshiftTransport) Name() string {
	return "atomic"
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t openshiftTransport) ParseReference(reference string) (types.ImageReference, error) {
	return ParseReference(reference)
}

// Note that imageNameRegexp is namespace/stream:tag; this
// is HOSTNAME/namespace/stream:tag or parent prefixes.
// Keep this in sync with imageNameRegexp!
var scopeRegexp = regexp.MustCompile("^[^/]*(/[^:/]*(/[^:/]*(:[^:/]*)?)?)?$")

// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t openshiftTransport) ValidatePolicyConfigurationScope(scope string) error {
	if scopeRegexp.FindStringIndex(scope) == nil {
		return fmt.Errorf("Invalid scope name %s", scope)
	}
	return nil
}

// openshiftReference is an ImageReference for OpenShift images.
type openshiftReference struct {
	dockerReference reference.NamedTagged
	namespace       string // Computed from dockerReference in advance.
	stream          string // Computed from dockerReference in advance.
}

// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an OpenShift ImageReference.
func ParseReference(ref string) (types.ImageReference, error) {
	r, err := reference.ParseNamed(ref)
	if err != nil {
		return nil, fmt.Errorf("failed to parse image reference %q, %v", ref, err)
	}
	tagged, ok := r.(reference.NamedTagged)
	if !ok {
		return nil, fmt.Errorf("invalid image reference %s, %#v", ref, r)
	}
	return NewReference(tagged)
}

// NewReference returns an OpenShift reference for a reference.NamedTagged
func NewReference(dockerRef reference.NamedTagged) (types.ImageReference, error) {
	r := strings.SplitN(dockerRef.RemoteName(), "/", 3)
	if len(r) != 2 {
		return nil, fmt.Errorf("invalid image reference %s", dockerRef.String())
	}
	return openshiftReference{
		namespace:       r[0],
		stream:          r[1],
		dockerReference: dockerRef,
	}, nil
}

func (ref openshiftReference) Transport() types.ImageTransport {
	return Transport
}

// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref openshiftReference) StringWithinTransport() string {
	return ref.dockerReference.String()
}

// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref openshiftReference) DockerReference() reference.Named {
	return ref.dockerReference
}

// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref openshiftReference) PolicyConfigurationIdentity() string {
	res, err := policyconfiguration.DockerReferenceIdentity(ref.dockerReference)
	if res == "" || err != nil { // Coverage: Should never happen, NewReference constructs a valid tagged reference.
		panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
	}
	return res
}

// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref openshiftReference) PolicyConfigurationNamespaces() []string {
	return policyconfiguration.DockerReferenceNamespaces(ref.dockerReference)
}

// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref openshiftReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
	src, err := newImageSource(ctx, ref, nil)
	if err != nil {
		return nil, err
	}
	return genericImage.FromSource(src)
}

// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref openshiftReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
	return newImageSource(ctx, ref, requestedManifestMIMETypes)
}

// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref openshiftReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
	return newImageDestination(ctx, ref)
}

// DeleteImage deletes the named image from the registry, if supported.
func (ref openshiftReference) DeleteImage(ctx *types.SystemContext) error {
	return fmt.Errorf("Deleting images not implemented for atomic: images")
}
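
A hedged usage sketch of this transport (not part of the vendored code; the registry host, namespace, and stream names are hypothetical), showing how a parsed reference exposes its policy-lookup identities:

package main

import (
	"fmt"

	"github.com/containers/image/openshift"
)

func main() {
	// The "atomic:" prefix has already been stripped by the caller;
	// the remainder is HOSTNAME/namespace/stream:tag.
	ref, err := openshift.ParseReference("registry.example.com:5000/myproject/mystream:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.Transport().Name())            // "atomic"
	fmt.Println(ref.StringWithinTransport())       // the underlying Docker-style reference
	fmt.Println(ref.PolicyConfigurationIdentity()) // canonical identity for policy lookup
	fmt.Println(ref.PolicyConfigurationNamespaces())
}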
61 vendor/github.com/containers/image/signature/docker.go generated vendored Normal file
@@ -0,0 +1,61 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.

package signature

import (
	"fmt"

	"github.com/containers/image/manifest"
	"github.com/docker/distribution/digest"
)

// SignDockerManifest returns a signature for manifest as the specified dockerReference,
// using mech and keyIdentity.
func SignDockerManifest(m []byte, dockerReference string, mech SigningMechanism, keyIdentity string) ([]byte, error) {
	manifestDigest, err := manifest.Digest(m)
	if err != nil {
		return nil, err
	}
	sig := privateSignature{
		Signature{
			DockerManifestDigest: manifestDigest,
			DockerReference:      dockerReference,
		},
	}
	return sig.sign(mech, keyIdentity)
}

// VerifyDockerManifestSignature checks that unverifiedSignature uses expectedKeyIdentity to sign unverifiedManifest as expectedDockerReference,
// using mech.
func VerifyDockerManifestSignature(unverifiedSignature, unverifiedManifest []byte,
	expectedDockerReference string, mech SigningMechanism, expectedKeyIdentity string) (*Signature, error) {
	sig, err := verifyAndExtractSignature(mech, unverifiedSignature, signatureAcceptanceRules{
		validateKeyIdentity: func(keyIdentity string) error {
			if keyIdentity != expectedKeyIdentity {
				return InvalidSignatureError{msg: fmt.Sprintf("Signature by %s does not match expected fingerprint %s", keyIdentity, expectedKeyIdentity)}
			}
			return nil
		},
		validateSignedDockerReference: func(signedDockerReference string) error {
			if signedDockerReference != expectedDockerReference {
				return InvalidSignatureError{msg: fmt.Sprintf("Docker reference %s does not match %s",
					signedDockerReference, expectedDockerReference)}
			}
			return nil
		},
		validateSignedDockerManifestDigest: func(signedDockerManifestDigest digest.Digest) error {
			matches, err := manifest.MatchesDigest(unverifiedManifest, signedDockerManifestDigest)
			if err != nil {
				return err
			}
			if !matches {
				return InvalidSignatureError{msg: fmt.Sprintf("Signature for docker digest %q does not match", signedDockerManifestDigest)}
			}
			return nil
		},
	})
	if err != nil {
		return nil, err
	}
	return sig, nil
}
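
A round-trip sketch of the two entry points above, under stated assumptions: the key fingerprint and reference are placeholders, manifest.json is any Docker manifest on disk, and a real run needs a GPG home containing the secret key:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/signature"
)

func main() {
	m, err := ioutil.ReadFile("manifest.json") // placeholder manifest file
	if err != nil {
		panic(err)
	}
	mech, err := signature.NewGPGSigningMechanism()
	if err != nil {
		panic(err)
	}
	const fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567" // placeholder key identity
	sig, err := signature.SignDockerManifest(m, "docker.io/library/busybox:latest", mech, fingerprint)
	if err != nil {
		panic(err)
	}
	// Verification re-checks the key identity, the signed reference, and the manifest digest.
	verified, err := signature.VerifyDockerManifestSignature(sig, m,
		"docker.io/library/busybox:latest", mech, fingerprint)
	if err != nil {
		panic(err)
	}
	fmt.Println("signed digest:", verified.DockerManifestDigest)
}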
107 vendor/github.com/containers/image/signature/json.go generated vendored Normal file
@@ -0,0 +1,107 @@
package signature

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

// jsonFormatError is returned when JSON does not match expected format.
type jsonFormatError string

func (err jsonFormatError) Error() string {
	return string(err)
}

// validateExactMapKeys returns an error if the keys of m are not exactly expectedKeys, which must be pairwise distinct
func validateExactMapKeys(m map[string]interface{}, expectedKeys ...string) error {
	if len(m) != len(expectedKeys) {
		return jsonFormatError("Unexpected keys in a JSON object")
	}

	for _, k := range expectedKeys {
		if _, ok := m[k]; !ok {
			return jsonFormatError(fmt.Sprintf("Key %s missing in a JSON object", k))
		}
	}
	// Assuming expectedKeys are pairwise distinct, we know m contains len(expectedKeys) different values in expectedKeys.
	return nil
}

// mapField returns a member fieldName of m, if it is a JSON map, or an error.
func mapField(m map[string]interface{}, fieldName string) (map[string]interface{}, error) {
	untyped, ok := m[fieldName]
	if !ok {
		return nil, jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
	}
	v, ok := untyped.(map[string]interface{})
	if !ok {
		return nil, jsonFormatError(fmt.Sprintf("Field %s is not a JSON object", fieldName))
	}
	return v, nil
}

// stringField returns a member fieldName of m, if it is a string, or an error.
func stringField(m map[string]interface{}, fieldName string) (string, error) {
	untyped, ok := m[fieldName]
	if !ok {
		return "", jsonFormatError(fmt.Sprintf("Field %s missing", fieldName))
	}
	v, ok := untyped.(string)
	if !ok {
		return "", jsonFormatError(fmt.Sprintf("Field %s is not a string", fieldName))
	}
	return v, nil
}

// paranoidUnmarshalJSONObject unmarshals data as a JSON object, but failing on the slightest unexpected aspect
// (including duplicated keys, unrecognized keys, and non-matching types). Uses fieldResolver to
// determine the destination for a field value, which should return a pointer to the destination if valid, or nil if the key is rejected.
//
// The fieldResolver approach is useful for decoding the Policy.Transports map; using it for structs is a bit lazy,
// we could use reflection to automate this. Later?
func paranoidUnmarshalJSONObject(data []byte, fieldResolver func(string) interface{}) error {
	seenKeys := map[string]struct{}{}

	dec := json.NewDecoder(bytes.NewReader(data))
	t, err := dec.Token()
	if err != nil {
		return jsonFormatError(err.Error())
	}
	if t != json.Delim('{') {
		return jsonFormatError(fmt.Sprintf("JSON object expected, got \"%s\"", t))
	}
	for {
		t, err := dec.Token()
		if err != nil {
			return jsonFormatError(err.Error())
		}
		if t == json.Delim('}') {
			break
		}

		key, ok := t.(string)
		if !ok {
			// Coverage: This should never happen, dec.Token() rejects non-string-literals in this state.
			return jsonFormatError(fmt.Sprintf("Key string literal expected, got \"%s\"", t))
		}
		if _, ok := seenKeys[key]; ok {
			return jsonFormatError(fmt.Sprintf("Duplicate key \"%s\"", key))
		}
		seenKeys[key] = struct{}{}

		valuePtr := fieldResolver(key)
		if valuePtr == nil {
			return jsonFormatError(fmt.Sprintf("Unknown key \"%s\"", key))
		}
		// This works like json.Unmarshal, in particular it allows us to implement UnmarshalJSON to implement strict parsing of the field value.
		if err := dec.Decode(valuePtr); err != nil {
			return jsonFormatError(err.Error())
		}
	}
	if _, err := dec.Token(); err != io.EOF {
		return jsonFormatError("Unexpected data after JSON object")
	}
	return nil
}
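
Since paranoidUnmarshalJSONObject is package-private, a usage sketch has to live in package signature. This hypothetical helper (not part of the vendored file) shows the fieldResolver pattern that every UnmarshalJSON in policy_config.go follows:

package signature

// point is an illustrative type only; the resolver returns a pointer for each
// recognized key, and nil for anything else, which paranoidUnmarshalJSONObject
// turns into a jsonFormatError.
type point struct {
	X, Y int
}

func (p *point) UnmarshalJSON(data []byte) error {
	*p = point{}
	return paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "x":
			return &p.X // the value for "x" is decoded directly into p.X
		case "y":
			return &p.Y
		default:
			return nil // unknown key => rejected
		}
	})
}

// With this, `{"x":1,"y":2}` decodes cleanly, while `{"x":1,"x":2}` (duplicate key)
// and `{"x":1,"z":3}` (unknown key) both fail instead of being silently ignored.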
124 vendor/github.com/containers/image/signature/mechanism.go generated vendored Normal file
@@ -0,0 +1,124 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.

package signature

import (
	"bytes"
	"fmt"

	"github.com/mtrmac/gpgme"
)

// SigningMechanism abstracts a way to sign binary blobs and verify their signatures.
// FIXME: Eventually expand on keyIdentity (namespace them between mechanisms to
// eliminate ambiguities, support CA signatures and perhaps other key properties)
type SigningMechanism interface {
	// ImportKeysFromBytes imports public keys from the supplied blob and returns their identities.
	// The blob is assumed to have an appropriate format (the caller is expected to know which one).
	// NOTE: This may modify long-term state (e.g. key storage in a directory underlying the mechanism).
	ImportKeysFromBytes(blob []byte) ([]string, error)
	// Sign creates a (non-detached) signature of input using keyIdentity
	Sign(input []byte, keyIdentity string) ([]byte, error)
	// Verify parses unverifiedSignature and returns the content and the signer's identity
	Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error)
}

// A GPG/OpenPGP signing mechanism.
type gpgSigningMechanism struct {
	ctx *gpgme.Context
}

// NewGPGSigningMechanism returns a new GPG/OpenPGP signing mechanism.
func NewGPGSigningMechanism() (SigningMechanism, error) {
	return newGPGSigningMechanismInDirectory("")
}

// newGPGSigningMechanismInDirectory returns a new GPG/OpenPGP signing mechanism, using optionalDir if not empty.
func newGPGSigningMechanismInDirectory(optionalDir string) (SigningMechanism, error) {
	ctx, err := gpgme.New()
	if err != nil {
		return nil, err
	}
	if err = ctx.SetProtocol(gpgme.ProtocolOpenPGP); err != nil {
		return nil, err
	}
	if optionalDir != "" {
		err := ctx.SetEngineInfo(gpgme.ProtocolOpenPGP, "", optionalDir)
		if err != nil {
			return nil, err
		}
	}
	ctx.SetArmor(false)
	ctx.SetTextMode(false)
	return gpgSigningMechanism{ctx: ctx}, nil
}

// ImportKeysFromBytes implements SigningMechanism.ImportKeysFromBytes
func (m gpgSigningMechanism) ImportKeysFromBytes(blob []byte) ([]string, error) {
	inputData, err := gpgme.NewDataBytes(blob)
	if err != nil {
		return nil, err
	}
	res, err := m.ctx.Import(inputData)
	if err != nil {
		return nil, err
	}
	keyIdentities := []string{}
	for _, i := range res.Imports {
		if i.Result == nil {
			keyIdentities = append(keyIdentities, i.Fingerprint)
		}
	}
	return keyIdentities, nil
}

// Sign implements SigningMechanism.Sign
func (m gpgSigningMechanism) Sign(input []byte, keyIdentity string) ([]byte, error) {
	key, err := m.ctx.GetKey(keyIdentity, true)
	if err != nil {
		if e, ok := err.(gpgme.Error); ok && e.Code() == gpgme.ErrorEOF {
			return nil, fmt.Errorf("key %q not found", keyIdentity)
		}
		return nil, err
	}
	inputData, err := gpgme.NewDataBytes(input)
	if err != nil {
		return nil, err
	}
	var sigBuffer bytes.Buffer
	sigData, err := gpgme.NewDataWriter(&sigBuffer)
	if err != nil {
		return nil, err
	}
	if err = m.ctx.Sign([]*gpgme.Key{key}, inputData, sigData, gpgme.SigModeNormal); err != nil {
		return nil, err
	}
	return sigBuffer.Bytes(), nil
}

// Verify implements SigningMechanism.Verify
func (m gpgSigningMechanism) Verify(unverifiedSignature []byte) (contents []byte, keyIdentity string, err error) {
	signedBuffer := bytes.Buffer{}
	signedData, err := gpgme.NewDataWriter(&signedBuffer)
	if err != nil {
		return nil, "", err
	}
	unverifiedSignatureData, err := gpgme.NewDataBytes(unverifiedSignature)
	if err != nil {
		return nil, "", err
	}
	_, sigs, err := m.ctx.Verify(unverifiedSignatureData, nil, signedData)
	if err != nil {
		return nil, "", err
	}
	if len(sigs) != 1 {
		return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Unexpected GPG signature count %d", len(sigs))}
	}
	sig := sigs[0]
	// This is sig.Summary == gpgme.SigSumValid except for key trust, which we handle ourselves
	if sig.Status != nil || sig.Validity == gpgme.ValidityNever || sig.ValidityReason != nil || sig.WrongKeyUsage {
		// FIXME: Better error reporting eventually
		return nil, "", InvalidSignatureError{msg: fmt.Sprintf("Invalid GPG signature: %#v", sig)}
	}
	return signedBuffer.Bytes(), sig.Fingerprint, nil
}
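
A short sketch of the mechanism round trip, under stated assumptions: it uses the default GPG home, and the key and signature file names are placeholders:

package main

import (
	"fmt"
	"io/ioutil"

	"github.com/containers/image/signature"
)

func main() {
	mech, err := signature.NewGPGSigningMechanism() // default GPG home directory
	if err != nil {
		panic(err)
	}
	keyBlob, err := ioutil.ReadFile("pubkey.gpg") // placeholder public key file
	if err != nil {
		panic(err)
	}
	ids, err := mech.ImportKeysFromBytes(keyBlob)
	if err != nil {
		panic(err)
	}
	fmt.Println("imported key fingerprints:", ids)

	// Verify returns the content embedded in a non-detached signature plus the signer.
	sigBlob, err := ioutil.ReadFile("blob.sig") // placeholder signature file
	if err != nil {
		panic(err)
	}
	contents, signer, err := mech.Verify(sigBlob)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed by %s: %d bytes of content\n", signer, len(contents))
}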
732 vendor/github.com/containers/image/signature/policy_config.go generated vendored Normal file
@@ -0,0 +1,732 @@
// policy_config.go handles creation of policy objects, either by parsing JSON
// or by programs building them programmatically.

// The New* constructors are intended to be a stable API. FIXME: after an independent review.

// Do not invoke the internals of the JSON marshaling/unmarshaling directly.

// We can't just blindly call json.Unmarshal because that would silently ignore
// typos, and that would just not do for security policy.

// FIXME? This is by no means a user-friendly parser: No location information in error messages, no other context.
// But at least it is not worse than blind json.Unmarshal()…

package signature

import (
	"encoding/json"
	"errors"
	"fmt"
	"io/ioutil"
	"path/filepath"

	"github.com/containers/image/docker/reference"
	"github.com/containers/image/transports"
	"github.com/containers/image/types"
)

// systemDefaultPolicyPath is the policy path used for DefaultPolicy().
// You can override this at build time with
// -ldflags '-X github.com/containers/image/signature.systemDefaultPolicyPath=$your_path'
var systemDefaultPolicyPath = builtinDefaultPolicyPath

// builtinDefaultPolicyPath is the policy path used for DefaultPolicy().
// DO NOT change this, instead see systemDefaultPolicyPath above.
const builtinDefaultPolicyPath = "/etc/containers/policy.json"

// InvalidPolicyFormatError is returned when parsing an invalid policy configuration.
type InvalidPolicyFormatError string

func (err InvalidPolicyFormatError) Error() string {
	return string(err)
}

// DefaultPolicy returns the default policy of the system.
// Most applications should be using this method to get the policy configured
// by the system administrator.
// ctx should usually be nil, can be set to override the default.
// NOTE: When this function returns an error, report it to the user and abort.
// DO NOT hard-code fallback policies in your application.
func DefaultPolicy(ctx *types.SystemContext) (*Policy, error) {
	return NewPolicyFromFile(defaultPolicyPath(ctx))
}

// defaultPolicyPath returns a path to the default policy of the system.
func defaultPolicyPath(ctx *types.SystemContext) string {
	if ctx != nil {
		if ctx.SignaturePolicyPath != "" {
			return ctx.SignaturePolicyPath
		}
		if ctx.RootForImplicitAbsolutePaths != "" {
			return filepath.Join(ctx.RootForImplicitAbsolutePaths, systemDefaultPolicyPath)
		}
	}
	return systemDefaultPolicyPath
}

// NewPolicyFromFile returns a policy configured in the specified file.
func NewPolicyFromFile(fileName string) (*Policy, error) {
	contents, err := ioutil.ReadFile(fileName)
	if err != nil {
		return nil, err
	}
	return NewPolicyFromBytes(contents)
}

// NewPolicyFromBytes returns a policy parsed from the specified blob.
// Use this function instead of calling json.Unmarshal directly.
func NewPolicyFromBytes(data []byte) (*Policy, error) {
	p := Policy{}
	if err := json.Unmarshal(data, &p); err != nil {
		return nil, InvalidPolicyFormatError(err.Error())
	}
	return &p, nil
}

// Compile-time check that Policy implements json.Unmarshaler.
var _ json.Unmarshaler = (*Policy)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (p *Policy) UnmarshalJSON(data []byte) error {
	*p = Policy{}
	transports := policyTransportsMap{}
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "default":
			return &p.Default
		case "transports":
			return &transports
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if p.Default == nil {
		return InvalidPolicyFormatError("Default policy is missing")
	}
	p.Transports = map[string]PolicyTransportScopes(transports)
	return nil
}

// policyTransportsMap is a specialization of this map type for the strict JSON parsing semantics appropriate for the Policy.Transports member.
type policyTransportsMap map[string]PolicyTransportScopes

// Compile-time check that policyTransportsMap implements json.Unmarshaler.
var _ json.Unmarshaler = (*policyTransportsMap)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (m *policyTransportsMap) UnmarshalJSON(data []byte) error {
	// We can't unmarshal directly into map values because it is not possible to take an address of a map value.
	// So, use a temporary map of pointers to the values and convert.
	tmpMap := map[string]*PolicyTransportScopes{}
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		transport, ok := transports.KnownTransports[key]
		if !ok {
			return nil
		}
		// paranoidUnmarshalJSONObject detects key duplication for us, check just to be safe.
		if _, ok := tmpMap[key]; ok {
			return nil
		}
		ptsWithTransport := policyTransportScopesWithTransport{
			transport: transport,
			dest:      &PolicyTransportScopes{}, // This allocates a new instance on each call.
		}
		tmpMap[key] = ptsWithTransport.dest
		return &ptsWithTransport
	}); err != nil {
		return err
	}
	for key, ptr := range tmpMap {
		(*m)[key] = *ptr
	}
	return nil
}

// Compile-time check that PolicyTransportScopes "implements" json.Unmarshaler.
// we want to only use policyTransportScopesWithTransport
var _ json.Unmarshaler = (*PolicyTransportScopes)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (m *PolicyTransportScopes) UnmarshalJSON(data []byte) error {
	return errors.New("Do not try to unmarshal PolicyTransportScopes directly")
}

// policyTransportScopesWithTransport is a way to unmarshal a PolicyTransportScopes
// while validating using a specific ImageTransport.
type policyTransportScopesWithTransport struct {
	transport types.ImageTransport
	dest      *PolicyTransportScopes
}

// Compile-time check that policyTransportScopesWithTransport implements json.Unmarshaler.
var _ json.Unmarshaler = (*policyTransportScopesWithTransport)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (m *policyTransportScopesWithTransport) UnmarshalJSON(data []byte) error {
	// We can't unmarshal directly into map values because it is not possible to take an address of a map value.
	// So, use a temporary map of pointers-to-slices and convert.
	tmpMap := map[string]*PolicyRequirements{}
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		// paranoidUnmarshalJSONObject detects key duplication for us, check just to be safe.
		if _, ok := tmpMap[key]; ok {
			return nil
		}
		if key != "" {
			if err := m.transport.ValidatePolicyConfigurationScope(key); err != nil {
				return nil
			}
		}
		ptr := &PolicyRequirements{} // This allocates a new instance on each call.
		tmpMap[key] = ptr
		return ptr
	}); err != nil {
		return err
	}
	for key, ptr := range tmpMap {
		(*m.dest)[key] = *ptr
	}
	return nil
}

// Compile-time check that PolicyRequirements implements json.Unmarshaler.
var _ json.Unmarshaler = (*PolicyRequirements)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (m *PolicyRequirements) UnmarshalJSON(data []byte) error {
	reqJSONs := []json.RawMessage{}
	if err := json.Unmarshal(data, &reqJSONs); err != nil {
		return err
	}
	if len(reqJSONs) == 0 {
		return InvalidPolicyFormatError("List of verification policy requirements must not be empty")
	}
	res := make([]PolicyRequirement, len(reqJSONs))
	for i, reqJSON := range reqJSONs {
		req, err := newPolicyRequirementFromJSON(reqJSON)
		if err != nil {
			return err
		}
		res[i] = req
	}
	*m = res
	return nil
}

// newPolicyRequirementFromJSON parses JSON data into a PolicyRequirement implementation.
func newPolicyRequirementFromJSON(data []byte) (PolicyRequirement, error) {
	var typeField prCommon
	if err := json.Unmarshal(data, &typeField); err != nil {
		return nil, err
	}
	var res PolicyRequirement
	switch typeField.Type {
	case prTypeInsecureAcceptAnything:
		res = &prInsecureAcceptAnything{}
	case prTypeReject:
		res = &prReject{}
	case prTypeSignedBy:
		res = &prSignedBy{}
	case prTypeSignedBaseLayer:
		res = &prSignedBaseLayer{}
	default:
		return nil, InvalidPolicyFormatError(fmt.Sprintf("Unknown policy requirement type \"%s\"", typeField.Type))
	}
	if err := json.Unmarshal(data, &res); err != nil {
		return nil, err
	}
	return res, nil
}

// newPRInsecureAcceptAnything is NewPRInsecureAcceptAnything, except it returns the private type.
func newPRInsecureAcceptAnything() *prInsecureAcceptAnything {
	return &prInsecureAcceptAnything{prCommon{Type: prTypeInsecureAcceptAnything}}
}

// NewPRInsecureAcceptAnything returns a new "insecureAcceptAnything" PolicyRequirement.
func NewPRInsecureAcceptAnything() PolicyRequirement {
	return newPRInsecureAcceptAnything()
}

// Compile-time check that prInsecureAcceptAnything implements json.Unmarshaler.
var _ json.Unmarshaler = (*prInsecureAcceptAnything)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prInsecureAcceptAnything) UnmarshalJSON(data []byte) error {
	*pr = prInsecureAcceptAnything{}
	var tmp prInsecureAcceptAnything
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prTypeInsecureAcceptAnything {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	*pr = *newPRInsecureAcceptAnything()
	return nil
}

// newPRReject is NewPRReject, except it returns the private type.
func newPRReject() *prReject {
	return &prReject{prCommon{Type: prTypeReject}}
}

// NewPRReject returns a new "reject" PolicyRequirement.
func NewPRReject() PolicyRequirement {
	return newPRReject()
}

// Compile-time check that prReject implements json.Unmarshaler.
var _ json.Unmarshaler = (*prReject)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prReject) UnmarshalJSON(data []byte) error {
	*pr = prReject{}
	var tmp prReject
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prTypeReject {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	*pr = *newPRReject()
	return nil
}

// newPRSignedBy returns a new prSignedBy if parameters are valid.
func newPRSignedBy(keyType sbKeyType, keyPath string, keyData []byte, signedIdentity PolicyReferenceMatch) (*prSignedBy, error) {
	if !keyType.IsValid() {
		return nil, InvalidPolicyFormatError(fmt.Sprintf("invalid keyType \"%s\"", keyType))
	}
	if len(keyPath) > 0 && len(keyData) > 0 {
		return nil, InvalidPolicyFormatError("keyPath and keyData cannot be used simultaneously")
	}
	if signedIdentity == nil {
		return nil, InvalidPolicyFormatError("signedIdentity not specified")
	}
	return &prSignedBy{
		prCommon:       prCommon{Type: prTypeSignedBy},
		KeyType:        keyType,
		KeyPath:        keyPath,
		KeyData:        keyData,
		SignedIdentity: signedIdentity,
	}, nil
}

// newPRSignedByKeyPath is NewPRSignedByKeyPath, except it returns the private type.
func newPRSignedByKeyPath(keyType sbKeyType, keyPath string, signedIdentity PolicyReferenceMatch) (*prSignedBy, error) {
	return newPRSignedBy(keyType, keyPath, nil, signedIdentity)
}

// NewPRSignedByKeyPath returns a new "signedBy" PolicyRequirement using a KeyPath
func NewPRSignedByKeyPath(keyType sbKeyType, keyPath string, signedIdentity PolicyReferenceMatch) (PolicyRequirement, error) {
	return newPRSignedByKeyPath(keyType, keyPath, signedIdentity)
}

// newPRSignedByKeyData is NewPRSignedByKeyData, except it returns the private type.
func newPRSignedByKeyData(keyType sbKeyType, keyData []byte, signedIdentity PolicyReferenceMatch) (*prSignedBy, error) {
	return newPRSignedBy(keyType, "", keyData, signedIdentity)
}

// NewPRSignedByKeyData returns a new "signedBy" PolicyRequirement using a KeyData
func NewPRSignedByKeyData(keyType sbKeyType, keyData []byte, signedIdentity PolicyReferenceMatch) (PolicyRequirement, error) {
	return newPRSignedByKeyData(keyType, keyData, signedIdentity)
}

// Compile-time check that prSignedBy implements json.Unmarshaler.
var _ json.Unmarshaler = (*prSignedBy)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prSignedBy) UnmarshalJSON(data []byte) error {
	*pr = prSignedBy{}
	var tmp prSignedBy
	var gotKeyPath, gotKeyData = false, false
	var signedIdentity json.RawMessage
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		case "keyType":
			return &tmp.KeyType
		case "keyPath":
			gotKeyPath = true
			return &tmp.KeyPath
		case "keyData":
			gotKeyData = true
			return &tmp.KeyData
		case "signedIdentity":
			return &signedIdentity
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prTypeSignedBy {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	if signedIdentity == nil {
		tmp.SignedIdentity = NewPRMMatchRepoDigestOrExact()
	} else {
		si, err := newPolicyReferenceMatchFromJSON(signedIdentity)
		if err != nil {
			return err
		}
		tmp.SignedIdentity = si
	}

	var res *prSignedBy
	var err error
	switch {
	case gotKeyPath && gotKeyData:
		return InvalidPolicyFormatError("keyPath and keyData cannot be used simultaneously")
	case gotKeyPath && !gotKeyData:
		res, err = newPRSignedByKeyPath(tmp.KeyType, tmp.KeyPath, tmp.SignedIdentity)
	case !gotKeyPath && gotKeyData:
		res, err = newPRSignedByKeyData(tmp.KeyType, tmp.KeyData, tmp.SignedIdentity)
	case !gotKeyPath && !gotKeyData:
		return InvalidPolicyFormatError("At least one of keyPath and keyData must be specified")
	default: // Coverage: This should never happen
		return fmt.Errorf("Impossible keyPath/keyData presence combination!?")
	}
	if err != nil {
		return err
	}
	*pr = *res

	return nil
}

// IsValid returns true iff kt is a recognized value
func (kt sbKeyType) IsValid() bool {
	switch kt {
	case SBKeyTypeGPGKeys, SBKeyTypeSignedByGPGKeys,
		SBKeyTypeX509Certificates, SBKeyTypeSignedByX509CAs:
		return true
	default:
		return false
	}
}

// Compile-time check that sbKeyType implements json.Unmarshaler.
var _ json.Unmarshaler = (*sbKeyType)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (kt *sbKeyType) UnmarshalJSON(data []byte) error {
	*kt = sbKeyType("")
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	if !sbKeyType(s).IsValid() {
		return InvalidPolicyFormatError(fmt.Sprintf("Unrecognized keyType value \"%s\"", s))
	}
	*kt = sbKeyType(s)
	return nil
}

// newPRSignedBaseLayer is NewPRSignedBaseLayer, except it returns the private type.
func newPRSignedBaseLayer(baseLayerIdentity PolicyReferenceMatch) (*prSignedBaseLayer, error) {
	if baseLayerIdentity == nil {
		return nil, InvalidPolicyFormatError("baseLayerIdentity not specified")
	}
	return &prSignedBaseLayer{
		prCommon:          prCommon{Type: prTypeSignedBaseLayer},
		BaseLayerIdentity: baseLayerIdentity,
	}, nil
}

// NewPRSignedBaseLayer returns a new "signedBaseLayer" PolicyRequirement.
func NewPRSignedBaseLayer(baseLayerIdentity PolicyReferenceMatch) (PolicyRequirement, error) {
	return newPRSignedBaseLayer(baseLayerIdentity)
}

// Compile-time check that prSignedBaseLayer implements json.Unmarshaler.
var _ json.Unmarshaler = (*prSignedBaseLayer)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (pr *prSignedBaseLayer) UnmarshalJSON(data []byte) error {
	*pr = prSignedBaseLayer{}
	var tmp prSignedBaseLayer
	var baseLayerIdentity json.RawMessage
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		case "baseLayerIdentity":
			return &baseLayerIdentity
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prTypeSignedBaseLayer {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	if baseLayerIdentity == nil {
		return InvalidPolicyFormatError("baseLayerIdentity not specified")
	}
	bli, err := newPolicyReferenceMatchFromJSON(baseLayerIdentity)
	if err != nil {
		return err
	}
	res, err := newPRSignedBaseLayer(bli)
	if err != nil {
		// Coverage: This should never happen, newPolicyReferenceMatchFromJSON has ensured bli is valid.
		return err
	}
	*pr = *res
	return nil
}

// newPolicyReferenceMatchFromJSON parses JSON data into a PolicyReferenceMatch implementation.
func newPolicyReferenceMatchFromJSON(data []byte) (PolicyReferenceMatch, error) {
	var typeField prmCommon
	if err := json.Unmarshal(data, &typeField); err != nil {
		return nil, err
	}
	var res PolicyReferenceMatch
	switch typeField.Type {
	case prmTypeMatchExact:
		res = &prmMatchExact{}
	case prmTypeMatchRepoDigestOrExact:
		res = &prmMatchRepoDigestOrExact{}
	case prmTypeMatchRepository:
		res = &prmMatchRepository{}
	case prmTypeExactReference:
		res = &prmExactReference{}
	case prmTypeExactRepository:
		res = &prmExactRepository{}
	default:
		return nil, InvalidPolicyFormatError(fmt.Sprintf("Unknown policy reference match type \"%s\"", typeField.Type))
	}
	if err := json.Unmarshal(data, &res); err != nil {
		return nil, err
	}
	return res, nil
}

// newPRMMatchExact is NewPRMMatchExact, except it returns the private type.
func newPRMMatchExact() *prmMatchExact {
	return &prmMatchExact{prmCommon{Type: prmTypeMatchExact}}
}

// NewPRMMatchExact returns a new "matchExact" PolicyReferenceMatch.
func NewPRMMatchExact() PolicyReferenceMatch {
	return newPRMMatchExact()
}

// Compile-time check that prmMatchExact implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchExact)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchExact) UnmarshalJSON(data []byte) error {
	*prm = prmMatchExact{}
	var tmp prmMatchExact
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prmTypeMatchExact {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	*prm = *newPRMMatchExact()
	return nil
}

// newPRMMatchRepoDigestOrExact is NewPRMMatchRepoDigestOrExact, except it returns the private type.
func newPRMMatchRepoDigestOrExact() *prmMatchRepoDigestOrExact {
	return &prmMatchRepoDigestOrExact{prmCommon{Type: prmTypeMatchRepoDigestOrExact}}
}

// NewPRMMatchRepoDigestOrExact returns a new "matchRepoDigestOrExact" PolicyReferenceMatch.
func NewPRMMatchRepoDigestOrExact() PolicyReferenceMatch {
	return newPRMMatchRepoDigestOrExact()
}

// Compile-time check that prmMatchRepoDigestOrExact implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchRepoDigestOrExact)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchRepoDigestOrExact) UnmarshalJSON(data []byte) error {
	*prm = prmMatchRepoDigestOrExact{}
	var tmp prmMatchRepoDigestOrExact
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prmTypeMatchRepoDigestOrExact {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	*prm = *newPRMMatchRepoDigestOrExact()
	return nil
}

// newPRMMatchRepository is NewPRMMatchRepository, except it returns the private type.
func newPRMMatchRepository() *prmMatchRepository {
	return &prmMatchRepository{prmCommon{Type: prmTypeMatchRepository}}
}

// NewPRMMatchRepository returns a new "matchRepository" PolicyReferenceMatch.
func NewPRMMatchRepository() PolicyReferenceMatch {
	return newPRMMatchRepository()
}

// Compile-time check that prmMatchRepository implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmMatchRepository)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmMatchRepository) UnmarshalJSON(data []byte) error {
	*prm = prmMatchRepository{}
	var tmp prmMatchRepository
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prmTypeMatchRepository {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}
	*prm = *newPRMMatchRepository()
	return nil
}

// newPRMExactReference is NewPRMExactReference, except it returns the private type.
func newPRMExactReference(dockerReference string) (*prmExactReference, error) {
	ref, err := reference.ParseNamed(dockerReference)
	if err != nil {
		return nil, InvalidPolicyFormatError(fmt.Sprintf("Invalid format of dockerReference %s: %s", dockerReference, err.Error()))
	}
	if reference.IsNameOnly(ref) {
		return nil, InvalidPolicyFormatError(fmt.Sprintf("dockerReference %s contains neither a tag nor digest", dockerReference))
	}
	return &prmExactReference{
		prmCommon:       prmCommon{Type: prmTypeExactReference},
		DockerReference: dockerReference,
	}, nil
}

// NewPRMExactReference returns a new "exactReference" PolicyReferenceMatch.
func NewPRMExactReference(dockerReference string) (PolicyReferenceMatch, error) {
	return newPRMExactReference(dockerReference)
}

// Compile-time check that prmExactReference implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmExactReference)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmExactReference) UnmarshalJSON(data []byte) error {
	*prm = prmExactReference{}
	var tmp prmExactReference
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		case "dockerReference":
			return &tmp.DockerReference
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prmTypeExactReference {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}

	res, err := newPRMExactReference(tmp.DockerReference)
	if err != nil {
		return err
	}
	*prm = *res
	return nil
}

// newPRMExactRepository is NewPRMExactRepository, except it returns the private type.
func newPRMExactRepository(dockerRepository string) (*prmExactRepository, error) {
	if _, err := reference.ParseNamed(dockerRepository); err != nil {
		return nil, InvalidPolicyFormatError(fmt.Sprintf("Invalid format of dockerRepository %s: %s", dockerRepository, err.Error()))
	}
	return &prmExactRepository{
		prmCommon:        prmCommon{Type: prmTypeExactRepository},
		DockerRepository: dockerRepository,
	}, nil
}

// NewPRMExactRepository returns a new "exactRepository" PolicyReferenceMatch.
func NewPRMExactRepository(dockerRepository string) (PolicyReferenceMatch, error) {
	return newPRMExactRepository(dockerRepository)
}

// Compile-time check that prmExactRepository implements json.Unmarshaler.
var _ json.Unmarshaler = (*prmExactRepository)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface.
func (prm *prmExactRepository) UnmarshalJSON(data []byte) error {
	*prm = prmExactRepository{}
	var tmp prmExactRepository
	if err := paranoidUnmarshalJSONObject(data, func(key string) interface{} {
		switch key {
		case "type":
			return &tmp.Type
		case "dockerRepository":
			return &tmp.DockerRepository
		default:
			return nil
		}
	}); err != nil {
		return err
	}

	if tmp.Type != prmTypeExactRepository {
		return InvalidPolicyFormatError(fmt.Sprintf("Unexpected policy requirement type \"%s\"", tmp.Type))
	}

	res, err := newPRMExactRepository(tmp.DockerRepository)
	if err != nil {
		return err
	}
	*prm = *res
	return nil
}
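
To make the strict parsing above concrete, a sketch that feeds a small policy through NewPolicyFromBytes; the registry scope and key path are placeholders, and any misspelled key is rejected rather than silently ignored:

package main

import (
	"fmt"

	"github.com/containers/image/signature"
)

func main() {
	// Shape matches the parser above: "default" is required; "transports" maps
	// transport name -> scope -> list of requirements.
	policyJSON := []byte(`{
	    "default": [{"type": "insecureAcceptAnything"}],
	    "transports": {
	        "atomic": {
	            "registry.example.com/myproject": [
	                {"type": "signedBy",
	                 "keyType": "GPGKeys",
	                 "keyPath": "/etc/pki/pubkey.gpg"}
	            ]
	        }
	    }
	}`)
	policy, err := signature.NewPolicyFromBytes(policyJSON)
	if err != nil {
		panic(err) // e.g. misspelling "keyPath" as "keypath" fails here
	}
	fmt.Printf("transports configured: %d\n", len(policy.Transports))
}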
287 vendor/github.com/containers/image/signature/policy_eval.go generated vendored Normal file
@@ -0,0 +1,287 @@
// This defines the top-level policy evaluation API.
// To the extent possible, the interface of the functions provided
// here is intended to be completely unambiguous, and stable for users
// to rely on.

package signature

import (
	"fmt"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/types"
)

// PolicyRequirementError is an explanatory text for rejecting a signature or an image.
type PolicyRequirementError string

func (err PolicyRequirementError) Error() string {
	return string(err)
}

// signatureAcceptanceResult is the principal value returned by isSignatureAuthorAccepted.
type signatureAcceptanceResult string

const (
	sarAccepted signatureAcceptanceResult = "sarAccepted"
	sarRejected signatureAcceptanceResult = "sarRejected"
	sarUnknown  signatureAcceptanceResult = "sarUnknown"
)

// PolicyRequirement is a rule which must be satisfied by at least one of the signatures of an image.
// The type is public, but its definition is private.
type PolicyRequirement interface {
	// FIXME: For speed, we should support creating per-context state (not stored in the PolicyRequirement), to cache
	// costly initialization like creating temporary GPG home directories and reading files.
	// Setup() (someState, error)
	// Then, the operations below would be done on the someState object, not directly on a PolicyRequirement.

	// isSignatureAuthorAccepted, given an image and a signature blob, returns:
	// - sarAccepted if the signature has been verified against the appropriate public key
	//   (where "appropriate public key" may depend on the contents of the signature);
	//   in that case a parsed Signature should be returned.
	// - sarRejected if the signature has not been verified;
	//   in that case error must be non-nil, and should be a PolicyRequirementError if evaluation
	//   succeeded but the result was rejection.
	// - sarUnknown if this PolicyRequirement does not deal with signatures.
	//   NOTE: sarUnknown should not be returned if this PolicyRequirement should make a decision but something failed.
	//   Returning sarUnknown and a non-nil error value is invalid.
	// WARNING: This makes the signature contents acceptable for further processing,
	// but it does not necessarily mean that the contents of the signature are
	// consistent with local policy.
	// For example:
	// - Do not use a true value to determine whether to run
	//   a container based on this image; use IsRunningImageAllowed instead.
	// - Just because a signature is accepted does not automatically mean the contents of the
	//   signature are authorized to run code as root, or to affect system or cluster configuration.
	isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error)

	// isRunningImageAllowed returns true if the requirement allows running an image.
	// If it returns false, err must be non-nil, and should be a PolicyRequirementError if evaluation
	// succeeded but the result was rejection.
	// WARNING: This validates signatures and the manifest, but does not download or validate the
	// layers. Users must validate that the layers match their expected digests.
	isRunningImageAllowed(image types.UnparsedImage) (bool, error)
}
// PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
// The type is public, but its implementation is private.
type PolicyReferenceMatch interface {
	// matchesDockerReference decides whether a specific image identity is accepted for an image
	// (or, usually, for the image's Reference().DockerReference()). Note that
	// image.Reference().DockerReference() may be nil.
	matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool
}

// PolicyContext encapsulates a policy and possible cached state
// for speeding up its evaluation.
type PolicyContext struct {
	Policy *Policy
	state  policyContextState // Internal consistency checking
}

// policyContextState is used internally to verify the users are not misusing a PolicyContext.
type policyContextState string

const (
	pcInvalid      policyContextState = ""
	pcInitializing policyContextState = "Initializing"
	pcReady        policyContextState = "Ready"
	pcInUse        policyContextState = "InUse"
	pcDestroying   policyContextState = "Destroying"
	pcDestroyed    policyContextState = "Destroyed"
)

// changeState changes pc.state, or fails if the state is unexpected
func (pc *PolicyContext) changeState(expected, new policyContextState) error {
	if pc.state != expected {
		return fmt.Errorf(`Invalid PolicyContext state, expected "%s", found "%s"`, expected, pc.state)
	}
	pc.state = new
	return nil
}

// NewPolicyContext sets up and initializes a context for the specified policy.
// The policy must not be modified while the context exists. FIXME: make a deep copy?
// If this function succeeds, the caller should call PolicyContext.Destroy() when done.
func NewPolicyContext(policy *Policy) (*PolicyContext, error) {
	pc := &PolicyContext{Policy: policy, state: pcInitializing}
	// FIXME: initialize
	if err := pc.changeState(pcInitializing, pcReady); err != nil {
		// Huh?! This should never fail, we didn't give the pointer to anybody.
		// Just give up and leave unclean state around.
		return nil, err
	}
	return pc, nil
}

// Destroy should be called when the user of the context is done with it.
func (pc *PolicyContext) Destroy() error {
	if err := pc.changeState(pcReady, pcDestroying); err != nil {
		return err
	}
	// FIXME: destroy
	return pc.changeState(pcDestroying, pcDestroyed)
}
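// For orientation, the intended lifecycle looks roughly like this (a sketch:
// the policy literal and `img` are placeholders, and the constructor name is
// assumed from the NewPR* helpers in policy_config.go):
//
//	policy := &Policy{Default: PolicyRequirements{NewPRReject()}}
//	pc, err := NewPolicyContext(policy)
//	if err != nil {
//		return err
//	}
//	defer pc.Destroy() // moves the context from pcReady to pcDestroyed
//	allowed, err := pc.IsRunningImageAllowed(img) // img is a types.UnparsedImage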
// policyIdentityLogName returns a string description of the image identity for policy purposes.
// ONLY use this for log messages, not for any decisions!
func policyIdentityLogName(ref types.ImageReference) string {
	return ref.Transport().Name() + ":" + ref.PolicyConfigurationIdentity()
}

// requirementsForImageRef selects the appropriate requirements for ref.
func (pc *PolicyContext) requirementsForImageRef(ref types.ImageReference) PolicyRequirements {
	// Do we have a PolicyTransportScopes for this transport?
	transportName := ref.Transport().Name()
	if transportScopes, ok := pc.Policy.Transports[transportName]; ok {
		// Look for a full match.
		identity := ref.PolicyConfigurationIdentity()
		if req, ok := transportScopes[identity]; ok {
			logrus.Debugf(` Using transport "%s" policy section %s`, transportName, identity)
			return req
		}

		// Look for a match of the possible parent namespaces.
		for _, name := range ref.PolicyConfigurationNamespaces() {
			if req, ok := transportScopes[name]; ok {
				logrus.Debugf(` Using transport "%s" specific policy section %s`, transportName, name)
				return req
			}
		}

		// Look for a default match for the transport.
		if req, ok := transportScopes[""]; ok {
			logrus.Debugf(` Using transport "%s" policy section ""`, transportName)
			return req
		}
	}

	logrus.Debugf(" Using default policy section")
	return pc.Policy.Default
}
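// The lookup order above is: exact identity, then each parent namespace, then
// the transport's "" scope, then Policy.Default. For a docker reference such
// as docker.io/library/busybox:latest the candidate scopes are tried roughly
// as follows (the exact strings come from the transport's
// PolicyConfigurationNamespaces; these are illustrative):
//
//	docker.io/library/busybox:latest   (exact identity)
//	docker.io/library/busybox          (repository)
//	docker.io/library                  (parent namespace)
//	docker.io                          (host)
//	""                                 (transport-wide default)
//	Policy.Default                     (global default)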
// GetSignaturesWithAcceptedAuthor returns those signatures from an image
// for which the policy accepts the author (and which have been successfully
// verified).
// NOTE: This may legitimately return an empty list and no error, if the image
// has no signatures or only invalid signatures.
// WARNING: This makes the signature contents acceptable for further processing,
// but it does not necessarily mean that the contents of the signature are
// consistent with local policy.
// For example:
// - Do not use the existence of an accepted signature to determine whether to run
//   a container based on this image; use IsRunningImageAllowed instead.
// - Just because a signature is accepted does not automatically mean the contents of the
//   signature are authorized to run code as root, or to affect system or cluster configuration.
func (pc *PolicyContext) GetSignaturesWithAcceptedAuthor(image types.UnparsedImage) (sigs []*Signature, finalErr error) {
	if err := pc.changeState(pcReady, pcInUse); err != nil {
		return nil, err
	}
	defer func() {
		if err := pc.changeState(pcInUse, pcReady); err != nil {
			sigs = nil
			finalErr = err
		}
	}()

	logrus.Debugf("GetSignaturesWithAcceptedAuthor for image %s", policyIdentityLogName(image.Reference()))
	reqs := pc.requirementsForImageRef(image.Reference())

	// FIXME: rename Signatures to UnverifiedSignatures
	unverifiedSignatures, err := image.Signatures()
	if err != nil {
		return nil, err
	}

	res := make([]*Signature, 0, len(unverifiedSignatures))
	for sigNumber, sig := range unverifiedSignatures {
		var acceptedSig *Signature // non-nil if accepted
		rejected := false
		// FIXME? Say more about the contents of the signature, i.e. parse it even before verification?!
		logrus.Debugf("Evaluating signature %d:", sigNumber)
	interpretingReqs:
		for reqNumber, req := range reqs {
			// FIXME: Log the requirement itself? For now, we use just the number.
			// FIXME: supply state
			switch res, as, err := req.isSignatureAuthorAccepted(image, sig); res {
			case sarAccepted:
				if as == nil { // Coverage: this should never happen
					logrus.Debugf(" Requirement %d: internal inconsistency: sarAccepted but no parsed contents", reqNumber)
					rejected = true
					break interpretingReqs
				}
				logrus.Debugf(" Requirement %d: signature accepted", reqNumber)
				if acceptedSig == nil {
					acceptedSig = as
				} else if *as != *acceptedSig { // Coverage: this should never happen
					// Huh?! Two ways of verifying the same signature blob resulted in two different parses of its already accepted contents?
					logrus.Debugf(" Requirement %d: internal inconsistency: sarAccepted but different parsed contents", reqNumber)
					rejected = true
					acceptedSig = nil
					break interpretingReqs
				}
			case sarRejected:
				logrus.Debugf(" Requirement %d: signature rejected: %s", reqNumber, err.Error())
				rejected = true
				break interpretingReqs
			case sarUnknown:
				if err != nil { // Coverage: this should never happen
					logrus.Debugf(" Requirement %d: internal inconsistency: sarUnknown but an error message %s", reqNumber, err.Error())
					rejected = true
					break interpretingReqs
				}
				logrus.Debugf(" Requirement %d: signature state unknown, continuing", reqNumber)
			default: // Coverage: this should never happen
				logrus.Debugf(" Requirement %d: internal inconsistency: unknown result %#v", reqNumber, string(res))
				rejected = true
				break interpretingReqs
			}
		}
		// This also handles the (invalid) case of empty reqs, by rejecting the signature.
		if acceptedSig != nil && !rejected {
			logrus.Debugf(" Overall: OK, signature accepted")
			res = append(res, acceptedSig)
		} else {
			logrus.Debugf(" Overall: Signature not accepted")
		}
	}
	return res, nil
}
// IsRunningImageAllowed returns true iff the policy allows running the image.
// If it returns false, err must be non-nil, and should be a PolicyRequirementError if evaluation
// succeeded but the result was rejection.
// WARNING: This validates signatures and the manifest, but does not download or validate the
// layers. Users must validate that the layers match their expected digests.
func (pc *PolicyContext) IsRunningImageAllowed(image types.UnparsedImage) (res bool, finalErr error) {
	if err := pc.changeState(pcReady, pcInUse); err != nil {
		return false, err
	}
	defer func() {
		if err := pc.changeState(pcInUse, pcReady); err != nil {
			res = false
			finalErr = err
		}
	}()

	logrus.Debugf("IsRunningImageAllowed for image %s", policyIdentityLogName(image.Reference()))
	reqs := pc.requirementsForImageRef(image.Reference())

	if len(reqs) == 0 {
		return false, PolicyRequirementError("List of verification policy requirements must not be empty")
	}

	for reqNumber, req := range reqs {
		// FIXME: supply state
		allowed, err := req.isRunningImageAllowed(image)
		if !allowed {
			logrus.Debugf("Requirement %d: denied, done", reqNumber)
			return false, err
		}
		logrus.Debugf(" Requirement %d: allowed", reqNumber)
	}
	// We have tested that len(reqs) != 0, so at least one req must have explicitly allowed this image.
	logrus.Debugf("Overall: allowed")
	return true, nil
}
18
vendor/github.com/containers/image/signature/policy_eval_baselayer.go
generated
vendored
Normal file
@ -0,0 +1,18 @@
// Policy evaluation for prSignedBaseLayer.

package signature

import (
	"github.com/Sirupsen/logrus"
	"github.com/containers/image/types"
)

func (pr *prSignedBaseLayer) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
	return sarUnknown, nil, nil
}

func (pr *prSignedBaseLayer) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
	// FIXME? Reject this at policy parsing time already?
	logrus.Errorf("signedBaseLayer not implemented yet!")
	return false, PolicyRequirementError("signedBaseLayer not implemented yet!")
}
138
vendor/github.com/containers/image/signature/policy_eval_signedby.go
generated
vendored
Normal file
@ -0,0 +1,138 @@
// Policy evaluation for prSignedBy.

package signature

import (
	"errors"
	"fmt"
	"io/ioutil"
	"os"
	"strings"

	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/docker/distribution/digest"
)

func (pr *prSignedBy) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
	switch pr.KeyType {
	case SBKeyTypeGPGKeys:
	case SBKeyTypeSignedByGPGKeys, SBKeyTypeX509Certificates, SBKeyTypeSignedByX509CAs:
		// FIXME? Reject this at policy parsing time already?
		return sarRejected, nil, fmt.Errorf(`Unimplemented "keyType" value "%s"`, string(pr.KeyType))
	default:
		// This should never happen, newPRSignedBy ensures KeyType.IsValid()
		return sarRejected, nil, fmt.Errorf(`Unknown "keyType" value "%s"`, string(pr.KeyType))
	}

	if pr.KeyPath != "" && pr.KeyData != nil {
		return sarRejected, nil, errors.New(`Internal inconsistency: both "keyPath" and "keyData" specified`)
	}
	// FIXME: move this to per-context initialization
	var data []byte
	if pr.KeyData != nil {
		data = pr.KeyData
	} else {
		d, err := ioutil.ReadFile(pr.KeyPath)
		if err != nil {
			return sarRejected, nil, err
		}
		data = d
	}

	// FIXME: move this to per-context initialization
	dir, err := ioutil.TempDir("", "skopeo-signedBy-")
	if err != nil {
		return sarRejected, nil, err
	}
	defer os.RemoveAll(dir)
	mech, err := newGPGSigningMechanismInDirectory(dir)
	if err != nil {
		return sarRejected, nil, err
	}

	trustedIdentities, err := mech.ImportKeysFromBytes(data)
	if err != nil {
		return sarRejected, nil, err
	}
	if len(trustedIdentities) == 0 {
		return sarRejected, nil, PolicyRequirementError("No public keys imported")
	}

	signature, err := verifyAndExtractSignature(mech, sig, signatureAcceptanceRules{
		validateKeyIdentity: func(keyIdentity string) error {
			for _, trustedIdentity := range trustedIdentities {
				if keyIdentity == trustedIdentity {
					return nil
				}
			}
			// Coverage: We use a private GPG home directory and only import trusted keys, so this should
			// not be reachable.
			return PolicyRequirementError(fmt.Sprintf("Signature by key %s is not accepted", keyIdentity))
		},
		validateSignedDockerReference: func(ref string) error {
			if !pr.SignedIdentity.matchesDockerReference(image, ref) {
				return PolicyRequirementError(fmt.Sprintf("Signature for identity %s is not accepted", ref))
			}
			return nil
		},
		validateSignedDockerManifestDigest: func(digest digest.Digest) error {
			m, _, err := image.Manifest()
			if err != nil {
				return err
			}
			digestMatches, err := manifest.MatchesDigest(m, digest)
			if err != nil {
				return err
			}
			if !digestMatches {
				return PolicyRequirementError(fmt.Sprintf("Signature for digest %s does not match", digest))
			}
			return nil
		},
	})
	if err != nil {
		return sarRejected, nil, err
	}

	return sarAccepted, signature, nil
}
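// A signedBy requirement exercising the code path above might look like this
// in a policy file (a sketch; the key path and identity match are
// illustrative):
//
//	{
//		"type": "signedBy",
//		"keyType": "GPGKeys",
//		"keyPath": "/etc/pki/containers/example-key.gpg",
//		"signedIdentity": {"type": "matchRepoDigestOrExact"}
//	}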
func (pr *prSignedBy) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
	sigs, err := image.Signatures()
	if err != nil {
		return false, err
	}
	var rejections []error
	for _, s := range sigs {
		var reason error
		switch res, _, err := pr.isSignatureAuthorAccepted(image, s); res {
		case sarAccepted:
			// One accepted signature is enough.
			return true, nil
		case sarRejected:
			reason = err
		case sarUnknown:
			// Huh?! This should not happen at all; treat it as any other invalid value.
			fallthrough
		default:
			reason = fmt.Errorf(`Internal error: Unexpected signature verification result "%s"`, string(res))
		}
		rejections = append(rejections, reason)
	}
	var summary error
	switch len(rejections) {
	case 0:
		summary = PolicyRequirementError("A signature was required, but no signature exists")
	case 1:
		summary = rejections[0]
	default:
		var msgs []string
		for _, e := range rejections {
			msgs = append(msgs, e.Error())
		}
		summary = PolicyRequirementError(fmt.Sprintf("None of the signatures were accepted, reasons: %s",
			strings.Join(msgs, "; ")))
	}
	return false, summary
}
28
vendor/github.com/containers/image/signature/policy_eval_simple.go
generated
vendored
Normal file
@ -0,0 +1,28 @@
// Policy evaluation for the various simple PolicyRequirement types.

package signature

import (
	"fmt"

	"github.com/containers/image/transports"
	"github.com/containers/image/types"
)

func (pr *prInsecureAcceptAnything) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
	// prInsecureAcceptAnything semantics: Every image is allowed to run,
	// but this does not consider the signature as verified.
	return sarUnknown, nil, nil
}

func (pr *prInsecureAcceptAnything) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
	return true, nil
}

func (pr *prReject) isSignatureAuthorAccepted(image types.UnparsedImage, sig []byte) (signatureAcceptanceResult, *Signature, error) {
	return sarRejected, nil, PolicyRequirementError(fmt.Sprintf("Any signatures for image %s are rejected by policy.", transports.ImageName(image.Reference())))
}

func (pr *prReject) isRunningImageAllowed(image types.UnparsedImage) (bool, error) {
	return false, PolicyRequirementError(fmt.Sprintf("Running image %s is rejected by policy.", transports.ImageName(image.Reference())))
}
101
vendor/github.com/containers/image/signature/policy_reference_match.go
generated
vendored
Normal file
@ -0,0 +1,101 @@
// PolicyReferenceMatch implementations.

package signature

import (
	"fmt"

	"github.com/containers/image/docker/reference"
	"github.com/containers/image/transports"
	"github.com/containers/image/types"
)

// parseImageAndDockerReference converts an image and a reference string into two parsed entities, failing on any error and handling unidentified images.
func parseImageAndDockerReference(image types.UnparsedImage, s2 string) (reference.Named, reference.Named, error) {
	r1 := image.Reference().DockerReference()
	if r1 == nil {
		return nil, nil, PolicyRequirementError(fmt.Sprintf("Docker reference match attempted on image %s with no known Docker reference identity",
			transports.ImageName(image.Reference())))
	}
	r2, err := reference.ParseNamed(s2)
	if err != nil {
		return nil, nil, err
	}
	return r1, r2, nil
}

func (prm *prmMatchExact) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
	intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
	if err != nil {
		return false
	}
	// Do not add default tags: image.Reference().DockerReference() should contain it already, and signatureDockerReference should be exact; so, verify that now.
	if reference.IsNameOnly(intended) || reference.IsNameOnly(signature) {
		return false
	}
	return signature.String() == intended.String()
}

func (prm *prmMatchRepoDigestOrExact) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
	intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
	if err != nil {
		return false
	}

	// Do not add default tags: image.Reference().DockerReference() should contain it already, and signatureDockerReference should be exact; so, verify that now.
	if reference.IsNameOnly(signature) {
		return false
	}
	switch intended.(type) {
	case reference.NamedTagged: // Includes the case when intended has both a tag and a digest.
		return signature.String() == intended.String()
	case reference.Canonical:
		// We don't actually compare the manifest digest against the signature here; that happens in prSignedBy, via UnparsedImage.Manifest.
		// Because UnparsedImage.Manifest verifies the intended.Digest() against the manifest, and prSignedBy verifies the signature digest against the manifest,
		// we know that signature digest matches intended.Digest() (but intended.Digest() and signature digest may use different algorithms)
		return signature.Name() == intended.Name()
	default: // !reference.IsNameOnly(intended)
		return false
	}
}
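// To make the match levels concrete, a few illustrative outcomes for a
// signature claiming the identity example.com/ns/app:v1 (hypothetical names):
//
//	matchExact:             accepts only an image identity of exactly example.com/ns/app:v1
//	matchRepoDigestOrExact: additionally accepts example.com/ns/app@sha256:... identities,
//	                        since the digest itself is verified elsewhere
//	matchRepository:        accepts any tag within example.com/ns/app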
func (prm *prmMatchRepository) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
	intended, signature, err := parseImageAndDockerReference(image, signatureDockerReference)
	if err != nil {
		return false
	}
	return signature.Name() == intended.Name()
}

// parseDockerReferences converts two reference strings into parsed entities, failing on any error
func parseDockerReferences(s1, s2 string) (reference.Named, reference.Named, error) {
	r1, err := reference.ParseNamed(s1)
	if err != nil {
		return nil, nil, err
	}
	r2, err := reference.ParseNamed(s2)
	if err != nil {
		return nil, nil, err
	}
	return r1, r2, nil
}

func (prm *prmExactReference) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
	intended, signature, err := parseDockerReferences(prm.DockerReference, signatureDockerReference)
	if err != nil {
		return false
	}
	// prm.DockerReference and signatureDockerReference should be exact; so, verify that now.
	if reference.IsNameOnly(intended) || reference.IsNameOnly(signature) {
		return false
	}
	return signature.String() == intended.String()
}

func (prm *prmExactRepository) matchesDockerReference(image types.UnparsedImage, signatureDockerReference string) bool {
	intended, signature, err := parseDockerReferences(prm.DockerRepository, signatureDockerReference)
	if err != nil {
		return false
	}
	return signature.Name() == intended.Name()
}
152
vendor/github.com/containers/image/signature/policy_types.go
generated
vendored
Normal file
@ -0,0 +1,152 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.

// This defines types used to represent a signature verification policy in memory.
// Do not use the private types directly; either parse a configuration file, or construct a Policy from PolicyRequirements
// built using the constructor functions provided in policy_config.go.

package signature

// NOTE: Keep this in sync with docs/policy.json.md!

// Policy defines requirements for considering a signature, or an image, valid.
type Policy struct {
	// Default applies to any image which does not have a matching policy in Transports.
	// Note that this can happen even if a matching PolicyTransportScopes exists in Transports
	// if the image matches none of the scopes.
	Default    PolicyRequirements               `json:"default"`
	Transports map[string]PolicyTransportScopes `json:"transports"`
}

// PolicyTransportScopes defines policies for images for a specific transport,
// for various scopes, the map keys.
// Scopes are defined by the transport (types.ImageReference.PolicyConfigurationIdentity etc.);
// there is one scope precisely matching a single image, and namespace scopes as prefixes
// of the single-image scope. (e.g. hostname[/zero[/or[/more[/namespaces[/individualimage]]]]])
// The empty scope, if it exists, is considered a parent namespace of all other scopes.
// Most specific scope wins, duplication is prohibited (hard failure).
type PolicyTransportScopes map[string]PolicyRequirements

// PolicyRequirements is a set of requirements applying to a set of images; each of them must be satisfied (though perhaps each by a different signature).
// Must not be empty, frequently will only contain a single element.
type PolicyRequirements []PolicyRequirement
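// Putting these types together, a small policy file might look like this
// (a sketch only; the registry name and key path are placeholders):
//
//	{
//		"default": [{"type": "reject"}],
//		"transports": {
//			"docker": {
//				"registry.example.com": [
//					{
//						"type": "signedBy",
//						"keyType": "GPGKeys",
//						"keyPath": "/etc/pki/containers/example-key.gpg"
//					}
//				],
//				"": [{"type": "insecureAcceptAnything"}]
//			}
//		}
//	}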
// PolicyRequirement is a rule which must be satisfied by at least one of the signatures of an image.
// The type is public, but its definition is private.

// prCommon is the common type field in a JSON encoding of PolicyRequirement.
type prCommon struct {
	Type prTypeIdentifier `json:"type"`
}

// prTypeIdentifier is a string designating a kind of a PolicyRequirement.
type prTypeIdentifier string

const (
	prTypeInsecureAcceptAnything prTypeIdentifier = "insecureAcceptAnything"
	prTypeReject                 prTypeIdentifier = "reject"
	prTypeSignedBy               prTypeIdentifier = "signedBy"
	prTypeSignedBaseLayer        prTypeIdentifier = "signedBaseLayer"
)

// prInsecureAcceptAnything is a PolicyRequirement with type = prTypeInsecureAcceptAnything:
// every image is allowed to run.
// Note that because PolicyRequirements are implicitly ANDed, this is necessary only if it is the only rule (to make the list non-empty and the policy explicit).
// NOTE: This allows the image to run; it DOES NOT consider the signature verified (per IsSignatureAuthorAccepted).
// FIXME? Better name?
type prInsecureAcceptAnything struct {
	prCommon
}

// prReject is a PolicyRequirement with type = prTypeReject: every image is rejected.
type prReject struct {
	prCommon
}

// prSignedBy is a PolicyRequirement with type = prTypeSignedBy: the image is signed by trusted keys for a specified identity
type prSignedBy struct {
	prCommon

	// KeyType specifies what kind of key reference KeyPath/KeyData is.
	// Acceptable values are “GPGKeys” | “signedByGPGKeys” | “X.509Certificates” | “signedByX.509CAs”
	// FIXME: eventually also support GPGTOFU, X.509TOFU, with KeyPath only
	KeyType sbKeyType `json:"keyType"`

	// KeyPath is a pathname to a local file containing the trusted key(s). Exactly one of KeyPath and KeyData must be specified.
	KeyPath string `json:"keyPath,omitempty"`
	// KeyData contains the trusted key(s), base64-encoded. Exactly one of KeyPath and KeyData must be specified.
	KeyData []byte `json:"keyData,omitempty"`

	// SignedIdentity specifies what image identity the signature must be claiming about the image.
	// Defaults to "match-exact" if not specified.
	SignedIdentity PolicyReferenceMatch `json:"signedIdentity"`
}

// sbKeyType are the allowed values for prSignedBy.KeyType
type sbKeyType string

const (
	// SBKeyTypeGPGKeys refers to keys contained in a GPG keyring
	SBKeyTypeGPGKeys sbKeyType = "GPGKeys"
	// SBKeyTypeSignedByGPGKeys refers to keys signed by keys in a GPG keyring
	SBKeyTypeSignedByGPGKeys sbKeyType = "signedByGPGKeys"
	// SBKeyTypeX509Certificates refers to keys in a set of X.509 certificates
	// FIXME: PEM, DER?
	SBKeyTypeX509Certificates sbKeyType = "X509Certificates"
	// SBKeyTypeSignedByX509CAs refers to keys signed by one of the X.509 CAs
	// FIXME: PEM, DER?
	SBKeyTypeSignedByX509CAs sbKeyType = "signedByX509CAs"
)

// prSignedBaseLayer is a PolicyRequirement with type = prSignedBaseLayer: the image has a specified, correctly signed, base image.
type prSignedBaseLayer struct {
	prCommon
	// BaseLayerIdentity specifies the base image to look for. "match-exact" is rejected, "match-repository" is unlikely to be useful.
	BaseLayerIdentity PolicyReferenceMatch `json:"baseLayerIdentity"`
}

// PolicyReferenceMatch specifies a set of image identities accepted in PolicyRequirement.
// The type is public, but its implementation is private.

// prmCommon is the common type field in a JSON encoding of PolicyReferenceMatch.
type prmCommon struct {
	Type prmTypeIdentifier `json:"type"`
}

// prmTypeIdentifier is a string designating a kind of a PolicyReferenceMatch.
type prmTypeIdentifier string

const (
	prmTypeMatchExact             prmTypeIdentifier = "matchExact"
	prmTypeMatchRepoDigestOrExact prmTypeIdentifier = "matchRepoDigestOrExact"
	prmTypeMatchRepository        prmTypeIdentifier = "matchRepository"
	prmTypeExactReference         prmTypeIdentifier = "exactReference"
	prmTypeExactRepository        prmTypeIdentifier = "exactRepository"
)

// prmMatchExact is a PolicyReferenceMatch with type = prmMatchExact: the two references must match exactly.
type prmMatchExact struct {
	prmCommon
}

// prmMatchRepoDigestOrExact is a PolicyReferenceMatch with type = prmMatchRepoDigestOrExact: the two references must match exactly,
// except that digest references are also accepted if the repository name matches (regardless of tag/digest) and the signature applies to the referenced digest
type prmMatchRepoDigestOrExact struct {
	prmCommon
}

// prmMatchRepository is a PolicyReferenceMatch with type = prmMatchRepository: the two references must use the same repository, may differ in the tag.
type prmMatchRepository struct {
	prmCommon
}

// prmExactReference is a PolicyReferenceMatch with type = prmExactReference: matches a specified reference exactly.
type prmExactReference struct {
	prmCommon
	DockerReference string `json:"dockerReference"`
}

// prmExactRepository is a PolicyReferenceMatch with type = prmExactRepository: matches a specified repository, with any tag.
type prmExactRepository struct {
	prmCommon
	DockerRepository string `json:"dockerRepository"`
}
192
vendor/github.com/containers/image/signature/signature.go
generated
vendored
Normal file
@ -0,0 +1,192 @@
// Note: Consider the API unstable until the code supports at least three different image formats or transports.

package signature

import (
	"encoding/json"
	"errors"
	"fmt"
	"time"

	"github.com/containers/image/version"
	"github.com/docker/distribution/digest"
)

const (
	signatureType = "atomic container signature"
)

// InvalidSignatureError is returned when parsing an invalid signature.
type InvalidSignatureError struct {
	msg string
}

func (err InvalidSignatureError) Error() string {
	return err.msg
}

// Signature is a parsed content of a signature.
type Signature struct {
	DockerManifestDigest digest.Digest
	DockerReference      string // FIXME: more precise type?
}

// privateSignature wraps Signature to add methods which we don't want to make public.
type privateSignature struct {
	Signature
}

// Compile-time check that privateSignature implements json.Marshaler
var _ json.Marshaler = (*privateSignature)(nil)

// MarshalJSON implements the json.Marshaler interface.
func (s privateSignature) MarshalJSON() ([]byte, error) {
	return s.marshalJSONWithVariables(time.Now().UTC().Unix(), "atomic "+version.Version)
}

// Implementation of MarshalJSON, with caller-chosen values of the variable items to help testing.
func (s privateSignature) marshalJSONWithVariables(timestamp int64, creatorID string) ([]byte, error) {
	if s.DockerManifestDigest == "" || s.DockerReference == "" {
		return nil, errors.New("Unexpected empty signature content")
	}
	critical := map[string]interface{}{
		"type":     signatureType,
		"image":    map[string]string{"docker-manifest-digest": s.DockerManifestDigest.String()},
		"identity": map[string]string{"docker-reference": s.DockerReference},
	}
	optional := map[string]interface{}{
		"creator":   creatorID,
		"timestamp": timestamp,
	}
	signature := map[string]interface{}{
		"critical": critical,
		"optional": optional,
	}
	return json.Marshal(signature)
}
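// For reference, the marshaled form has this shape (values are illustrative;
// the digest and reference are placeholders):
//
//	{
//		"critical": {
//			"type": "atomic container signature",
//			"image": {"docker-manifest-digest": "sha256:..."},
//			"identity": {"docker-reference": "registry.example.com/myteam/myimage:latest"}
//		},
//		"optional": {
//			"creator": "atomic <version>",
//			"timestamp": 1471035366
//		}
//	}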
// Compile-time check that privateSignature implements json.Unmarshaler
var _ json.Unmarshaler = (*privateSignature)(nil)

// UnmarshalJSON implements the json.Unmarshaler interface
func (s *privateSignature) UnmarshalJSON(data []byte) error {
	err := s.strictUnmarshalJSON(data)
	if err != nil {
		if _, ok := err.(jsonFormatError); ok {
			err = InvalidSignatureError{msg: err.Error()}
		}
	}
	return err
}

// strictUnmarshalJSON is UnmarshalJSON, except that it may return the internal jsonFormatError error type.
// Splitting it into a separate function allows us to do the jsonFormatError → InvalidSignatureError conversion in a single place, the caller.
func (s *privateSignature) strictUnmarshalJSON(data []byte) error {
	var untyped interface{}
	if err := json.Unmarshal(data, &untyped); err != nil {
		return err
	}
	o, ok := untyped.(map[string]interface{})
	if !ok {
		return InvalidSignatureError{msg: "Invalid signature format"}
	}
	if err := validateExactMapKeys(o, "critical", "optional"); err != nil {
		return err
	}

	c, err := mapField(o, "critical")
	if err != nil {
		return err
	}
	if err := validateExactMapKeys(c, "type", "image", "identity"); err != nil {
		return err
	}

	optional, err := mapField(o, "optional")
	if err != nil {
		return err
	}
	_ = optional // We don't use anything from here for now.

	t, err := stringField(c, "type")
	if err != nil {
		return err
	}
	if t != signatureType {
		return InvalidSignatureError{msg: fmt.Sprintf("Unrecognized signature type %s", t)}
	}

	image, err := mapField(c, "image")
	if err != nil {
		return err
	}
	if err := validateExactMapKeys(image, "docker-manifest-digest"); err != nil {
		return err
	}
	digestString, err := stringField(image, "docker-manifest-digest")
	if err != nil {
		return err
	}
	s.DockerManifestDigest = digest.Digest(digestString)

	identity, err := mapField(c, "identity")
	if err != nil {
		return err
	}
	if err := validateExactMapKeys(identity, "docker-reference"); err != nil {
		return err
	}
	reference, err := stringField(identity, "docker-reference")
	if err != nil {
		return err
	}
	s.DockerReference = reference

	return nil
}

// sign formats the signature and returns a blob signed using mech and keyIdentity
func (s privateSignature) sign(mech SigningMechanism, keyIdentity string) ([]byte, error) {
	json, err := json.Marshal(s)
	if err != nil {
		return nil, err
	}

	return mech.Sign(json, keyIdentity)
}

// signatureAcceptanceRules specifies how to decide whether an untrusted signature is acceptable.
// We centralize the actual parsing and data extraction in verifyAndExtractSignature; this supplies
// the policy. We use an object instead of supplying func parameters to verifyAndExtractSignature
// because all of the functions have the same type, so there is a risk of exchanging the functions;
// named members of this struct are more explicit.
type signatureAcceptanceRules struct {
	validateKeyIdentity                func(string) error
	validateSignedDockerReference      func(string) error
	validateSignedDockerManifestDigest func(digest.Digest) error
}

// verifyAndExtractSignature verifies that unverifiedSignature has been signed, and that its principal components
// match expected values, both as specified by rules, and returns it
func verifyAndExtractSignature(mech SigningMechanism, unverifiedSignature []byte, rules signatureAcceptanceRules) (*Signature, error) {
	signed, keyIdentity, err := mech.Verify(unverifiedSignature)
	if err != nil {
		return nil, err
	}
	if err := rules.validateKeyIdentity(keyIdentity); err != nil {
		return nil, err
	}

	var unmatchedSignature privateSignature
	if err := json.Unmarshal(signed, &unmatchedSignature); err != nil {
		return nil, InvalidSignatureError{msg: err.Error()}
	}
	if err := rules.validateSignedDockerManifestDigest(unmatchedSignature.DockerManifestDigest); err != nil {
		return nil, err
	}
	if err := rules.validateSignedDockerReference(unmatchedSignature.DockerReference); err != nil {
		return nil, err
	}
	signature := unmatchedSignature.Signature // Policy OK.
	return &signature, nil
}
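// As a usage sketch (hypothetical; real callers derive the checks from
// policy, as prSignedBy does), a caller that only pins the manifest digest
// could pass:
//
//	rules := signatureAcceptanceRules{
//		validateKeyIdentity:           func(string) error { return nil }, // accept any imported key
//		validateSignedDockerReference: func(string) error { return nil }, // identity checked elsewhere
//		validateSignedDockerManifestDigest: func(d digest.Digest) error {
//			if d != expectedDigest { // expectedDigest is assumed known to the caller
//				return InvalidSignatureError{msg: "digest mismatch"}
//			}
//			return nil
//		},
//	}
//	sig, err := verifyAndExtractSignature(mech, blob, rules)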
570
vendor/github.com/containers/image/storage/storage_image.go
generated
vendored
Normal file
@ -0,0 +1,570 @@
package storage

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"time"

	"github.com/Sirupsen/logrus"
	"github.com/containers/image/image"
	"github.com/containers/image/manifest"
	"github.com/containers/image/types"
	"github.com/containers/storage/pkg/archive"
	"github.com/containers/storage/pkg/ioutils"
	"github.com/containers/storage/storage"
	ddigest "github.com/docker/distribution/digest"
)

var (
	// ErrBlobDigestMismatch is returned when PutBlob() is given a blob
	// with a digest-based name that doesn't match its contents.
	ErrBlobDigestMismatch = errors.New("blob digest mismatch")
	// ErrBlobSizeMismatch is returned when PutBlob() is given a blob
	// with an expected size that doesn't match the reader.
	ErrBlobSizeMismatch = errors.New("blob size mismatch")
	// ErrNoManifestLists is returned when GetTargetManifest() is
	// called.
	ErrNoManifestLists = errors.New("manifest lists are not supported by this transport")
	// ErrNoSuchImage is returned when we attempt to access an image which
	// doesn't exist in the storage area.
	ErrNoSuchImage = storage.ErrNotAnImage
)

type storageImageSource struct {
	imageRef       storageReference
	Tag            string                      `json:"tag,omitempty"`
	Created        time.Time                   `json:"created-time,omitempty"`
	ID             string                      `json:"id"`
	BlobList       []types.BlobInfo            `json:"blob-list,omitempty"` // Ordered list of every blob the image has been told to handle
	Layers         map[ddigest.Digest][]string `json:"layers,omitempty"`    // Map from digests of blobs to lists of layer IDs
	LayerPosition  map[ddigest.Digest]int      `json:"-"`                   // Where we are in reading a blob's layers
	SignatureSizes []int                       `json:"signature-sizes"`     // List of sizes of each signature slice
}

type storageImageDestination struct {
	imageRef       storageReference
	Tag            string                      `json:"tag,omitempty"`
	Created        time.Time                   `json:"created-time,omitempty"`
	ID             string                      `json:"id"`
	BlobList       []types.BlobInfo            `json:"blob-list,omitempty"` // Ordered list of every blob the image has been told to handle
	Layers         map[ddigest.Digest][]string `json:"layers,omitempty"`    // Map from digests of blobs to lists of layer IDs
	BlobData       map[ddigest.Digest][]byte   `json:"-"`                   // Map from names of blobs that aren't layers to contents, temporary
	Manifest       []byte                      `json:"-"`                   // Manifest contents, temporary
	Signatures     []byte                      `json:"-"`                   // Signature contents, temporary
	SignatureSizes []int                       `json:"signature-sizes"`     // List of sizes of each signature slice
}

type storageLayerMetadata struct {
	Digest         string `json:"digest,omitempty"`
	Size           int64  `json:"size"`
	CompressedSize int64  `json:"compressed-size,omitempty"`
}

type storageImage struct {
	types.Image
	size int64
}

// newImageSource sets us up to read out an image, which needs to already exist.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
	id := imageRef.resolveID()
	if id == "" {
		logrus.Errorf("no image matching reference %q found", imageRef.StringWithinTransport())
		return nil, ErrNoSuchImage
	}
	img, err := imageRef.transport.store.GetImage(id)
	if err != nil {
		return nil, fmt.Errorf("error reading image %q: %v", id, err)
	}
	image := &storageImageSource{
		imageRef:       imageRef,
		Created:        time.Now(),
		ID:             img.ID,
		BlobList:       []types.BlobInfo{},
		Layers:         make(map[ddigest.Digest][]string),
		LayerPosition:  make(map[ddigest.Digest]int),
		SignatureSizes: []int{},
	}
	if err := json.Unmarshal([]byte(img.Metadata), image); err != nil {
		return nil, fmt.Errorf("error decoding metadata for source image: %v", err)
	}
	return image, nil
}

// newImageDestination sets us up to write a new image.
func newImageDestination(imageRef storageReference) (*storageImageDestination, error) {
	image := &storageImageDestination{
		imageRef:       imageRef,
		Tag:            imageRef.reference,
		Created:        time.Now(),
		ID:             imageRef.id,
		BlobList:       []types.BlobInfo{},
		Layers:         make(map[ddigest.Digest][]string),
		BlobData:       make(map[ddigest.Digest][]byte),
		SignatureSizes: []int{},
	}
	return image, nil
}

func (s storageImageSource) Reference() types.ImageReference {
	return s.imageRef
}

func (s storageImageDestination) Reference() types.ImageReference {
	return s.imageRef
}

func (s storageImageSource) Close() {
}

func (s storageImageDestination) Close() {
}

func (s storageImageDestination) ShouldCompressLayers() bool {
	// We ultimately have to decompress layers to populate trees on disk,
	// so callers shouldn't bother compressing them before handing them to
	// us, if they're not already compressed.
	return false
}
// PutBlob is used to both store filesystem layers and binary data that is part
// of the image. Filesystem layers are assumed to be imported in order, as
// that is required by some of the underlying storage drivers.
func (s *storageImageDestination) PutBlob(stream io.Reader, blobinfo types.BlobInfo) (types.BlobInfo, error) {
	blobSize := int64(-1)
	digest := blobinfo.Digest
	errorBlobInfo := types.BlobInfo{
		Digest: "",
		Size:   -1,
	}
	// Try to read an initial snippet of the blob.
	header := make([]byte, 10240)
	n, err := stream.Read(header)
	if err != nil && err != io.EOF {
		return errorBlobInfo, err
	}
	// Set up to read the whole blob (the initial snippet, plus the rest)
	// while digesting it with either the default, or the passed-in digest,
	// if one was specified.
	hasher := ddigest.Canonical.New()
	if digest.Validate() == nil {
		if a := digest.Algorithm(); a.Available() {
			hasher = a.New()
		}
	}
	hash := ""
	counter := ioutils.NewWriteCounter(hasher.Hash())
	defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), stream)
	multi := io.TeeReader(defragmented, counter)
	if (n > 0) && archive.IsArchive(header[:n]) {
		// It's a filesystem layer. If it's not the first one in the
		// image, we assume that the most recently added layer is its
		// parent.
		parentLayer := ""
		for _, blob := range s.BlobList {
			if layerList, ok := s.Layers[blob.Digest]; ok {
				parentLayer = layerList[len(layerList)-1]
			}
		}
		// If we have an expected content digest, generate a layer ID
		// based on the parent's ID and the expected content digest.
		id := ""
		if digest.Validate() == nil {
			id = ddigest.Canonical.FromBytes([]byte(parentLayer + "+" + digest.String())).Hex()
		}
		// Attempt to create the identified layer and import its contents.
		layer, uncompressedSize, err := s.imageRef.transport.store.PutLayer(id, parentLayer, nil, "", true, multi)
		if err != nil && err != storage.ErrDuplicateID {
			logrus.Debugf("error importing layer blob %q as %q: %v", blobinfo.Digest, id, err)
			return errorBlobInfo, err
		}
		if err == storage.ErrDuplicateID {
			// We specified an ID, and there's already a layer with
			// the same ID. Drain the input so that we can look at
			// its length and digest.
			_, err := io.Copy(ioutil.Discard, multi)
			if err != nil && err != io.EOF {
				logrus.Debugf("error digesting layer blob %q: %v", blobinfo.Digest, err)
				return errorBlobInfo, err
			}
			hash = hasher.Digest().String()
		} else {
			// Applied the layer with the specified ID. Note the
			// size info and computed digest.
			hash = hasher.Digest().String()
			layerMeta := storageLayerMetadata{
				Digest:         hash,
				CompressedSize: counter.Count,
				Size:           uncompressedSize,
			}
			if metadata, err := json.Marshal(&layerMeta); len(metadata) != 0 && err == nil {
				s.imageRef.transport.store.SetMetadata(layer.ID, string(metadata))
			}
			// Hang on to the new layer's ID.
			id = layer.ID
		}
		blobSize = counter.Count
		// Check if the size looks right.
		if blobinfo.Size >= 0 && blobSize != blobinfo.Size {
			logrus.Debugf("blob %q size is %d, not %d, rejecting", blobinfo.Digest, blobSize, blobinfo.Size)
			if layer != nil {
				// Something's wrong; delete the newly-created layer.
				s.imageRef.transport.store.DeleteLayer(layer.ID)
			}
			return errorBlobInfo, ErrBlobSizeMismatch
		}
		// If the content digest was specified, verify it.
		if digest.Validate() == nil && digest.String() != hash {
			logrus.Debugf("blob %q digests to %q, rejecting", blobinfo.Digest, hash)
			if layer != nil {
				// Something's wrong; delete the newly-created layer.
				s.imageRef.transport.store.DeleteLayer(layer.ID)
			}
			return errorBlobInfo, ErrBlobDigestMismatch
		}
		// If we didn't get a digest, construct one.
		if digest == "" {
			digest = ddigest.Digest(hash)
		}
		// Record that this layer blob is a layer, and the layer ID it
		// ended up having. This is a list, in case the same blob is
		// being applied more than once.
		s.Layers[digest] = append(s.Layers[digest], id)
		s.BlobList = append(s.BlobList, types.BlobInfo{Digest: digest, Size: blobSize})
		if layer != nil {
			logrus.Debugf("blob %q imported as a filesystem layer %q", blobinfo.Digest, id)
		} else {
			logrus.Debugf("layer blob %q already present as layer %q", blobinfo.Digest, id)
		}
	} else {
		// It's just data. Finish scanning it in, check that our
		// computed digest matches the passed-in digest, and store it,
		// but leave it out of the blob-to-layer-ID map so that we can
		// tell that it's not a layer.
		blob, err := ioutil.ReadAll(multi)
		if err != nil && err != io.EOF {
			return errorBlobInfo, err
		}
		blobSize = int64(len(blob))
		hash = hasher.Digest().String()
		if blobinfo.Size >= 0 && blobSize != blobinfo.Size {
			logrus.Debugf("blob %q size is %d, not %d, rejecting", blobinfo.Digest, blobSize, blobinfo.Size)
			return errorBlobInfo, ErrBlobSizeMismatch
		}
		// If we were given a digest, verify that the content matches
		// it.
		if digest.Validate() == nil && digest.String() != hash {
			logrus.Debugf("blob %q digests to %q, rejecting", blobinfo.Digest, hash)
			return errorBlobInfo, ErrBlobDigestMismatch
		}
		// If we didn't get a digest, construct one.
		if digest == "" {
			digest = ddigest.Digest(hash)
		}
		// Save the blob for when we Commit().
		s.BlobData[digest] = blob
		s.BlobList = append(s.BlobList, types.BlobInfo{Digest: digest, Size: blobSize})
		logrus.Debugf("blob %q imported as opaque data %q", blobinfo.Digest, digest)
	}
	return types.BlobInfo{
		Digest: digest,
		Size:   blobSize,
	}, nil
}
func (s *storageImageDestination) HasBlob(blobinfo types.BlobInfo) (bool, int64, error) {
	if blobinfo.Digest == "" {
		return false, -1, fmt.Errorf(`Can not check for a blob with unknown digest`)
	}
	for _, blob := range s.BlobList {
		if blob.Digest == blobinfo.Digest {
			return true, blob.Size, nil
		}
	}
	return false, -1, types.ErrBlobNotFound
}

func (s *storageImageDestination) ReapplyBlob(blobinfo types.BlobInfo) (types.BlobInfo, error) {
	err := blobinfo.Digest.Validate()
	if err != nil {
		return types.BlobInfo{}, err
	}
	if layerList, ok := s.Layers[blobinfo.Digest]; !ok || len(layerList) < 1 {
		b, err := s.imageRef.transport.store.GetImageBigData(s.ID, blobinfo.Digest.String())
		if err != nil {
			return types.BlobInfo{}, err
		}
		return types.BlobInfo{Digest: blobinfo.Digest, Size: int64(len(b))}, nil
	}
	layerList := s.Layers[blobinfo.Digest]
	rc, _, err := diffLayer(s.imageRef.transport.store, layerList[len(layerList)-1])
	if err != nil {
		return types.BlobInfo{}, err
	}
	return s.PutBlob(rc, blobinfo)
}
func (s *storageImageDestination) Commit() error {
	// Create the image record.
	lastLayer := ""
	for _, blob := range s.BlobList {
		if layerList, ok := s.Layers[blob.Digest]; ok {
			lastLayer = layerList[len(layerList)-1]
		}
	}
	img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
	if err != nil {
		logrus.Debugf("error creating image: %q", err)
		return err
	}
	logrus.Debugf("created new image ID %q", img.ID)
	s.ID = img.ID
	if s.Tag != "" {
		// We have a name to set, so move the name to this image.
		if err := s.imageRef.transport.store.SetNames(img.ID, []string{s.Tag}); err != nil {
			if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
				logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
			}
			logrus.Debugf("error setting names on image %q: %v", img.ID, err)
			return err
		}
		logrus.Debugf("set name of image %q to %q", img.ID, s.Tag)
	}
	// Save the data blobs to disk, and drop their contents from memory.
	keys := []ddigest.Digest{}
	for k, v := range s.BlobData {
		if err := s.imageRef.transport.store.SetImageBigData(img.ID, k.String(), v); err != nil {
			if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
				logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
			}
			logrus.Debugf("error saving big data %q for image %q: %v", k, img.ID, err)
			return err
		}
		keys = append(keys, k)
	}
	for _, key := range keys {
		delete(s.BlobData, key)
	}
	// Save the manifest, if we have one.
	if err := s.imageRef.transport.store.SetImageBigData(s.ID, "manifest", s.Manifest); err != nil {
		if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
			logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
		}
		logrus.Debugf("error saving manifest for image %q: %v", img.ID, err)
		return err
	}
	// Save the signatures, if we have any.
	if err := s.imageRef.transport.store.SetImageBigData(s.ID, "signatures", s.Signatures); err != nil {
		if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
			logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
		}
		logrus.Debugf("error saving signatures for image %q: %v", img.ID, err)
		return err
	}
	// Save our metadata.
	metadata, err := json.Marshal(s)
	if err != nil {
		if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
			logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
		}
		logrus.Debugf("error encoding metadata for image %q: %v", img.ID, err)
		return err
	}
	if len(metadata) != 0 {
		if err = s.imageRef.transport.store.SetMetadata(s.ID, string(metadata)); err != nil {
			if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
				logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
			}
			logrus.Debugf("error saving metadata for image %q: %v", img.ID, err)
			return err
		}
		logrus.Debugf("saved image metadata %q", string(metadata))
	}
	return nil
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *storageImageDestination) PutManifest(manifest []byte) error {
|
||||
s.Manifest = make([]byte, len(manifest))
|
||||
copy(s.Manifest, manifest)
|
||||
return nil
|
||||
}
|
||||
|
||||
// SupportsSignatures returns an error if we can't expect GetSignatures() to
|
||||
// return data that was previously supplied to PutSignatures().
|
||||
func (s *storageImageDestination) SupportsSignatures() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
|
||||
// uploaded to the image destination, true otherwise.
|
||||
func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
|
||||
sizes := []int{}
|
||||
sigblob := []byte{}
|
||||
for _, sig := range signatures {
|
||||
sizes = append(sizes, len(sig))
|
||||
newblob := make([]byte, len(sigblob)+len(sig))
|
||||
copy(newblob, sigblob)
|
||||
copy(newblob[len(sigblob):], sig)
|
||||
sigblob = newblob
|
||||
}
|
||||
s.Signatures = sigblob
|
||||
s.SignatureSizes = sizes
|
||||
return nil
|
||||
}
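
PutSignatures above and GetSignatures further down share a simple framing scheme: the individual signatures are concatenated into one blob, and SignatureSizes records each signature's length so the blob can be split apart again. A minimal self-contained sketch of that round trip (plain Go; the helper names here are illustrative, not part of the library):

package main

import (
    "bytes"
    "fmt"
)

// joinSignatures mirrors PutSignatures: concatenate the signatures into one
// blob while remembering each one's length.
func joinSignatures(signatures [][]byte) (blob []byte, sizes []int) {
    for _, sig := range signatures {
        sizes = append(sizes, len(sig))
        blob = append(blob, sig...)
    }
    return blob, sizes
}

// splitSignatures mirrors GetSignatures: slice the blob back apart using the
// recorded sizes, and complain about any trailing bytes.
func splitSignatures(blob []byte, sizes []int) ([][]byte, error) {
    sigs := [][]byte{}
    offset := 0
    for _, length := range sizes {
        sigs = append(sigs, blob[offset:offset+length])
        offset += length
    }
    if offset != len(blob) {
        return nil, fmt.Errorf("signatures data contained %d extra bytes", len(blob)-offset)
    }
    return sigs, nil
}

func main() {
    in := [][]byte{[]byte("sig-one"), []byte("sig-two")}
    blob, sizes := joinSignatures(in)
    out, err := splitSignatures(blob, sizes)
    if err != nil {
        panic(err)
    }
    fmt.Println(bytes.Equal(in[0], out[0]) && bytes.Equal(in[1], out[1])) // true
}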

func (s *storageImageSource) GetBlob(info types.BlobInfo) (rc io.ReadCloser, n int64, err error) {
    rc, n, _, err = s.getBlobAndLayerID(info)
    return rc, n, err
}

func (s *storageImageSource) getBlobAndLayerID(info types.BlobInfo) (rc io.ReadCloser, n int64, layerID string, err error) {
    err = info.Digest.Validate()
    if err != nil {
        return nil, -1, "", err
    }
    if layerList, ok := s.Layers[info.Digest]; !ok || len(layerList) < 1 {
        b, err := s.imageRef.transport.store.GetImageBigData(s.ID, info.Digest.String())
        if err != nil {
            return nil, -1, "", err
        }
        r := bytes.NewReader(b)
        logrus.Debugf("exporting opaque data as blob %q", info.Digest.String())
        return ioutil.NopCloser(r), int64(r.Len()), "", nil
    }
    // If the blob was "put" more than once, we have multiple layer IDs
    // which should all produce the same diff. For the sake of tests that
    // want to make sure we created different layers each time the blob was
    // "put", though, cycle through the layers.
    layerList := s.Layers[info.Digest]
    position, ok := s.LayerPosition[info.Digest]
    if !ok {
        position = 0
    }
    s.LayerPosition[info.Digest] = (position + 1) % len(layerList)
    logrus.Debugf("exporting filesystem layer %q for blob %q", layerList[position], info.Digest)
    rc, n, err = diffLayer(s.imageRef.transport.store, layerList[position])
    return rc, n, layerList[position], err
}
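
The LayerPosition map above is just a per-digest round-robin cursor over the candidate layers. A tiny standalone sketch of the same cycling, with hypothetical layer IDs:

package main

import "fmt"

func main() {
    layers := []string{"layer-a", "layer-b", "layer-c"} // same digest, "put" three times
    position := 0                                       // like s.LayerPosition[digest]; zero if unseen
    for i := 0; i < 5; i++ {
        fmt.Println(layers[position]) // layer-a, layer-b, layer-c, layer-a, layer-b
        position = (position + 1) % len(layers)
    }
}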

func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64, err error) {
    layer, err := store.GetLayer(layerID)
    if err != nil {
        return nil, -1, err
    }
    layerMeta := storageLayerMetadata{
        CompressedSize: -1,
    }
    if layer.Metadata != "" {
        if err := json.Unmarshal([]byte(layer.Metadata), &layerMeta); err != nil {
            return nil, -1, fmt.Errorf("error decoding metadata for layer %q: %v", layerID, err)
        }
    }
    if layerMeta.CompressedSize <= 0 {
        n = -1
    } else {
        n = layerMeta.CompressedSize
    }
    diff, err := store.Diff("", layer.ID)
    if err != nil {
        return nil, -1, err
    }
    return diff, n, nil
}

func (s *storageImageSource) GetManifest() (manifestBlob []byte, MIMEType string, err error) {
    manifestBlob, err = s.imageRef.transport.store.GetImageBigData(s.ID, "manifest")
    return manifestBlob, manifest.GuessMIMEType(manifestBlob), err
}

func (s *storageImageSource) GetTargetManifest(digest ddigest.Digest) (manifestBlob []byte, MIMEType string, err error) {
    return nil, "", ErrNoManifestLists
}

func (s *storageImageSource) GetSignatures() (signatures [][]byte, err error) {
    var offset int
    signature, err := s.imageRef.transport.store.GetImageBigData(s.ID, "signatures")
    if err != nil {
        return nil, err
    }
    sigslice := [][]byte{}
    for _, length := range s.SignatureSizes {
        sigslice = append(sigslice, signature[offset:offset+length])
        offset += length
    }
    if offset != len(signature) {
        // Report extra bytes in the stored blob ("signature"), not in the
        // still-nil named return value ("signatures").
        return nil, fmt.Errorf("signatures data contained %d extra bytes", len(signature)-offset)
    }
    return sigslice, nil
}

func (s *storageImageSource) getSize() (int64, error) {
    var sum int64
    names, err := s.imageRef.transport.store.ListImageBigData(s.imageRef.id)
    if err != nil {
        return -1, fmt.Errorf("error reading image %q: %v", s.imageRef.id, err)
    }
    for _, name := range names {
        bigSize, err := s.imageRef.transport.store.GetImageBigDataSize(s.imageRef.id, name)
        if err != nil {
            return -1, fmt.Errorf("error reading data blob size %q for %q: %v", name, s.imageRef.id, err)
        }
        sum += bigSize
    }
    for _, sigSize := range s.SignatureSizes {
        sum += int64(sigSize)
    }
    for _, layerList := range s.Layers {
        for _, layerID := range layerList {
            layer, err := s.imageRef.transport.store.GetLayer(layerID)
            if err != nil {
                return -1, err
            }
            layerMeta := storageLayerMetadata{
                Size: -1,
            }
            if layer.Metadata != "" {
                if err := json.Unmarshal([]byte(layer.Metadata), &layerMeta); err != nil {
                    return -1, fmt.Errorf("error decoding metadata for layer %q: %v", layerID, err)
                }
            }
            if layerMeta.Size < 0 {
                return -1, fmt.Errorf("size for layer %q is unknown, failing getSize()", layerID)
            }
            sum += layerMeta.Size
        }
    }
    return sum, nil
}

func (s *storageImage) Size() (int64, error) {
    return s.size, nil
}

// newImage creates an image that also knows its size
func newImage(s storageReference) (types.Image, error) {
    src, err := newImageSource(s)
    if err != nil {
        return nil, err
    }
    img, err := image.FromSource(src)
    if err != nil {
        return nil, err
    }
    size, err := src.getSize()
    if err != nil {
        return nil, err
    }
    return &storageImage{Image: img, size: size}, nil
}
127 vendor/github.com/containers/image/storage/storage_reference.go generated vendored Normal file
@@ -0,0 +1,127 @@
package storage

import (
    "strings"

    "github.com/Sirupsen/logrus"
    "github.com/containers/image/docker/reference"
    "github.com/containers/image/types"
)

// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
// value hex-encoded into a 64-character string, and a reference to a Store
// where an image is, or would be, kept.
type storageReference struct {
    transport storageTransport
    reference string
    id        string
    name      reference.Named
}

func newReference(transport storageTransport, reference, id string, name reference.Named) *storageReference {
    // We take a copy of the transport, which contains a pointer to the
    // store that it used for resolving this reference, so that the
    // transport that we'll return from Transport() won't be affected by
    // further calls to the original transport's SetStore() method.
    return &storageReference{
        transport: transport,
        reference: reference,
        id:        id,
        name:      name,
    }
}

// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID.
func (s *storageReference) resolveID() string {
    if s.id == "" {
        image, err := s.transport.store.GetImage(s.reference)
        if image != nil && err == nil {
            s.id = image.ID
        }
    }
    return s.id
}

// Return a Transport object that defaults to using the same store that we used
// to build this reference object.
func (s storageReference) Transport() types.ImageTransport {
    return &storageTransport{
        store: s.transport.store,
    }
}

// Return a name with a tag, if we have a name to base them on.
func (s storageReference) DockerReference() reference.Named {
    return s.name
}

// Return a name with a tag, prefixed with the graph root and driver name, to
// disambiguate between images which may be present in multiple stores and
// share only their names.
func (s storageReference) StringWithinTransport() string {
    storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
    if s.name == nil {
        return storeSpec + "@" + s.id
    }
    if s.id == "" {
        return storeSpec + s.reference
    }
    return storeSpec + s.reference + "@" + s.id
}

func (s storageReference) PolicyConfigurationIdentity() string {
    return s.StringWithinTransport()
}

// Also accept policy that's tied to the combination of the graph root and
// driver name, to apply to all images stored in the Store, and to just the
// graph root, in case we're using multiple drivers in the same directory for
// some reason.
func (s storageReference) PolicyConfigurationNamespaces() []string {
    storeSpec := "[" + s.transport.store.GetGraphDriverName() + "@" + s.transport.store.GetGraphRoot() + "]"
    driverlessStoreSpec := "[" + s.transport.store.GetGraphRoot() + "]"
    namespaces := []string{}
    if s.name != nil {
        if s.id != "" {
            // The reference without the ID is also a valid namespace.
            namespaces = append(namespaces, storeSpec+s.reference)
        }
        components := strings.Split(s.name.FullName(), "/")
        for len(components) > 0 {
            namespaces = append(namespaces, storeSpec+strings.Join(components, "/"))
            components = components[:len(components)-1]
        }
    }
    namespaces = append(namespaces, storeSpec)
    namespaces = append(namespaces, driverlessStoreSpec)
    return namespaces
}
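
For a concrete feel of the namespace list this produces, here is a standalone sketch with illustrative values only (a hypothetical overlay store at /var/lib/storage and the name docker.io/library/busybox:latest); the real values come from the Store and the parsed reference:

package main

import (
    "fmt"
    "strings"
)

func main() {
    storeSpec := "[overlay@/var/lib/storage]" // hypothetical "[driver@graphroot]"
    driverlessStoreSpec := "[/var/lib/storage]"
    ref := "docker.io/library/busybox:latest"
    fullName := "docker.io/library/busybox"
    id := "0123abcd" // pretend the reference also carries an image ID

    namespaces := []string{}
    if id != "" {
        // The reference without the ID is also a valid namespace.
        namespaces = append(namespaces, storeSpec+ref)
    }
    // Strip one path component at a time, broadest namespace last.
    components := strings.Split(fullName, "/")
    for len(components) > 0 {
        namespaces = append(namespaces, storeSpec+strings.Join(components, "/"))
        components = components[:len(components)-1]
    }
    namespaces = append(namespaces, storeSpec, driverlessStoreSpec)
    for _, ns := range namespaces {
        fmt.Println(ns)
    }
}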

func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
    return newImage(s)
}

func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
    id := s.resolveID()
    if id == "" {
        logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
        return ErrNoSuchImage
    }
    layers, err := s.transport.store.DeleteImage(id, true)
    if err == nil {
        logrus.Debugf("deleted image %q", id)
        for _, layer := range layers {
            logrus.Debugf("deleted layer %q", layer)
        }
    }
    return err
}

func (s storageReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
    return newImageSource(s)
}

func (s storageReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
    return newImageDestination(s)
}
285 vendor/github.com/containers/image/storage/storage_transport.go generated vendored Normal file
@@ -0,0 +1,285 @@
package storage

import (
    "errors"
    "path/filepath"
    "regexp"
    "strings"

    "github.com/Sirupsen/logrus"
    "github.com/containers/image/docker/reference"
    "github.com/containers/image/types"
    "github.com/containers/storage/storage"
    "github.com/docker/distribution/digest"
    ddigest "github.com/docker/distribution/digest"
)

var (
    // Transport is an ImageTransport that uses either a default
    // storage.Store or one that it's explicitly told to use.
    Transport StoreTransport = &storageTransport{}
    // ErrInvalidReference is returned when ParseReference() is passed an
    // empty reference.
    ErrInvalidReference = errors.New("invalid reference")
    // ErrPathNotAbsolute is returned when a graph root is not an absolute
    // path name.
    ErrPathNotAbsolute = errors.New("path name is not absolute")
    idRegexp           = regexp.MustCompile("^(sha256:)?([0-9a-fA-F]{64})$")
)

// StoreTransport is an ImageTransport that uses a storage.Store to parse
// references, either its own default or one that it's told to use.
type StoreTransport interface {
    types.ImageTransport
    // SetStore sets the default store for this transport.
    SetStore(storage.Store)
    // GetImage retrieves the image from the transport's store that's named
    // by the reference.
    GetImage(types.ImageReference) (*storage.Image, error)
    // GetStoreImage retrieves the image from a specified store that's named
    // by the reference.
    GetStoreImage(storage.Store, types.ImageReference) (*storage.Image, error)
    // ParseStoreReference parses a reference, overriding any store
    // specification that it may contain.
    ParseStoreReference(store storage.Store, reference string) (*storageReference, error)
}

type storageTransport struct {
    store storage.Store
}

func (s *storageTransport) Name() string {
    // Still haven't really settled on a name.
    return "containers-storage"
}

// SetStore sets the Store object which the Transport will use for parsing
// references when information about a Store is not directly specified as part
// of the reference. If one is not set, the library will attempt to initialize
// one with default settings when a reference needs to be parsed. Calling
// SetStore does not affect previously parsed references.
func (s *storageTransport) SetStore(store storage.Store) {
    s.store = store
}

// ParseStoreReference takes a name or an ID, tries to figure out which it is
// relative to the given store, and returns it in a reference object.
func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (*storageReference, error) {
    var name reference.Named
    var sum digest.Digest
    var err error
    if ref == "" {
        return nil, ErrInvalidReference
    }
    if ref[0] == '[' {
        // Ignore the store specifier.
        closeIndex := strings.IndexRune(ref, ']')
        if closeIndex < 1 {
            return nil, ErrInvalidReference
        }
        ref = ref[closeIndex+1:]
    }
    refInfo := strings.SplitN(ref, "@", 2)
    if len(refInfo) == 1 {
        // A name.
        name, err = reference.ParseNamed(refInfo[0])
        if err != nil {
            return nil, err
        }
    } else if len(refInfo) == 2 {
        // An ID, possibly preceded by a name.
        if refInfo[0] != "" {
            name, err = reference.ParseNamed(refInfo[0])
            if err != nil {
                return nil, err
            }
        }
        sum, err = digest.ParseDigest("sha256:" + refInfo[1])
        if err != nil {
            return nil, err
        }
    } else { // Coverage: len(refInfo) is always 1 or 2
        // Anything else: store specified in a form we don't
        // recognize.
        return nil, ErrInvalidReference
    }
    storeSpec := "[" + store.GetGraphDriverName() + "@" + store.GetGraphRoot() + "]"
    id := ""
    if sum.Validate() == nil {
        id = sum.Hex()
    }
    refname := ""
    if name != nil {
        name = reference.WithDefaultTag(name)
        refname = verboseName(name)
    }
    if refname == "" {
        logrus.Debugf("parsed reference into %q", storeSpec+"@"+id)
    } else if id == "" {
        logrus.Debugf("parsed reference into %q", storeSpec+refname)
    } else {
        logrus.Debugf("parsed reference into %q", storeSpec+refname+"@"+id)
    }
    return newReference(storageTransport{store: store}, refname, id, name), nil
}
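
ParseStoreReference thus accepts a bare name ("busybox:latest"), a bare ID preceded by "@", or a name@ID pair, optionally prefixed with a "[driver@graphroot]" store specifier that it strips and ignores. A standalone sketch of just the splitting step, on illustrative inputs:

package main

import (
    "fmt"
    "strings"
)

// split mimics the specifier-stripping and "@" split above; it does none of
// the name/digest validation the real function performs.
func split(ref string) (name, id string) {
    if ref[0] == '[' { // drop any "[driver@graphroot]" store specifier
        ref = ref[strings.IndexRune(ref, ']')+1:]
    }
    parts := strings.SplitN(ref, "@", 2)
    name = parts[0]
    if len(parts) == 2 {
        id = parts[1]
    }
    return name, id
}

func main() {
    for _, ref := range []string{
        "busybox:latest",
        "@0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
        "[overlay@/var/lib/storage]busybox:latest@0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
    } {
        name, id := split(ref)
        fmt.Printf("name=%q id=%q\n", name, id)
    }
}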

func (s *storageTransport) GetStore() (storage.Store, error) {
    // Return the transport's previously-set store. If we don't have one
    // of those, initialize one now.
    if s.store == nil {
        store, err := storage.GetStore(storage.DefaultStoreOptions)
        if err != nil {
            return nil, err
        }
        s.store = store
    }
    return s.store, nil
}

// ParseReference takes a name and/or an ID ("_name_"/"@_id_"/"_name_@_id_"),
// possibly prefixed with a store specifier in the form "[_graphroot_]" or
// "[_driver_@_graphroot_]", tries to figure out which it is, and returns it in
// a reference object. If the _graphroot_ is a location other than the default,
// it needs to have been previously opened using storage.GetStore(), so that it
// can figure out which run root goes with the graph root.
func (s *storageTransport) ParseReference(reference string) (types.ImageReference, error) {
    store, err := s.GetStore()
    if err != nil {
        return nil, err
    }
    // Check if there's a store location prefix. If there is, then it
    // needs to match a store that was previously initialized using
    // storage.GetStore(), or be enough to let the storage library fill out
    // the rest using knowledge that it has from elsewhere.
    if reference[0] == '[' {
        closeIndex := strings.IndexRune(reference, ']')
        if closeIndex < 1 {
            return nil, ErrInvalidReference
        }
        storeSpec := reference[1:closeIndex]
        reference = reference[closeIndex+1:]
        storeInfo := strings.SplitN(storeSpec, "@", 2)
        if len(storeInfo) == 1 && storeInfo[0] != "" {
            // One component: the graph root.
            if !filepath.IsAbs(storeInfo[0]) {
                return nil, ErrPathNotAbsolute
            }
            store2, err := storage.GetStore(storage.StoreOptions{
                GraphRoot: storeInfo[0],
            })
            if err != nil {
                return nil, err
            }
            store = store2
        } else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
            // Two components: the driver type and the graph root.
            if !filepath.IsAbs(storeInfo[1]) {
                return nil, ErrPathNotAbsolute
            }
            store2, err := storage.GetStore(storage.StoreOptions{
                GraphDriverName: storeInfo[0],
                GraphRoot:       storeInfo[1],
            })
            if err != nil {
                return nil, err
            }
            store = store2
        } else {
            // Anything else: store specified in a form we don't
            // recognize.
            return nil, ErrInvalidReference
        }
    }
    return s.ParseStoreReference(store, reference)
}

func (s storageTransport) GetStoreImage(store storage.Store, ref types.ImageReference) (*storage.Image, error) {
    dref := ref.DockerReference()
    if dref == nil {
        if sref, ok := ref.(*storageReference); ok {
            if sref.id != "" {
                if img, err := store.GetImage(sref.id); err == nil {
                    return img, nil
                }
            }
        }
        return nil, ErrInvalidReference
    }
    return store.GetImage(verboseName(dref))
}

func (s *storageTransport) GetImage(ref types.ImageReference) (*storage.Image, error) {
    store, err := s.GetStore()
    if err != nil {
        return nil, err
    }
    return s.GetStoreImage(store, ref)
}

func (s storageTransport) ValidatePolicyConfigurationScope(scope string) error {
    // Check that there's a store location prefix. Values we're passed are
    // expected to come from PolicyConfigurationIdentity or
    // PolicyConfigurationNamespaces, so if there's no store location,
    // something's wrong.
    if scope[0] != '[' {
        return ErrInvalidReference
    }
    // Parse the store location prefix.
    closeIndex := strings.IndexRune(scope, ']')
    if closeIndex < 1 {
        return ErrInvalidReference
    }
    storeSpec := scope[1:closeIndex]
    scope = scope[closeIndex+1:]
    storeInfo := strings.SplitN(storeSpec, "@", 2)
    if len(storeInfo) == 1 && storeInfo[0] != "" {
        // One component: the graph root.
        if !filepath.IsAbs(storeInfo[0]) {
            return ErrPathNotAbsolute
        }
    } else if len(storeInfo) == 2 && storeInfo[0] != "" && storeInfo[1] != "" {
        // Two components: the driver type and the graph root.
        if !filepath.IsAbs(storeInfo[1]) {
            return ErrPathNotAbsolute
        }
    } else {
        // Anything else: store specified in a form we don't
        // recognize.
        return ErrInvalidReference
    }
    // That might be all of it, and that's okay.
    if scope == "" {
        return nil
    }
    // But if there is anything left, it has to be a name, with or without
    // a tag, with or without an ID, since we don't return namespace values
    // that are just bare IDs.
    scopeInfo := strings.SplitN(scope, "@", 2)
    if len(scopeInfo) == 1 && scopeInfo[0] != "" {
        _, err := reference.ParseNamed(scopeInfo[0])
        if err != nil {
            return err
        }
    } else if len(scopeInfo) == 2 && scopeInfo[0] != "" && scopeInfo[1] != "" {
        _, err := reference.ParseNamed(scopeInfo[0])
        if err != nil {
            return err
        }
        _, err = ddigest.ParseDigest("sha256:" + scopeInfo[1])
        if err != nil {
            return err
        }
    } else {
        return ErrInvalidReference
    }
    return nil
}

func verboseName(name reference.Named) string {
    name = reference.WithDefaultTag(name)
    tag := ""
    if tagged, ok := name.(reference.NamedTagged); ok {
        tag = tagged.Tag()
    }
    return name.FullName() + ":" + tag
}
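
Since verboseName is unexported, the following sketch inlines its logic to show the normalization it performs; assuming the docker/distribution reference semantics used here, a short name like "busybox" should come out fully qualified and tagged:

package main

import (
    "fmt"

    // Import path as vendored in this tree; adjust GOPATH accordingly.
    "github.com/containers/image/docker/reference"
)

func main() {
    name, err := reference.ParseNamed("busybox")
    if err != nil {
        panic(err)
    }
    name = reference.WithDefaultTag(name)
    tag := ""
    if tagged, ok := name.(reference.NamedTagged); ok {
        tag = tagged.Tag()
    }
    // Expected: "docker.io/library/busybox:latest"
    fmt.Println(name.FullName() + ":" + tag)
}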
61 vendor/github.com/containers/image/transports/transports.go generated vendored Normal file
@@ -0,0 +1,61 @@
package transports

import (
    "fmt"
    "strings"

    "github.com/containers/image/directory"
    "github.com/containers/image/docker"
    "github.com/containers/image/docker/daemon"
    ociLayout "github.com/containers/image/oci/layout"
    "github.com/containers/image/openshift"
    "github.com/containers/image/storage"
    "github.com/containers/image/types"
)

// KnownTransports is a registry of known ImageTransport instances.
var KnownTransports map[string]types.ImageTransport

func init() {
    KnownTransports = make(map[string]types.ImageTransport)
    // NOTE: Make sure docs/policy.json.md is updated when adding or updating
    // a transport.
    for _, t := range []types.ImageTransport{
        directory.Transport,
        docker.Transport,
        daemon.Transport,
        ociLayout.Transport,
        openshift.Transport,
        storage.Transport,
    } {
        name := t.Name()
        if _, ok := KnownTransports[name]; ok {
            panic(fmt.Sprintf("Duplicate image transport name %s", name))
        }
        KnownTransports[name] = t
    }
}

// ParseImageName converts a URL-like image name to a types.ImageReference.
func ParseImageName(imgName string) (types.ImageReference, error) {
    parts := strings.SplitN(imgName, ":", 2)
    if len(parts) != 2 {
        return nil, fmt.Errorf(`Invalid image name "%s", expected colon-separated transport:reference`, imgName)
    }
    transport, ok := KnownTransports[parts[0]]
    if !ok {
        return nil, fmt.Errorf(`Invalid image name "%s", unknown transport "%s"`, imgName, parts[0])
    }
    return transport.ParseReference(parts[1])
}

// ImageName converts a types.ImageReference into a URL-like image name, which MUST be such that
// ParseImageName(ImageName(reference)) returns an equivalent reference.
//
// This is the generally recommended way to refer to images in the UI.
//
// NOTE: The returned string is not promised to be equal to the original input to ParseImageName;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
func ImageName(ref types.ImageReference) string {
    return ref.Transport().Name() + ":" + ref.StringWithinTransport()
}
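
A hedged usage sketch of the two functions above (import path as vendored in this tree; the exact output of ImageName depends on the transport's StringWithinTransport and is not verified here):

package main

import (
    "fmt"

    "github.com/containers/image/transports"
)

func main() {
    // "docker://busybox" splits on the first colon into transport "docker"
    // and reference "//busybox".
    ref, err := transports.ParseImageName("docker://busybox")
    if err != nil {
        panic(err)
    }
    // Round trip back to a UI-friendly name; defaults may be filled in,
    // e.g. something like "docker://docker.io/library/busybox:latest".
    fmt.Println(transports.ImageName(ref))
}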
300 vendor/github.com/containers/image/types/types.go generated vendored Normal file
@@ -0,0 +1,300 @@
package types

import (
    "errors"
    "io"
    "time"

    "github.com/containers/image/docker/reference"
    "github.com/docker/distribution/digest"
)

// ImageTransport is a top-level namespace for ways to store/load an image.
// It should generally correspond to ImageSource/ImageDestination implementations.
//
// Note that ImageTransport is based on "ways the users refer to image storage", not necessarily on the underlying physical transport.
// For example, all Docker References would be used within a single "docker" transport, regardless of whether the images are pulled over HTTP or HTTPS
// (or, even, IPv4 or IPv6).
//
// OTOH all images using the same transport should (apart from versions of the image format), be interoperable.
// For example, several different ImageTransport implementations may be based on local filesystem paths,
// but using completely different formats for the contents of that path (a single tar file, a directory containing tarballs, a fully expanded container filesystem, ...)
//
// See also transports.KnownTransports.
type ImageTransport interface {
    // Name returns the name of the transport, which must be unique among other transports.
    Name() string
    // ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
    ParseReference(reference string) (ImageReference, error)
    // ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes keys
    // (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
    // It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
    // scope passed to this function will not be "", that value is always allowed.
    ValidatePolicyConfigurationScope(scope string) error
}

// ImageReference is an abstracted way to refer to an image location, namespaced within an ImageTransport.
//
// The object should preferably be immutable after creation, with any parsing/state-dependent resolving happening
// within an ImageTransport.ParseReference() or equivalent API creating the reference object.
// That's also why the various identification/formatting methods of this type do not support returning errors.
//
// WARNING: While this design freezes the content of the reference within this process, it can not freeze the outside
// world: paths may be replaced by symlinks elsewhere, HTTP APIs may start returning different results, and so on.
type ImageReference interface {
    Transport() ImageTransport
    // StringWithinTransport returns a string representation of the reference, which MUST be such that
    // reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
    // NOTE: The returned string is not promised to be equal to the original input to ParseReference;
    // e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
    // WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix;
    // instead, see transports.ImageName().
    StringWithinTransport() string

    // DockerReference returns a Docker reference associated with this reference
    // (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
    // not e.g. after redirect or alias processing), or nil if unknown/not applicable.
    DockerReference() reference.Named

    // PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
    // This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
    // The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
    // (i.e. various references with exactly the same semantics should return the same configuration identity)
    // It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
    // not required/guaranteed that it will be a valid input to Transport().ParseReference().
    // Returns "" if configuration identities for these references are not supported.
    PolicyConfigurationIdentity() string

    // PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
    // for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
    // in order, terminating on first match, and an implicit "" is always checked at the end.
    // It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
    // and each following element to be a prefix of the element preceding it.
    PolicyConfigurationNamespaces() []string

    // NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
    // The caller must call .Close() on the returned Image.
    // NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
    // verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
    NewImage(ctx *SystemContext) (Image, error)
    // NewImageSource returns a types.ImageSource for this reference,
    // asking the backend to use a manifest from requestedManifestMIMETypes if possible.
    // nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
    // The caller must call .Close() on the returned ImageSource.
    NewImageSource(ctx *SystemContext, requestedManifestMIMETypes []string) (ImageSource, error)
    // NewImageDestination returns a types.ImageDestination for this reference.
    // The caller must call .Close() on the returned ImageDestination.
    NewImageDestination(ctx *SystemContext) (ImageDestination, error)

    // DeleteImage deletes the named image from the registry, if supported.
    DeleteImage(ctx *SystemContext) error
}

// BlobInfo collects known information about a blob (layer/config).
// In some situations, some fields may be unknown, in others they may be mandatory; documenting an “unknown” value here does not override that.
type BlobInfo struct {
    Digest digest.Digest // "" if unknown.
    Size   int64         // -1 if unknown
    URLs   []string
}

// ImageSource is a service, possibly remote (= slow), to download components of a single image.
// This is primarily useful for copying images around; for examining their properties, Image (below)
// is usually more useful.
// Each ImageSource should eventually be closed by calling Close().
//
// WARNING: Various methods which return an object identified by digest generally do not
// validate that the returned data actually matches that digest; this is the caller’s responsibility.
type ImageSource interface {
    // Reference returns the reference used to set up this source, _as specified by the user_
    // (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
    Reference() ImageReference
    // Close removes resources associated with an initialized ImageSource, if any.
    Close()
    // GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
    // It may use a remote (= slow) service.
    GetManifest() ([]byte, string, error)
    // GetTargetManifest returns an image's manifest given a digest. This is mainly used to retrieve a single image's manifest
    // out of a manifest list.
    GetTargetManifest(digest digest.Digest) ([]byte, string, error)
    // GetBlob returns a stream for the specified blob, and the blob’s size (or -1 if unknown).
    // The Digest field in BlobInfo is guaranteed to be provided; Size may be -1.
    GetBlob(BlobInfo) (io.ReadCloser, int64, error)
    // GetSignatures returns the image's signatures. It may use a remote (= slow) service.
    GetSignatures() ([][]byte, error)
}

// ImageDestination is a service, possibly remote (= slow), to store components of a single image.
//
// There is a specific required order for some of the calls:
// PutBlob on the various blobs, if any, MUST be called before PutManifest (manifest references blobs, which may be created or compressed only at push time)
// ReapplyBlob, if used, MUST only be called if HasBlob returned true for the same blob digest
// PutSignatures, if called, MUST be called after PutManifest (signatures reference manifest contents)
// Finally, Commit MUST be called if the caller wants the image, as formed by the components saved above, to persist.
//
// Each ImageDestination should eventually be closed by calling Close().
type ImageDestination interface {
    // Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
    // e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
    Reference() ImageReference
    // Close removes resources associated with an initialized ImageDestination, if any.
    Close()

    // SupportedManifestMIMETypes tells which manifest MIME types the destination supports.
    // If an empty slice or nil is returned, then any MIME type can be tried for upload.
    SupportedManifestMIMETypes() []string
    // SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
    // Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
    SupportsSignatures() error
    // ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
    ShouldCompressLayers() bool

    // AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
    // uploaded to the image destination, true otherwise.
    AcceptsForeignLayerURLs() bool

    // PutBlob writes contents of stream and returns data representing the result (with all data filled in).
    // inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
    // inputInfo.Size is the expected length of stream, if known.
    // WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
    // to any other readers for download using the supplied digest.
    // If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
    PutBlob(stream io.Reader, inputInfo BlobInfo) (BlobInfo, error)
    // HasBlob returns true iff the image destination already contains a blob with the matching digest which can be reapplied using ReapplyBlob.
    // Unlike PutBlob, the digest can not be empty. If HasBlob returns true, the size of the blob must also be returned.
    // A false result will often be accompanied by an ErrBlobNotFound error.
    HasBlob(info BlobInfo) (bool, int64, error)
    // ReapplyBlob informs the image destination that a blob for which HasBlob previously returned true would have been passed to PutBlob if it had returned false.
    // Like HasBlob and unlike PutBlob, the digest can not be empty. If the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree.
    ReapplyBlob(info BlobInfo) (BlobInfo, error)
    // FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
    PutManifest([]byte) error
    PutSignatures(signatures [][]byte) error
    // Commit marks the process of storing the image as successful and asks for the image to be persisted.
    // WARNING: This does not have any transactional semantics:
    // - Uploaded data MAY be visible to others before Commit() is called
    // - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
    Commit() error
}

// UnparsedImage is an Image-to-be; until it is verified and accepted, it only carries its identity and caches manifest and signature blobs.
// Thus, an UnparsedImage can be created from an ImageSource simply by fetching blobs without interpreting them,
// allowing cryptographic signature verification to happen first, before even fetching the manifest, or parsing anything else.
// This also makes the UnparsedImage→Image conversion an explicitly visible step.
// Each UnparsedImage should eventually be closed by calling Close().
type UnparsedImage interface {
    // Reference returns the reference used to set up this source, _as specified by the user_
    // (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
    Reference() ImageReference
    // Close removes resources associated with an initialized UnparsedImage, if any.
    Close()
    // Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
    Manifest() ([]byte, string, error)
    // Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
    Signatures() ([][]byte, error)
}

// Image is the primary API for inspecting properties of images.
// Each Image should eventually be closed by calling Close().
type Image interface {
    // Note that Reference may return nil in the return value of UpdatedImage!
    UnparsedImage
    // ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
    // Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
    ConfigInfo() BlobInfo
    // ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
    // The result is cached; it is OK to call this however often you need.
    ConfigBlob() ([]byte, error)
    // LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
    // The Digest field is guaranteed to be provided; Size may be -1.
    // WARNING: The list may contain duplicates, and they are semantically relevant.
    LayerInfos() []BlobInfo
    // Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
    Inspect() (*ImageInspectInfo, error)
    // UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
    // This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
    // (most importantly it forces us to download the full layers even if they are already present at the destination).
    UpdatedImageNeedsLayerDiffIDs(options ManifestUpdateOptions) bool
    // UpdatedImage returns a types.Image modified according to options.
    // Everything in options.InformationOnly should be provided, other fields should be set only if a modification is desired.
    // This does not change the state of the original Image object.
    UpdatedImage(options ManifestUpdateOptions) (Image, error)
    // IsMultiImage returns true if the image's manifest is a list of images, false otherwise.
    IsMultiImage() bool
    // Size returns an approximation of the amount of disk space which is consumed by the image in its current
    // location. If the size is not known, -1 will be returned.
    Size() (int64, error)
}

// ManifestUpdateOptions is a way to pass named optional arguments to Image.UpdatedImage.
type ManifestUpdateOptions struct {
    LayerInfos       []BlobInfo // Complete BlobInfos (size+digest+urls) which should replace the originals, in order (the root layer first, and then successive layered layers)
    ManifestMIMEType string
    // The values below are NOT requests to modify the image; they provide optional context which may or may not be used.
    InformationOnly ManifestUpdateInformation
}

// ManifestUpdateInformation is a component of ManifestUpdateOptions, named here
// only to make writing struct literals possible.
type ManifestUpdateInformation struct {
    Destination  ImageDestination // and yes, UpdatedImage may write to Destination (see the schema2 → schema1 conversion logic in image/docker_schema2.go)
    LayerInfos   []BlobInfo       // Complete BlobInfos (size+digest) which have been uploaded, in order (the root layer first, and then successive layered layers)
    LayerDiffIDs []digest.Digest  // Digest values for the _uncompressed_ contents of the blobs which have been uploaded, in the same order.
}

// ImageInspectInfo is a set of metadata describing Docker images, primarily their manifest and configuration.
// The Tag field is a legacy field which is here just for the Docker v2s1 manifest. It won't be supported
// for other manifest types.
type ImageInspectInfo struct {
    Tag           string
    Created       time.Time
    DockerVersion string
    Labels        map[string]string
    Architecture  string
    Os            string
    Layers        []string
}

// DockerAuthConfig contains authorization information for connecting to a registry.
type DockerAuthConfig struct {
    Username string
    Password string
}

// SystemContext allows parametrizing access to implicitly-accessed resources,
// like configuration files in /etc and users' login state in their home directory.
// Various components can share the same field only if their semantics is exactly
// the same; if in doubt, add a new field.
// It is always OK to pass nil instead of a SystemContext.
type SystemContext struct {
    // If not "", prefixed to any absolute paths used by default by the library (e.g. in /etc/).
    // Not used for any of the more specific path overrides available in this struct.
    // Not used for any paths specified by users in config files (even if the location of the config file _was_ affected by it).
    // NOTE: If this is set, environment-variable overrides of paths are ignored (to keep the semantics simple: to create an /etc replacement, just set RootForImplicitAbsolutePaths
    // and there is no need to worry about the environment.)
    // NOTE: This does NOT affect paths starting by $HOME.
    RootForImplicitAbsolutePaths string

    // === Global configuration overrides ===
    // If not "", overrides the system's default path for signature.Policy configuration.
    SignaturePolicyPath string
    // If not "", overrides the system's default path for registries.d (Docker signature storage configuration)
    RegistriesDirPath string

    // === docker.Transport overrides ===
    // If not "", a directory containing a CA certificate (ending with ".crt"),
    // a client certificate (ending with ".cert") and a client certificate key
    // (ending with ".key") used when talking to a Docker Registry.
    DockerCertPath string
    DockerInsecureSkipTLSVerify bool // Allow contacting docker registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
    // if nil, the library tries to parse ~/.docker/config.json to retrieve credentials
    DockerAuthConfig *DockerAuthConfig
    // if not "", a User-Agent header is added to each request when contacting a registry.
    DockerRegistryUserAgent string
    // if true, a V1 ping attempt isn't done to give users a better error. Default is false.
    // Note that this field is used mainly to integrate containers/image into projectatomic/docker
    // in order to not break any existing docker's integration tests.
    DockerDisableV1Ping bool
}

var (
    // ErrBlobNotFound can be returned by an ImageDestination's HasBlob() method
    ErrBlobNotFound = errors.New("no such blob present")
)
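
A short sketch of constructing a SystemContext with some of the overrides defined above; the paths are illustrative only, and nil remains a valid value everywhere a *SystemContext is accepted:

package main

import "github.com/containers/image/types"

func main() {
    // All fields below are defined in the SystemContext struct above;
    // the concrete paths and values here are illustrative only.
    ctx := &types.SystemContext{
        SignaturePolicyPath:         "/etc/containers/policy.json",
        DockerCertPath:              "/etc/docker/certs.d/registry.example.com",
        DockerInsecureSkipTLSVerify: false,
        DockerRegistryUserAgent:     "ocid-example",
    }
    _ = ctx // pass as the ctx argument of NewImage/NewImageSource/NewImageDestination
}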
18 vendor/github.com/containers/image/version/version.go generated vendored Normal file
@@ -0,0 +1,18 @@
package version

import "fmt"

const (
    // VersionMajor is for API-incompatible changes
    VersionMajor = 0
    // VersionMinor is for adding functionality in a backwards-compatible manner
    VersionMinor = 1
    // VersionPatch is for backwards-compatible bug fixes
    VersionPatch = 0

    // VersionDev indicates development branch. Releases will be empty string.
    VersionDev = "-dev"
)

// Version is the specification version that the package types support.
var Version = fmt.Sprintf("%d.%d.%d%s", VersionMajor, VersionMinor, VersionPatch, VersionDev)
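
With the constant values above, the Sprintf reproduces what Version evaluates to:

package main

import "fmt"

func main() {
    // Mirrors the expression in version.go with its current constant values.
    fmt.Printf("%d.%d.%d%s\n", 0, 1, 0, "-dev") // 0.1.0-dev
}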