Switch to github.com/golang/dep for vendoring

Signed-off-by: Mrunal Patel <mrunalp@gmail.com>
Mrunal Patel 2017-01-31 16:45:59 -08:00
parent d6ab91be27
commit 8e5b17cf13
15431 changed files with 3971413 additions and 8881 deletions

16 vendor/k8s.io/kubernetes/.gazelcfg.json generated vendored Normal file

@@ -0,0 +1,16 @@
{
"GoPrefix": "k8s.io/kubernetes",
"SrcDirs": [
"./pkg",
"./cmd",
"./third_party",
"./plugin",
"./test",
"./federation",
"./examples"
],
"SkippedPaths": [
"^_.*"
],
"AddSourcesRules": true
}
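
For orientation only: below is a minimal Go sketch of a struct that the `.gazelcfg.json` above could be unmarshalled into. The struct name and program are illustrative and not part of the vendored tree; the field names simply mirror the JSON keys shown above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// gazelConfig mirrors the keys of the .gazelcfg.json shown above.
// The struct name is illustrative; gazel's own types may differ.
type gazelConfig struct {
	GoPrefix        string   `json:"GoPrefix"`
	SrcDirs         []string `json:"SrcDirs"`
	SkippedPaths    []string `json:"SkippedPaths"`
	AddSourcesRules bool     `json:"AddSourcesRules"`
}

func main() {
	raw, err := os.ReadFile(".gazelcfg.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg gazelConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("prefix %s, %d source dirs, %d skipped patterns\n",
		cfg.GoPrefix, len(cfg.SrcDirs), len(cfg.SkippedPaths))
}
```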

213 vendor/k8s.io/kubernetes/.generated_docs generated vendored Normal file

@@ -0,0 +1,213 @@
.generated_docs
docs/admin/federation-apiserver.md
docs/admin/federation-controller-manager.md
docs/admin/kube-apiserver.md
docs/admin/kube-controller-manager.md
docs/admin/kube-proxy.md
docs/admin/kube-scheduler.md
docs/admin/kubelet.md
docs/man/man1/kube-apiserver.1
docs/man/man1/kube-controller-manager.1
docs/man/man1/kube-proxy.1
docs/man/man1/kube-scheduler.1
docs/man/man1/kubectl-annotate.1
docs/man/man1/kubectl-api-versions.1
docs/man/man1/kubectl-apply.1
docs/man/man1/kubectl-attach.1
docs/man/man1/kubectl-autoscale.1
docs/man/man1/kubectl-certificate-approve.1
docs/man/man1/kubectl-certificate-deny.1
docs/man/man1/kubectl-certificate.1
docs/man/man1/kubectl-cluster-info-dump.1
docs/man/man1/kubectl-cluster-info.1
docs/man/man1/kubectl-completion.1
docs/man/man1/kubectl-config-current-context.1
docs/man/man1/kubectl-config-delete-cluster.1
docs/man/man1/kubectl-config-delete-context.1
docs/man/man1/kubectl-config-get-clusters.1
docs/man/man1/kubectl-config-get-contexts.1
docs/man/man1/kubectl-config-set-cluster.1
docs/man/man1/kubectl-config-set-context.1
docs/man/man1/kubectl-config-set-credentials.1
docs/man/man1/kubectl-config-set.1
docs/man/man1/kubectl-config-unset.1
docs/man/man1/kubectl-config-use-context.1
docs/man/man1/kubectl-config-view.1
docs/man/man1/kubectl-config.1
docs/man/man1/kubectl-convert.1
docs/man/man1/kubectl-cordon.1
docs/man/man1/kubectl-cp.1
docs/man/man1/kubectl-create-clusterrolebinding.1
docs/man/man1/kubectl-create-configmap.1
docs/man/man1/kubectl-create-deployment.1
docs/man/man1/kubectl-create-namespace.1
docs/man/man1/kubectl-create-poddisruptionbudget.1
docs/man/man1/kubectl-create-quota.1
docs/man/man1/kubectl-create-rolebinding.1
docs/man/man1/kubectl-create-secret-docker-registry.1
docs/man/man1/kubectl-create-secret-generic.1
docs/man/man1/kubectl-create-secret-tls.1
docs/man/man1/kubectl-create-secret.1
docs/man/man1/kubectl-create-service-clusterip.1
docs/man/man1/kubectl-create-service-externalname.1
docs/man/man1/kubectl-create-service-loadbalancer.1
docs/man/man1/kubectl-create-service-nodeport.1
docs/man/man1/kubectl-create-service.1
docs/man/man1/kubectl-create-serviceaccount.1
docs/man/man1/kubectl-create.1
docs/man/man1/kubectl-delete.1
docs/man/man1/kubectl-describe.1
docs/man/man1/kubectl-drain.1
docs/man/man1/kubectl-edit.1
docs/man/man1/kubectl-exec.1
docs/man/man1/kubectl-explain.1
docs/man/man1/kubectl-expose.1
docs/man/man1/kubectl-get.1
docs/man/man1/kubectl-label.1
docs/man/man1/kubectl-logs.1
docs/man/man1/kubectl-options.1
docs/man/man1/kubectl-patch.1
docs/man/man1/kubectl-port-forward.1
docs/man/man1/kubectl-proxy.1
docs/man/man1/kubectl-replace.1
docs/man/man1/kubectl-rolling-update.1
docs/man/man1/kubectl-rollout-history.1
docs/man/man1/kubectl-rollout-pause.1
docs/man/man1/kubectl-rollout-resume.1
docs/man/man1/kubectl-rollout-status.1
docs/man/man1/kubectl-rollout-undo.1
docs/man/man1/kubectl-rollout.1
docs/man/man1/kubectl-run.1
docs/man/man1/kubectl-scale.1
docs/man/man1/kubectl-set-image.1
docs/man/man1/kubectl-set-resources.1
docs/man/man1/kubectl-set-selector.1
docs/man/man1/kubectl-set.1
docs/man/man1/kubectl-stop.1
docs/man/man1/kubectl-taint.1
docs/man/man1/kubectl-top-node.1
docs/man/man1/kubectl-top-pod.1
docs/man/man1/kubectl-top.1
docs/man/man1/kubectl-uncordon.1
docs/man/man1/kubectl-version.1
docs/man/man1/kubectl.1
docs/man/man1/kubelet.1
docs/user-guide/kubectl/kubectl.md
docs/user-guide/kubectl/kubectl_annotate.md
docs/user-guide/kubectl/kubectl_api-versions.md
docs/user-guide/kubectl/kubectl_apply.md
docs/user-guide/kubectl/kubectl_attach.md
docs/user-guide/kubectl/kubectl_autoscale.md
docs/user-guide/kubectl/kubectl_certificate.md
docs/user-guide/kubectl/kubectl_certificate_approve.md
docs/user-guide/kubectl/kubectl_certificate_deny.md
docs/user-guide/kubectl/kubectl_cluster-info.md
docs/user-guide/kubectl/kubectl_cluster-info_dump.md
docs/user-guide/kubectl/kubectl_completion.md
docs/user-guide/kubectl/kubectl_config.md
docs/user-guide/kubectl/kubectl_config_current-context.md
docs/user-guide/kubectl/kubectl_config_delete-cluster.md
docs/user-guide/kubectl/kubectl_config_delete-context.md
docs/user-guide/kubectl/kubectl_config_get-clusters.md
docs/user-guide/kubectl/kubectl_config_get-contexts.md
docs/user-guide/kubectl/kubectl_config_set-cluster.md
docs/user-guide/kubectl/kubectl_config_set-context.md
docs/user-guide/kubectl/kubectl_config_set-credentials.md
docs/user-guide/kubectl/kubectl_config_set.md
docs/user-guide/kubectl/kubectl_config_unset.md
docs/user-guide/kubectl/kubectl_config_use-context.md
docs/user-guide/kubectl/kubectl_config_view.md
docs/user-guide/kubectl/kubectl_convert.md
docs/user-guide/kubectl/kubectl_cordon.md
docs/user-guide/kubectl/kubectl_cp.md
docs/user-guide/kubectl/kubectl_create.md
docs/user-guide/kubectl/kubectl_create_clusterrolebinding.md
docs/user-guide/kubectl/kubectl_create_configmap.md
docs/user-guide/kubectl/kubectl_create_deployment.md
docs/user-guide/kubectl/kubectl_create_namespace.md
docs/user-guide/kubectl/kubectl_create_poddisruptionbudget.md
docs/user-guide/kubectl/kubectl_create_quota.md
docs/user-guide/kubectl/kubectl_create_rolebinding.md
docs/user-guide/kubectl/kubectl_create_secret.md
docs/user-guide/kubectl/kubectl_create_secret_docker-registry.md
docs/user-guide/kubectl/kubectl_create_secret_generic.md
docs/user-guide/kubectl/kubectl_create_secret_tls.md
docs/user-guide/kubectl/kubectl_create_service.md
docs/user-guide/kubectl/kubectl_create_service_clusterip.md
docs/user-guide/kubectl/kubectl_create_service_externalname.md
docs/user-guide/kubectl/kubectl_create_service_loadbalancer.md
docs/user-guide/kubectl/kubectl_create_service_nodeport.md
docs/user-guide/kubectl/kubectl_create_serviceaccount.md
docs/user-guide/kubectl/kubectl_delete.md
docs/user-guide/kubectl/kubectl_describe.md
docs/user-guide/kubectl/kubectl_drain.md
docs/user-guide/kubectl/kubectl_edit.md
docs/user-guide/kubectl/kubectl_exec.md
docs/user-guide/kubectl/kubectl_explain.md
docs/user-guide/kubectl/kubectl_expose.md
docs/user-guide/kubectl/kubectl_get.md
docs/user-guide/kubectl/kubectl_label.md
docs/user-guide/kubectl/kubectl_logs.md
docs/user-guide/kubectl/kubectl_options.md
docs/user-guide/kubectl/kubectl_patch.md
docs/user-guide/kubectl/kubectl_port-forward.md
docs/user-guide/kubectl/kubectl_proxy.md
docs/user-guide/kubectl/kubectl_replace.md
docs/user-guide/kubectl/kubectl_rolling-update.md
docs/user-guide/kubectl/kubectl_rollout.md
docs/user-guide/kubectl/kubectl_rollout_history.md
docs/user-guide/kubectl/kubectl_rollout_pause.md
docs/user-guide/kubectl/kubectl_rollout_resume.md
docs/user-guide/kubectl/kubectl_rollout_status.md
docs/user-guide/kubectl/kubectl_rollout_undo.md
docs/user-guide/kubectl/kubectl_run.md
docs/user-guide/kubectl/kubectl_scale.md
docs/user-guide/kubectl/kubectl_set.md
docs/user-guide/kubectl/kubectl_set_image.md
docs/user-guide/kubectl/kubectl_set_resources.md
docs/user-guide/kubectl/kubectl_set_selector.md
docs/user-guide/kubectl/kubectl_taint.md
docs/user-guide/kubectl/kubectl_top.md
docs/user-guide/kubectl/kubectl_top_node.md
docs/user-guide/kubectl/kubectl_top_pod.md
docs/user-guide/kubectl/kubectl_uncordon.md
docs/user-guide/kubectl/kubectl_version.md
docs/yaml/kubectl/kubectl.yaml
docs/yaml/kubectl/kubectl_annotate.yaml
docs/yaml/kubectl/kubectl_api-versions.yaml
docs/yaml/kubectl/kubectl_apply.yaml
docs/yaml/kubectl/kubectl_attach.yaml
docs/yaml/kubectl/kubectl_autoscale.yaml
docs/yaml/kubectl/kubectl_certificate.yaml
docs/yaml/kubectl/kubectl_cluster-info.yaml
docs/yaml/kubectl/kubectl_completion.yaml
docs/yaml/kubectl/kubectl_config.yaml
docs/yaml/kubectl/kubectl_convert.yaml
docs/yaml/kubectl/kubectl_cordon.yaml
docs/yaml/kubectl/kubectl_cp.yaml
docs/yaml/kubectl/kubectl_create.yaml
docs/yaml/kubectl/kubectl_delete.yaml
docs/yaml/kubectl/kubectl_describe.yaml
docs/yaml/kubectl/kubectl_drain.yaml
docs/yaml/kubectl/kubectl_edit.yaml
docs/yaml/kubectl/kubectl_exec.yaml
docs/yaml/kubectl/kubectl_explain.yaml
docs/yaml/kubectl/kubectl_expose.yaml
docs/yaml/kubectl/kubectl_get.yaml
docs/yaml/kubectl/kubectl_label.yaml
docs/yaml/kubectl/kubectl_logs.yaml
docs/yaml/kubectl/kubectl_options.yaml
docs/yaml/kubectl/kubectl_patch.yaml
docs/yaml/kubectl/kubectl_port-forward.yaml
docs/yaml/kubectl/kubectl_proxy.yaml
docs/yaml/kubectl/kubectl_replace.yaml
docs/yaml/kubectl/kubectl_rolling-update.yaml
docs/yaml/kubectl/kubectl_rollout.yaml
docs/yaml/kubectl/kubectl_run.yaml
docs/yaml/kubectl/kubectl_scale.yaml
docs/yaml/kubectl/kubectl_set.yaml
docs/yaml/kubectl/kubectl_stop.yaml
docs/yaml/kubectl/kubectl_taint.yaml
docs/yaml/kubectl/kubectl_top.yaml
docs/yaml/kubectl/kubectl_uncordon.yaml
docs/yaml/kubectl/kubectl_version.yaml

30 vendor/k8s.io/kubernetes/.generated_files generated vendored Normal file

@@ -0,0 +1,30 @@
# Files that should be ignored by tools which do not want to consider generated
# code.
#
# https://github.com/kubernetes/contrib/blob/master/mungegithub/mungers/size.go
#
# This file is a series of lines, each of the form:
# <type> <name>
#
# Type can be:
# path - an exact path to a single file
# file-name - an exact leaf filename, regardless of path
# path-prefix - a prefix match on the file path
# file-prefix - a prefix match of the leaf filename (no path)
# paths-from-repo - read a file from the repo and load file paths
#
file-prefix zz_generated.
file-name BUILD
file-name types.generated.go
file-name generated.pb.go
file-name generated.proto
file-name types_swagger_doc_generated.go
path-prefix Godeps/
path-prefix vendor/
path-prefix api/swagger-spec/
path-prefix pkg/generated/
paths-from-repo .generated_docs
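
The header comment above defines five rule types, all of which reduce to exact or prefix string matches. As a rough, hypothetical illustration (this is not the mungegithub munger's own code), a Go consumer of the format could look like the sketch below; `paths-from-repo` is skipped because it requires reading another file from the repository.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"path"
	"strings"
)

// rule is one "<type> <name>" line from .generated_files.
type rule struct{ kind, name string }

// parseRules reads the format described above, skipping blank lines
// and '#' comments.
func parseRules(r io.Reader) ([]rule, error) {
	var rules []rule
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if f := strings.Fields(line); len(f) == 2 {
			rules = append(rules, rule{kind: f[0], name: f[1]})
		}
	}
	return rules, sc.Err()
}

// isGenerated reports whether filePath matches any rule. The
// paths-from-repo type is ignored here because it needs repo access.
func isGenerated(rules []rule, filePath string) bool {
	base := path.Base(filePath)
	for _, r := range rules {
		switch r.kind {
		case "path":
			if filePath == r.name {
				return true
			}
		case "file-name":
			if base == r.name {
				return true
			}
		case "path-prefix":
			if strings.HasPrefix(filePath, r.name) {
				return true
			}
		case "file-prefix":
			if strings.HasPrefix(base, r.name) {
				return true
			}
		}
	}
	return false
}

func main() {
	f, err := os.Open(".generated_files")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	rules, err := parseRules(f)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// "file-prefix zz_generated." matches this path.
	fmt.Println(isGenerated(rules, "pkg/api/zz_generated.deepcopy.go"))
}
```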

10 vendor/k8s.io/kubernetes/.gitattributes generated vendored Normal file

@@ -0,0 +1,10 @@
hack/verify-flags/known-flags.txt merge=union
test/test_owners.csv merge=union
**/zz_generated.*.go -diff
**/types.generated.go -diff
**/generated.pb.go -diff
**/generated.proto -diff
**/types_swagger_doc_generated.go -diff
docs/api-reference/** -diff
federation/docs/api-reference/** -diff

47 vendor/k8s.io/kubernetes/.github/ISSUE_TEMPLATE.md generated vendored Normal file

@@ -0,0 +1,47 @@
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**Is this a request for help?** (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
**What keywords did you search in Kubernetes issues before filing this one?** (If you have found any duplicates, you should instead reply there.):
---
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Kubernetes version** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
**What you expected to happen**:
**How to reproduce it** (as minimally and precisely as possible):
**Anything else we need to know**:

19 vendor/k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md generated vendored Normal file

@@ -0,0 +1,19 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md and developer guide https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md
2. If you want *faster* PR reviews, read how: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/faster_reviews.md
3. Follow the instructions for writing a release note: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes
-->
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
<!-- Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access)
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`.
-->
```release-note
```

120 vendor/k8s.io/kubernetes/.gitignore generated vendored Normal file

@@ -0,0 +1,120 @@
# OSX leaves these everywhere on SMB shares
._*
# OSX trash
.DS_Store
# Eclipse files
.classpath
.project
.settings/**
# Files generated by JetBrains IDEs, e.g. IntelliJ IDEA
.idea/
*.iml
# Vscode files
.vscode
# This is where the result of the go build goes
/output*/
/_output*/
/_output
# Emacs save files
*~
\#*\#
.\#*
# Vim-related files
[._]*.s[a-w][a-z]
[._]s[a-w][a-z]
*.un~
Session.vim
.netrwhist
# Go test binaries
*.test
/hack/.test-cmd-auth
# JUnit test output from ginkgo e2e tests
/junit*.xml
# Mercurial files
**/.hg
**/.hg*
# Vagrant
.vagrant
network_closure.sh
# Local cluster env variables
/cluster/env.sh
# Compiled binaries in third_party
/third_party/pkg
# Also ignore etcd installed by hack/install-etcd.sh
/third_party/etcd*
/default.etcd
# User cluster configs
.kubeconfig
.tags*
# Version file for dockerized build
.dockerized-kube-version-defs
# Web UI
/www/master/node_modules/
/www/master/npm-debug.log
/www/master/shared/config/development.json
# Karma output
/www/test_out
# precommit temporary directories created by ./hack/verify-generated-docs.sh and ./hack/lib/util.sh
/_tmp/
/doc_tmp/
# Test artifacts produced by Jenkins jobs
/_artifacts/
# Go dependencies installed on Jenkins
/_gopath/
# Config directories created by gcloud and gsutil on Jenkins
/.config/gcloud*/
/.gsutil/
# CoreOS stuff
/cluster/libvirt-coreos/coreos_*.img
# Juju Stuff
/cluster/juju/charms/*
/cluster/juju/bundles/local.yaml
# Downloaded Kubernetes binary release
/kubernetes/
# direnv .envrc files
.envrc
# Downloaded kubernetes binary release tar ball
kubernetes.tar.gz
# generated files in any directory
# TODO(thockin): uncomment this when we stop committing the generated files.
#zz_generated.*
# make-related metadata
/.make/
# Just in time generated data in the source, should never be committed
/test/e2e/generated/bindata.go
# This file used by some vendor repos (e.g. github.com/go-openapi/...) to store secret variables and should not be ignored
!\.drone\.sec
/bazel-*
*.pyc

63 vendor/k8s.io/kubernetes/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,63 @@
package(default_visibility = ["//visibility:public"])
licenses(["notice"])
load("@io_bazel_rules_go//go:def.bzl", "go_prefix")
load("@io_kubernetes_build//defs:build.bzl", "gcs_upload")
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
go_prefix("k8s.io/kubernetes")
gcs_upload(
name = "ci-artifacts",
data = [
"//build/debs",
],
)
filegroup(
name = "package-srcs",
srcs = glob(
["**"],
exclude = [
"bazel-*/**",
"_*/**",
".config/**",
".git/**",
".gsutil/**",
".make/**",
],
),
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//api:all-srcs",
"//build:all-srcs",
"//cluster:all-srcs",
"//cmd:all-srcs",
"//docs:all-srcs",
"//examples:all-srcs",
"//federation:all-srcs",
"//hack:all-srcs",
"//pkg:all-srcs",
"//plugin:all-srcs",
"//test:all-srcs",
"//third_party:all-srcs",
"//vendor:all-srcs",
],
tags = ["automanaged"],
)
pkg_tar(
name = "kubernetes-src",
extension = "tar.gz",
files = [
":all-srcs",
],
package_dir = "kubernetes",
strip_prefix = "//",
)

3756 vendor/k8s.io/kubernetes/CHANGELOG.md generated vendored Normal file

File diff suppressed because it is too large

132 vendor/k8s.io/kubernetes/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,132 @@
# Contributing guidelines
Want to hack on Kubernetes? Yay!
## Developer Guide
We have a [Developer's Guide](docs/devel/README.md) that outlines everything
you need to know from setting up your dev environment to how to get faster Pull
Request reviews. If you find something undocumented or incorrect along the way,
please feel free to send a Pull Request.
## Filing issues
If you have a question about Kubernetes or have a problem using it, please
start with the [troubleshooting guide](http://kubernetes.io/docs/troubleshooting/). If that
doesn't answer your questions, or if you think you found a bug, please [file an
issue](https://github.com/kubernetes/kubernetes/issues/new).
## How to become a contributor and submit your own code
### Contributor License Agreements
We'd love to accept your patches! Before we can take them, we have to jump a
couple of legal hurdles.
The Cloud Native Computing Foundation (CNCF) CLA [must be signed](https://github.com/kubernetes/community/blob/master/CLA.md) by all contributors.
Please fill out either the individual or corporate Contributor License
Agreement (CLA).
Once you are CLA'ed, we'll be able to accept your pull requests. For any issues that you face during this process,
please add a comment [here](https://github.com/kubernetes/kubernetes/issues/27796) explaining the issue and we will help get it sorted out.
***NOTE***: Only original source code from you and other people that have
signed the CLA can be accepted into the repository. This policy does not
apply to [third_party](third_party/) and [vendor](vendor/).
### Finding Things That Need Help
If you're new to the project and want to help, but don't know where to start,
we have a semi-curated list of issues that should not need deep knowledge
of the system. [Have a look and see if anything sounds
interesting](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted).
Alternatively, read some of the many docs on the system, for example [the
architecture](docs/design/architecture.md), and pick a component that seems
interesting. Start with `main()` (look in the [cmd](cmd/) directory) and read
until you find something you want to fix. The best way to learn is to hack!
There's always code that can be clarified and variables or functions that can
be renamed or commented.
### Contributing A Patch
If you're working on an existing issue, such as one of the `help-wanted` ones
above, simply respond to the issue and express interest in working on it. This
helps other people know that the issue is active, and hopefully prevents
duplicated efforts.
If you want to work on a new idea of relatively small scope:
1. Submit an issue describing your proposed change to the repo in question.
1. The repo owners will respond to your issue promptly.
1. If your proposed change is accepted, and you haven't already done so, sign a
Contributor License Agreement (see details above).
1. Fork the repo, develop, and test your changes.
1. Submit a pull request.
If you want to work on a bigger idea, we STRONGLY recommend that you start with
some bugs or smaller features. We have a [feature development
process](https://github.com/kubernetes/features/blob/master/README.md), but
navigating the Kubernetes system as a newcomer can be very challenging.
### Downloading the project
There are a few ways you can download this code. You must download it into a
GOPATH - see [golang.org](https://golang.org/doc/code.html) for more info on
how Go works with code. This project expects to be found at the Go package
`k8s.io/kubernetes`.
1. You can `git clone` the repo. If you do this, you MUST make sure it is in
the GOPATH as `k8s.io/kubernetes` or it may not build. E.g.: `git clone
https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes`
1. You can use `go get` to fetch the repo. This will automatically put it into
your GOPATH in the right place. E.g.: `go get -d k8s.io/kubernetes`
1. You can download an archive of the source. If you do this, you MUST make
sure it is unpacked into the GOPATH as `k8s.io/kubernetes` or it may not
build. See [rel.k8s.io](http://rel.k8s.io) for a list of available releases.
### Building the project
There are a few things you need to build and test this project:
1. `make` - the human interface to the Kubernetes build is `make`, so you must
have this tool installed on your machine. We try not to use too many crazy
features of `Makefile`s and other tools, so most commonly available versions
should work.
1. `docker` - some parts of the build/test system depend on `docker`. You
need a relatively recent version of Docker installed, and available to you.
1. `go` - Kubernetes is written in Go (aka golang), so you need a relatively
recent version of the [Go toolchain](https://golang.org/dl/) installed.
While Linux is the primary platform for Kubernetes, it should compile on a
Mac, too. Windows is in progress.
To build Kubernetes, simply type `make`. This should figure out what it needs
to do and not need any input from you. If you want to just build a subset of
code, you can pass the `WHAT` variable to `make`: e.g. `make
WHAT="cmd/kubelet"`.
To run basic tests, simply type `make test`. This will run all of the unit
tests in the project. If you want to just test a subset of the project, you
can pass the `WHAT` variable to `make`: e.g. `make test WHAT=pkg/kubelet`.
### Protocols for Collaborative Development
Please read [this doc](docs/devel/collab.md) for information on how we're
running development for the project. Also take a look at the [development
guide](docs/devel/development.md) for information on how to set up your
environment, run tests, manage dependencies, etc.
### Adding dependencies
If your patch depends on new packages, add that package with
[`godep`](https://github.com/tools/godep). Follow the [instructions to add a
dependency](docs/devel/development.md#godep-and-dependency-management).
### Community Expectations
Please see our [expectations](docs/devel/community-expectations.md) for members
of the Kubernetes community.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/CONTRIBUTING.md?pixel)]()

2691 vendor/k8s.io/kubernetes/Godeps/Godeps.json generated vendored Normal file

File diff suppressed because it is too large

5 vendor/k8s.io/kubernetes/Godeps/Readme generated vendored Normal file

@@ -0,0 +1,5 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.
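
The Godeps/Godeps.json diff above is suppressed, but for context: godep's manifest is a small JSON file mapping import paths to pinned revisions. The sketch below uses godep's commonly documented field names; since the actual vendored file is not reproduced here, treat the exact field set as an assumption.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// godeps is a best-effort sketch of the Godeps.json layout, not copied
// from the suppressed diff above.
type godeps struct {
	ImportPath string       `json:"ImportPath"`
	GoVersion  string       `json:"GoVersion"`
	Packages   []string     `json:"Packages,omitempty"`
	Deps       []dependency `json:"Deps"`
}

// dependency pins one imported package to a VCS revision.
type dependency struct {
	ImportPath string `json:"ImportPath"`
	Comment    string `json:"Comment,omitempty"` // usually a tag, e.g. v1.2.3
	Rev        string `json:"Rev"`               // commit hash
}

func main() {
	raw, err := os.ReadFile("Godeps/Godeps.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var g godeps
	if err := json.Unmarshal(raw, &g); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s pins %d dependencies\n", g.ImportPath, len(g.Deps))
}
```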

502 vendor/k8s.io/kubernetes/Makefile generated vendored Normal file

@@ -0,0 +1,502 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DBG_MAKEFILE ?=
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** starting Makefile for goal(s) "$(MAKECMDGOALS)")
$(warning ***** $(shell date))
else
# If we're not debugging the Makefile, don't echo recipes.
MAKEFLAGS += -s
endif
# Old-skool build tools.
#
# Commonly used targets (see each target for more information):
# all: Build code.
# test: Run tests.
# clean: Clean up.
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
# We don't need make's built-in rules.
MAKEFLAGS += --no-builtin-rules
.SUFFIXES:
# Constants used throughout.
.EXPORT_ALL_VARIABLES:
OUT_DIR ?= _output
BIN_DIR := $(OUT_DIR)/bin
PRJ_SRC_PATH := k8s.io/kubernetes
GENERATED_FILE_PREFIX := zz_generated.
# Metadata for driving the build lives here.
META_DIR := .make
# Our build flags.
# TODO(thockin): it would be nice to just use the native flags. Can we EOL
# these "wrapper" flags?
KUBE_GOFLAGS := $(GOFLAGS)
KUBE_GOLDFLAGS := $(GOLDFLAGS)
KUBE_GOGCFLAGS = $(GOGCFLAGS)
# This controls the verbosity of the build. Higher numbers mean more output.
KUBE_VERBOSE ?= 1
define ALL_HELP_INFO
# Build code.
#
# Args:
# WHAT: Directory names to build. If any of these directories has a 'main'
# package, the build will produce executable files under $(OUT_DIR)/go/bin.
# If not specified, "everything" will be built.
# GOFLAGS: Extra flags to pass to 'go' when building.
# GOLDFLAGS: Extra linking flags passed to 'go' when building.
# GOGCFLAGS: Additional go compile flags passed to 'go' when building.
#
# Example:
# make
# make all
# make all WHAT=cmd/kubelet GOFLAGS=-v
# make all GOGCFLAGS="-N -l"
# Note: Use the -N -l options to disable compiler optimizations and inlining.
# Using these build options allows you to subsequently use source
# debugging tools like delve.
endef
.PHONY: all
ifeq ($(PRINT_HELP),y)
all:
@echo "$$ALL_HELP_INFO"
else
all: generated_files
hack/make-rules/build.sh $(WHAT)
endif
define GINKGO_HELP_INFO
# Build ginkgo
#
# Example:
# make ginkgo
endef
.PHONY: ginkgo
ifeq ($(PRINT_HELP),y)
ginkgo:
@echo "$$GINKGO_HELP_INFO"
else
ginkgo:
hack/make-rules/build.sh vendor/github.com/onsi/ginkgo/ginkgo
endif
define VERIFY_HELP_INFO
# Runs all the presubmission verifications.
#
# Args:
# BRANCH: Branch to be passed to verify-godeps.sh script.
#
# Example:
# make verify
# make verify BRANCH=branch_x
endef
.PHONY: verify
ifeq ($(PRINT_HELP),y)
verify:
@echo "$$VERIFY_HELP_INFO"
else
verify: verify_generated_files
KUBE_VERIFY_GIT_BRANCH=$(BRANCH) hack/make-rules/verify.sh -v
hack/make-rules/vet.sh
endif
define UPDATE_HELP_INFO
# Runs all the generated updates.
#
# Example:
# make update
endef
.PHONY: update
ifeq ($(PRINT_HELP),y)
update:
@echo "$$UPDATE_HELP_INFO"
else
update:
hack/update-all.sh
endif
define CHECK_TEST_HELP_INFO
# Build and run tests.
#
# Args:
# WHAT: Directory names to test. All *_test.go files under these
# directories will be run. If not specified, "everything" will be tested.
# TESTS: Same as WHAT.
# GOFLAGS: Extra flags to pass to 'go' when building.
# GOLDFLAGS: Extra linking flags to pass to 'go' when building.
# GOGCFLAGS: Additional go compile flags passed to 'go' when building.
#
# Example:
# make check
# make test
# make check WHAT=pkg/kubelet GOFLAGS=-v
endef
.PHONY: check test
ifeq ($(PRINT_HELP),y)
check test:
@echo "$$CHECK_TEST_HELP_INFO"
else
check test: generated_files
hack/make-rules/test.sh $(WHAT) $(TESTS)
endif
define TEST_IT_HELP_INFO
# Build and run integration tests.
#
# Args:
# WHAT: Directory names to test. All *_test.go files under these
# directories will be run. If not specified, "everything" will be tested.
#
# Example:
# make test-integration
endef
.PHONY: test-integration
ifeq ($(PRINT_HELP),y)
test-integration:
@echo "$$TEST_IT_HELP_INFO"
else
test-integration: generated_files
hack/make-rules/test-integration.sh $(WHAT)
endif
define TEST_E2E_HELP_INFO
# Build and run end-to-end tests.
#
# Example:
# make test-e2e
endef
.PHONY: test-e2e
ifeq ($(PRINT_HELP),y)
test-e2e:
@echo "$$TEST_E2E_HELP_INFO"
else
test-e2e: ginkgo generated_files
go run hack/e2e.go -v --build --up --test --down
endif
define TEST_E2E_NODE_HELP_INFO
# Build and run node end-to-end tests.
#
# Args:
# FOCUS: Regexp that matches the tests to be run. Defaults to "".
# SKIP: Regexp that matches the tests that needs to be skipped. Defaults
# to "".
# RUN_UNTIL_FAILURE: If true, pass --untilItFails to ginkgo so tests are run
# repeatedly until they fail. Defaults to false.
# REMOTE: If true, run the tests on a remote host instance on GCE. Defaults
# to false.
# IMAGES: For REMOTE=true only. Comma delimited list of images for creating
# remote hosts to run tests against. Defaults to a recent image.
# LIST_IMAGES: If true, don't run tests. Just output the list of available
# images for testing. Defaults to false.
# HOSTS: For REMOTE=true only. Comma delimited list of running gce hosts to
# run tests against. Defaults to "".
# DELETE_INSTANCES: For REMOTE=true only. Delete any instances created as
# part of this test run. Defaults to false.
# ARTIFACTS: For REMOTE=true only. Local directory to scp test artifacts into
# from the remote hosts. Defaults to "/tmp/_artifacts".
# REPORT: For REMOTE=false only. Local directory to write junit xml results
# to. Defaults to "/tmp/".
# CLEANUP: For REMOTE=true only. If false, do not stop processes or delete
# test files on remote hosts. Defaults to true.
# IMAGE_PROJECT: For REMOTE=true only. Project containing images provided to
# IMAGES. Defaults to "kubernetes-node-e2e-images".
# INSTANCE_PREFIX: For REMOTE=true only. Instances created from images will
# have the name "${INSTANCE_PREFIX}-${IMAGE_NAME}". Defaults to "test".
# INSTANCE_METADATA: For REMOTE=true and running on GCE only.
# GUBERNATOR: For REMOTE=true only. Produce link to Gubernator to view logs.
# Defaults to false.
# PARALLELISM: The number of ginkgo nodes to run. Defaults to 8.
#
# Example:
# make test-e2e-node FOCUS=Kubelet SKIP=container
# make test-e2e-node REMOTE=true DELETE_INSTANCES=true
# make test-e2e-node TEST_ARGS="--experimental-cgroups-per-qos=true"
# Build and run tests.
endef
.PHONY: test-e2e-node
ifeq ($(PRINT_HELP),y)
test-e2e-node:
@echo "$$TEST_E2E_NODE_HELP_INFO"
else
test-e2e-node: ginkgo generated_files
hack/make-rules/test-e2e-node.sh
endif
define TEST_CMD_HELP_INFO
# Build and run cmdline tests.
#
# Example:
# make test-cmd
endef
.PHONY: test-cmd
ifeq ($(PRINT_HELP),y)
test-cmd:
@echo "$$TEST_CMD_HELP_INFO"
else
test-cmd: generated_files
hack/make-rules/test-kubeadm-cmd.sh
hack/make-rules/test-cmd.sh
hack/make-rules/test-federation-cmd.sh
endif
define CLEAN_HELP_INFO
# Remove all build artifacts.
#
# Example:
# make clean
#
# TODO(thockin): call clean_generated when we stop committing generated code.
endef
.PHONY: clean
ifeq ($(PRINT_HELP),y)
clean:
@echo "$$CLEAN_HELP_INFO"
else
clean: clean_meta
build/make-clean.sh
rm -rf $(OUT_DIR)
rm -rf Godeps/_workspace # Just until we are sure it is gone
endif
define CLEAN_META_HELP_INFO
# Remove make-related metadata files.
#
# Example:
# make clean_meta
endef
.PHONY: clean_meta
ifeq ($(PRINT_HELP),y)
clean_meta:
@echo "$$CLEAN_META_HELP_INFO"
else
clean_meta:
rm -rf $(META_DIR)
endif
define CLEAN_GENERATED_HELP_INFO
# Remove all auto-generated artifacts. Generated artifacts in staging folder should not be removed as they are not
# generated using generated_files.
#
# Example:
# make clean_generated
endef
.PHONY: clean_generated
ifeq ($(PRINT_HELP),y)
clean_generated:
@echo "$$CLEAN_GENERATED_HELP_INFO"
else
clean_generated:
find . -type f -name $(GENERATED_FILE_PREFIX)\* | grep -v "[.]/staging/.*" | xargs rm -f
endif
define VET_HELP_INFO
# Run 'go vet'.
#
# Args:
# WHAT: Directory names to vet. All *.go files under these
# directories will be vetted. If not specified, "everything" will be
# vetted.
#
# Example:
# make vet
# make vet WHAT=pkg/kubelet
endef
.PHONY: vet
ifeq ($(PRINT_HELP),y)
vet:
@echo "$$VET_HELP_INFO"
else
vet:
hack/make-rules/vet.sh $(WHAT)
endif
define RELEASE_HELP_INFO
# Build a release
#
# Example:
# make release
endef
.PHONY: release
ifeq ($(PRINT_HELP),y)
release:
@echo "$$RELEASE_HELP_INFO"
else
release:
build/release.sh
endif
define RELEASE_SKIP_TESTS_HELP_INFO
# Build a release, but skip tests
#
# Example:
# make release-skip-tests
endef
.PHONY: release-skip-tests quick-release
ifeq ($(PRINT_HELP),y)
release-skip-tests quick-release:
@echo "$$RELEASE_SKIP_TESTS_HELP_INFO"
else
release-skip-tests quick-release:
KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true build/release.sh
endif
define CROSS_HELP_INFO
# Cross-compile for all platforms
#
# Example:
# make cross
endef
.PHONY: cross
ifeq ($(PRINT_HELP),y)
cross:
@echo "$$CROSS_HELP_INFO"
else
cross:
hack/make-rules/cross.sh
endif
define CMD_HELP_INFO
# Add rules for all directories in cmd/
#
# Example:
# make kubectl kube-proxy
endef
#TODO: make EXCLUDE_TARGET auto-generated when there are other files in cmd/
#TODO: should we exclude the target "libs" but include "cmd/libs/go2idl/*"?
EXCLUDE_TARGET=OWNERS
.PHONY: $(filter-out %$(EXCLUDE_TARGET),$(notdir $(abspath $(wildcard cmd/*/))))
ifeq ($(PRINT_HELP),y)
$(filter-out %$(EXCLUDE_TARGET),$(notdir $(abspath $(wildcard cmd/*/)))):
@echo "$$CMD_HELP_INFO"
else
$(filter-out %$(EXCLUDE_TARGET),$(notdir $(abspath $(wildcard cmd/*/)))): generated_files
hack/make-rules/build.sh cmd/$@
endif
define PLUGIN_CMD_HELP_INFO
# Add rules for all directories in plugin/cmd/
#
# Example:
# make kube-scheduler
endef
.PHONY: $(notdir $(abspath $(wildcard plugin/cmd/*/)))
ifeq ($(PRINT_HELP),y)
$(notdir $(abspath $(wildcard plugin/cmd/*/))):
@echo "$$PLUGIN_CMD_HELP_INFO"
else
$(notdir $(abspath $(wildcard plugin/cmd/*/))): generated_files
hack/make-rules/build.sh plugin/cmd/$@
endif
define FED_CMD_HELP_INFO
# Add rules for all directories in federation/cmd/
#
# Example:
# make federation-apiserver federation-controller-manager
endef
.PHONY: $(notdir $(abspath $(wildcard federation/cmd/*/)))
ifeq ($(PRINT_HELP),y)
$(notdir $(abspath $(wildcard federation/cmd/*/))):
@echo "$$FED_CMD_HELP_INFO"
else
$(notdir $(abspath $(wildcard federation/cmd/*/))): generated_files
hack/make-rules/build.sh federation/cmd/$@
endif
define GENERATED_FILES_HELP_INFO
# Produce auto-generated files needed for the build.
#
# Example:
# make generated_files
endef
.PHONY: generated_files
ifeq ($(PRINT_HELP),y)
generated_files:
@echo "$$GENERATED_FILES_HELP_INFO"
else
generated_files:
$(MAKE) -f Makefile.generated_files $@ CALLED_FROM_MAIN_MAKEFILE=1
endif
define VERIFY_GENERATED_FILES_HELP_INFO
# Verify auto-generated files needed for the build.
#
# Example:
# make verify_generated_files
endef
.PHONY: verify_generated_files
ifeq ($(PRINT_HELP),y)
verify_generated_files:
@echo "$$VERIFY_GENERATED_FILES_HELP_INFO"
else
verify_generated_files:
$(MAKE) -f Makefile.generated_files $@ CALLED_FROM_MAIN_MAKEFILE=1
endif
define HELP_INFO
# Print make targets and help info
#
# Example:
# make help
endef
.PHONY: help
ifeq ($(PRINT_HELP),y)
help:
@echo "$$HELP_INFO"
else
help:
hack/make-rules/make-help.sh
endif
# Non-dockerized bazel rules.
.PHONY: bazel-build bazel-test
ifeq ($(PRINT_HELP),y)
define BAZEL_BUILD_HELP_INFO
# Build with bazel
#
# Example:
# make bazel-build
endef
bazel-build:
@echo "$$BAZEL_BUILD_HELP_INFO"
else
bazel-build:
bazel build //cmd/... //pkg/... //federation/... //plugin/... //build/... //examples/... //test/... //third_party/...
endif
ifeq ($(PRINT_HELP),y)
define BAZEL_TEST_HELP_INFO
# Test with bazel
#
# Example:
# make bazel-test
endef
bazel-test:
@echo "$$BAZEL_TEST_HELP_INFO"
else
bazel-test:
bazel test --test_output=errors //cmd/... //pkg/... //federation/... //plugin/... //build/... //third_party/... //hack/...
endif
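
The Makefile above hands code generation off to Makefile.generated_files (the next file), whose deepcopy/defaulter/conversion/openapi generators discover their inputs by grepping Go sources for `// +k8s:` comment-tags in column 0. Below is a minimal, hypothetical package showing where such tags sit; the package path, type, and conversion target directory are made up for illustration, and the tag values follow the conventions described in the next file.

```go
// Package example is a hypothetical API package illustrating the
// comment-tags that Makefile.generated_files greps for. Package-level
// tags conventionally live in a doc.go, starting in column 0.

// +k8s:deepcopy-gen=generate
// +k8s:defaulter-gen=TypeMeta
// +k8s:openapi-gen=true
// +k8s:conversion-gen=k8s.io/kubernetes/pkg/apis/example
package example

// Widget is an illustrative type; with the tags above, the generators
// driven by `make generated_files` would emit zz_generated.deepcopy.go,
// zz_generated.defaults.go, etc. alongside this file.
type Widget struct {
	// TypeMeta is named here only because the defaulter-gen package tag
	// above uses FIELDNAME semantics (any object with a field of this
	// name is a candidate for defaulting), per the comments in the
	// Makefile that follows.
	TypeMeta string

	Name     string
	Replicas int
}
```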

754 vendor/k8s.io/kubernetes/Makefile.generated_files generated vendored Normal file

@@ -0,0 +1,754 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Don't allow users to call this directly. There are too many variables this
# assumes to inherit from the main Makefile. This is not a user-facing file.
ifeq ($(CALLED_FROM_MAIN_MAKEFILE),)
$(error Please use the main Makefile, e.g. `make generated_files`)
endif
# Don't allow an implicit 'all' rule. This is not a user-facing file.
ifeq ($(MAKECMDGOALS),)
$(error This Makefile requires an explicit rule to be specified)
endif
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** starting Makefile.generated_files for goal(s) "$(MAKECMDGOALS)")
$(warning ***** $(shell date))
endif
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
# This rule collects all the generated file sets into a single rule. Other
# rules should depend on this to ensure generated files are rebuilt.
.PHONY: generated_files
generated_files: gen_deepcopy gen_defaulter gen_conversion gen_openapi
.PHONY: verify_generated_files
verify_generated_files: verify_gen_deepcopy \
verify_gen_defaulter \
verify_gen_conversion \
verify_gen_openapi
# Code-generation logic.
#
# This stuff can be pretty tricky, and there's probably some corner cases that
# we don't handle well. That said, here's a straightforward test to prove that
# the most common cases work. Sadly, it is manual.
#
# make clean
# find . -name .make\* | xargs rm -f
# find . -name zz_generated\* | xargs rm -f
# # verify `find . -name zz_generated.deepcopy.go | wc -l` is 0
# # verify `find . -name .make | wc -l` is 0
#
# make nonexistent
# # expect "No rule to make target"
# # verify `find .make/ -type f | wc -l` has many files
#
# make gen_deepcopy
# # expect deepcopy-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.deepcopy.go | wc -l` has files
# make gen_deepcopy
# # expect nothing to be rebuilt, finish in O(seconds)
# touch pkg/api/types.go
# make gen_deepcopy
# # expect one file to be regenerated
# make gen_deepcopy
# # expect nothing to be rebuilt, finish in O(seconds)
# touch cmd/libs/go2idl/deepcopy-gen/main.go
# make gen_deepcopy
# # expect deepcopy-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.deepcopy.go | wc -l` has files
# make gen_deepcopy
# # expect nothing to be rebuilt, finish in O(seconds)
#
# make gen_conversion
# # expect conversion-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.conversion.go | wc -l` has files
# make gen_conversion
# # expect nothing to be rebuilt, finish in O(seconds)
# touch pkg/api/types.go
# make gen_conversion
# # expect one file to be regenerated
# make gen_conversion
# # expect nothing to be rebuilt, finish in O(seconds)
# touch cmd/libs/go2idl/conversion-gen/main.go
# make gen_conversion
# # expect conversion-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.conversion.go | wc -l` has files
# make gen_conversion
# # expect nothing to be rebuilt, finish in O(seconds)
#
# make all
# # expect it to build
#
# make test
# # expect it to pass
#
# make clean
# # verify `find . -name zz_generated.deepcopy.go | wc -l` is 0
# # verify `find . -name .make | wc -l` is 0
#
# make all WHAT=cmd/kube-proxy
# # expect it to build
#
# make clean
# make test WHAT=cmd/kube-proxy
# # expect it to pass
# This variable holds a list of every directory that contains Go files in this
# project. Other rules and variables can use this as a starting point to
# reduce filesystem accesses.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all *.go dirs)
endif
ALL_GO_DIRS := $(shell \
hack/make-rules/helpers/cache_go_dirs.sh $(META_DIR)/all_go_dirs.mk \
)
# The name of the metadata file which lists *.go files in each pkg.
GOFILES_META := gofiles.mk
# Establish a dependency between the deps file and the dir. Whenever a dir
# changes (files added or removed) the deps file will be considered stale.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# This is looser than we really need (e.g. we don't really care about non *.go
# files or even *_test.go files), but this is much easier to represent.
#
# Because we 'sinclude' the deps file, it is considered for rebuilding, as part
# of make's normal evaluation. If it gets rebuilt, make will restart.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
$(foreach dir, $(ALL_GO_DIRS), $(eval \
$(META_DIR)/$(dir)/$(GOFILES_META): $(dir) \
))
# How to rebuild a deps file. When make determines that the deps file is stale
# (see above), it executes this rule, and then re-loads the deps file.
#
# This is looser than we really need (e.g. we don't really care about test
# files), but this is MUCH faster than calling `go list`.
#
# We regenerate the output file in order to satisfy make's "newer than" rules,
# but we only need to rebuild targets if the contents actually changed. That
# is what the .stamp file represents.
$(foreach dir, $(ALL_GO_DIRS), \
$(META_DIR)/$(dir)/$(GOFILES_META)):
FILES=$$(ls $</*.go | grep --color=never -v $(GENERATED_FILE_PREFIX)); \
mkdir -p $(@D); \
echo "gofiles__$< := $$(echo $${FILES})" >$@.tmp; \
cmp -s $@.tmp $@ || touch $@.stamp; \
mv $@.tmp $@
# This is required to fill in the DAG, since some cases (e.g. 'make clean all')
# will reference the .stamp file when it doesn't exist. We don't need to
# rebuild it in that case, just keep make happy.
$(foreach dir, $(ALL_GO_DIRS), \
$(META_DIR)/$(dir)/$(GOFILES_META).stamp):
# Include any deps files as additional Makefile rules. This triggers make to
# consider the deps files for rebuild, which makes the whole
# dependency-management logic work. 'sinclude' is "silent include" which does
# not fail if the file does not exist.
$(foreach dir, $(ALL_GO_DIRS), $(eval \
sinclude $(META_DIR)/$(dir)/$(GOFILES_META) \
))
# Generate a list of all files that have a `+k8s:` comment-tag. This will be
# used to derive lists of files/dirs for generation tools.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s: tags)
endif
ALL_K8S_TAG_FILES := $(shell \
find $(ALL_GO_DIRS) -maxdepth 1 -type f -name \*.go \
| xargs grep --color=never -l '^// *+k8s:' \
)
#
# Deep-copy generation
#
# Any package that wants deep-copy functions generated must include a
# comment-tag in column 0 of one file of the form:
# // +k8s:deepcopy-gen=<VALUE>
#
# The <VALUE> may be one of:
# generate: generate deep-copy functions into the package
# register: generate deep-copy functions and register them with a
# scheme
# The result file, in each pkg, of deep-copy generation.
DEEPCOPY_BASENAME := $(GENERATED_FILE_PREFIX)deepcopy
DEEPCOPY_FILENAME := $(DEEPCOPY_BASENAME).go
# The tool used to generate deep copies.
DEEPCOPY_GEN := $(BIN_DIR)/deepcopy-gen
# Find all the directories that request deep-copy generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:deepcopy-gen tags)
endif
DEEPCOPY_DIRS := $(shell \
grep --color=never -l '+k8s:deepcopy-gen=' $(ALL_K8S_TAG_FILES) \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
DEEPCOPY_FILES := $(addsuffix /$(DEEPCOPY_FILENAME), $(DEEPCOPY_DIRS))
# Shell function for reuse in rules.
RUN_GEN_DEEPCOPY = \
function run_gen_deepcopy() { \
if [[ -f $(META_DIR)/$(DEEPCOPY_GEN).todo ]]; then \
./hack/run-in-gopath.sh $(DEEPCOPY_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i $$(cat $(META_DIR)/$(DEEPCOPY_GEN).todo | paste -sd, -) \
--bounding-dirs $(PRJ_SRC_PATH) \
-O $(DEEPCOPY_BASENAME) \
"$$@"; \
fi \
}; \
run_gen_deepcopy
# This rule aggregates the set of files to generate and then generates them all
# in a single run of the tool.
.PHONY: gen_deepcopy
gen_deepcopy: $(DEEPCOPY_FILES) $(DEEPCOPY_GEN)
$(RUN_GEN_DEEPCOPY)
.PHONY: verify_gen_deepcopy
verify_gen_deepcopy: $(DEEPCOPY_GEN)
$(RUN_GEN_DEEPCOPY) --verify-only
# For each dir in DEEPCOPY_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(DEEPCOPY_DIRS), $(eval \
$(dir)/$(DEEPCOPY_FILENAME): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# Unilaterally remove any leftovers from previous runs.
$(shell rm -f $(META_DIR)/$(DEEPCOPY_GEN)*.todo)
# How to regenerate deep-copy code. This is a little slow to run, so we batch
# it up and trigger the batch from the 'generated_files' target.
$(DEEPCOPY_FILES): $(DEEPCOPY_GEN)
mkdir -p $$(dirname $(META_DIR)/$(DEEPCOPY_GEN))
echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(DEEPCOPY_GEN).todo
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(DEEPCOPY_GEN).mk
$(META_DIR)/$(DEEPCOPY_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(DEEPCOPY_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./cmd/libs/go2idl/deepcopy-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
cmp -s $@.tmp $@ || cat $@.tmp > $@ && rm -f $@.tmp
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(DEEPCOPY_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(DEEPCOPY_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(DEEPCOPY_GEN):
hack/make-rules/build.sh cmd/libs/go2idl/deepcopy-gen
touch $@
#
# Defaulter generation
#
# Any package that wants defaulter functions generated must include a
# comment-tag in column 0 of one file of the form:
# // +k8s:defaulter-gen=<VALUE>
#
# The <VALUE> depends on context:
# on types:
# true: always generate a defaulter for this type
# false: never generate a defaulter for this type
# on functions:
# covers: if the function name matches SetDefault_NAME, instructs
# the generator not to recurse
# on packages:
# FIELDNAME: any object with a field of this name is a candidate
# for having a defaulter generated
# The result file, in each pkg, of defaulter generation.
DEFAULTER_BASENAME := $(GENERATED_FILE_PREFIX)defaults
DEFAULTER_FILENAME := $(DEFAULTER_BASENAME).go
# The tool used to generate defaulters.
DEFAULTER_GEN := $(BIN_DIR)/defaulter-gen
# All directories that request any form of defaulter generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:defaulter-gen tags)
endif
DEFAULTER_DIRS := $(shell \
grep --color=never -l '+k8s:defaulter-gen=' $(ALL_K8S_TAG_FILES) \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
DEFAULTER_FILES := $(addsuffix /$(DEFAULTER_FILENAME), $(DEFAULTER_DIRS))
RUN_GEN_DEFAULTER := \
function run_gen_defaulter() { \
if [[ -f $(META_DIR)/$(DEFAULTER_GEN).todo ]]; then \
./hack/run-in-gopath.sh $(DEFAULTER_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i $$(cat $(META_DIR)/$(DEFAULTER_GEN).todo | paste -sd, -) \
--extra-peer-dirs $$(echo $(addprefix $(PRJ_SRC_PATH)/, $(DEFAULTER_DIRS)) | sed 's/ /,/g') \
-O $(DEFAULTER_BASENAME) \
"$$@"; \
fi \
}; \
run_gen_defaulter
# This rule aggregates the set of files to generate and then generates them all
# in a single run of the tool.
.PHONY: gen_defaulter
gen_defaulter: $(DEFAULTER_FILES) $(DEFAULTER_GEN)
$(RUN_GEN_DEFAULTER)
.PHONY: verify_gen_defaulter
verify_gen_defaulter: $(DEFAULTER_GEN)
$(RUN_GEN_DEFAULTER) --verify-only
# For each dir in DEFAULTER_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(DEFAULTER_DIRS), $(eval \
$(dir)/$(DEFAULTER_FILENAME): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# For each dir in DEFAULTER_DIRS, for each target in $(defaulters__$(dir)),
# this establishes a dependency between the output file and the input files
# that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(DEFAULTER_DIRS), \
$(foreach tgt, $(defaulters__$(dir)), $(eval \
$(dir)/$(DEFAULTER_FILENAME): $(META_DIR)/$(tgt)/$(GOFILES_META).stamp \
$(gofiles__$(tgt)) \
)) \
)
# Unilaterally remove any leftovers from previous runs.
$(shell rm -f $(META_DIR)/$(DEFAULTER_GEN)*.todo)
# How to regenerate defaulter code. This is a little slow to run, so we batch
# it up and trigger the batch from the 'generated_files' target.
$(DEFAULTER_FILES): $(DEFAULTER_GEN)
mkdir -p $$(dirname $(META_DIR)/$(DEFAULTER_GEN))
echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(DEFAULTER_GEN).todo
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(DEFAULTER_GEN).mk
$(META_DIR)/$(DEFAULTER_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(DEFAULTER_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./cmd/libs/go2idl/defaulter-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
cmp -s $@.tmp $@ || cat $@.tmp > $@ && rm -f $@.tmp
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(DEFAULTER_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(DEFAULTER_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(DEFAULTER_GEN):
hack/make-rules/build.sh cmd/libs/go2idl/defaulter-gen
touch $@
#
# Open-api generation
#
# Any package that wants open-api functions generated must include a
# comment-tag in column 0 of one file of the form:
# // +k8s:openapi-gen=true
#
# The result file, in each pkg, of open-api generation.
OPENAPI_BASENAME := $(GENERATED_FILE_PREFIX)openapi
OPENAPI_FILENAME := $(OPENAPI_BASENAME).go
OPENAPI_OUTPUT_PKG := pkg/generated/openapi
# The tool used to generate open apis.
OPENAPI_GEN := $(BIN_DIR)/openapi-gen
# Find all the directories that request open-api generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:openapi-gen tags)
endif
OPENAPI_DIRS := $(shell \
grep --color=never -l '+k8s:openapi-gen=' $(ALL_K8S_TAG_FILES) \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
OPENAPI_OUTFILE := $(OPENAPI_OUTPUT_PKG)/$(OPENAPI_FILENAME)
# Shell function for reuse in rules.
RUN_GEN_OPENAPI = \
function run_gen_openapi() { \
./hack/run-in-gopath.sh $(OPENAPI_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i $$(echo $(addprefix $(PRJ_SRC_PATH)/, $(OPENAPI_DIRS)) | sed 's/ /,/g') \
-p $(PRJ_SRC_PATH)/$(OPENAPI_OUTPUT_PKG) \
-O $(OPENAPI_BASENAME) \
"$$@"; \
}; \
run_gen_openapi
# This rule is the user-friendly entrypoint for openapi generation.
.PHONY: gen_openapi
gen_openapi: $(OPENAPI_OUTFILE) $(OPENAPI_GEN)
.PHONY: verify_gen_openapi
verify_gen_openapi: $(OPENAPI_GEN)
$(RUN_GEN_OPENAPI) --verify-only
# For each dir in OPENAPI_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(OPENAPI_DIRS), $(eval \
$(OPENAPI_OUTFILE): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# How to regenerate open-api code. This emits a single file for all results.
$(OPENAPI_OUTFILE): $(OPENAPI_GEN)
$(RUN_GEN_OPENAPI)
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(OPENAPI_GEN).mk
$(META_DIR)/$(OPENAPI_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(OPENAPI_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./cmd/libs/go2idl/openapi-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
cmp -s $@.tmp $@ || cat $@.tmp > $@ && rm -f $@.tmp
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(OPENAPI_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(OPENAPI_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(OPENAPI_GEN):
hack/make-rules/build.sh cmd/libs/go2idl/openapi-gen
touch $@
#
# Conversion generation
#
# Any package that wants conversion functions generated must include one or
# more comment-tags in any .go file, in column 0, of the form:
# // +k8s:conversion-gen=<CONVERSION_TARGET_DIR>
#
# The CONVERSION_TARGET_DIR is a project-local path to another directory which
# should be considered when evaluating peer types for conversions. Types which
# are found in the source package (where conversions are being generated)
# but do not have a peer in one of the target directories will not have
# conversions generated.
#
# TODO: it might be better in the long term to make peer-types explicit in the
# IDL.
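#
# For illustration only (the package and peer paths here are hypothetical): a
# package opting in might carry, in one of its .go files,
#
#   // +k8s:conversion-gen=pkg/api
#
# which asks the generator to consider types under pkg/api as conversion peers
# for that package.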
# The result file, in each pkg, of conversion generation.
CONVERSION_BASENAME := $(GENERATED_FILE_PREFIX)conversion
CONVERSION_FILENAME := $(CONVERSION_BASENAME).go
# The tool used to generate conversions.
CONVERSION_GEN := $(BIN_DIR)/conversion-gen
# The name of the metadata file listing conversion peers for each pkg.
CONVERSIONS_META := conversions.mk
# All directories that request any form of conversion generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:conversion-gen tags)
endif
CONVERSION_DIRS := $(shell \
grep --color=never '^// *+k8s:conversion-gen=' $(ALL_K8S_TAG_FILES) \
| cut -f1 -d: \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
CONVERSION_FILES := $(addsuffix /$(CONVERSION_FILENAME), $(CONVERSION_DIRS))
# Shell function for reuse in rules.
RUN_GEN_CONVERSION = \
function run_gen_conversion() { \
if [[ -f $(META_DIR)/$(CONVERSION_GEN).todo ]]; then \
./hack/run-in-gopath.sh $(CONVERSION_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i $$(cat $(META_DIR)/$(CONVERSION_GEN).todo | paste -sd, -) \
-O $(CONVERSION_BASENAME) \
"$$@"; \
fi \
}; \
run_gen_conversion
# This rule aggregates the set of files to generate and then generates them all
# in a single run of the tool.
.PHONY: gen_conversion
gen_conversion: $(CONVERSION_FILES) $(CONVERSION_GEN)
$(RUN_GEN_CONVERSION)
.PHONY: verify_gen_conversion
verify_gen_conversion: $(CONVERSION_GEN)
$(RUN_GEN_CONVERSION) --verify-only
# Establish a dependency between the deps file and the dir. Whenever a dir
# changes (files added or removed) the deps file will be considered stale.
#
# This is looser than we really need (e.g. we don't really care about non *.go
# files or even *_test.go files), but this is much easier to represent.
#
# Because we 'sinclude' the deps file, it is considered for rebuilding, as part
# of make's normal evaluation. If it gets rebuilt, make will restart.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
$(foreach dir, $(CONVERSION_DIRS), $(eval \
$(META_DIR)/$(dir)/$(CONVERSIONS_META): $(dir) \
))
# How to rebuild a deps file. When make determines that the deps file is stale
# (see above), it executes this rule, and then re-loads the deps file.
#
# This is looser than we really need (e.g. we don't really care about test
# files), but this is MUCH faster than calling `go list`.
#
# We regenerate the output file in order to satisfy make's "newer than" rules,
# but we only need to rebuild targets if the contents actually changed. That
# is what the .stamp file represents.
$(foreach dir, $(CONVERSION_DIRS), \
$(META_DIR)/$(dir)/$(CONVERSIONS_META)):
TAGS=$$(grep --color=never -h '^// *+k8s:conversion-gen=' $</*.go \
| cut -f2- -d= \
| sed 's|$(PRJ_SRC_PATH)/||'); \
mkdir -p $(@D); \
echo "conversions__$< := $$(echo $${TAGS})" >$@.tmp; \
cmp -s $@.tmp $@ || touch $@.stamp; \
mv $@.tmp $@
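# For illustration only (paths are hypothetical): if pkg/apis/foo carries the
# tag '// +k8s:conversion-gen=pkg/api', the deps file generated above contains
# a single line such as
#
#   conversions__pkg/apis/foo := pkg/api
#
# which is consumed by the peer-type dependency loop further below.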
# Include any deps files as additional Makefile rules. This triggers make to
# consider the deps files for rebuild, which makes the whole
# dependency-management logic work. 'sinclude' is "silent include" which does
# not fail if the file does not exist.
$(foreach dir, $(CONVERSION_DIRS), $(eval \
sinclude $(META_DIR)/$(dir)/$(CONVERSIONS_META) \
))
# For each dir in CONVERSION_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(CONVERSION_DIRS), $(eval \
$(dir)/$(CONVERSION_FILENAME): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# For each dir in CONVERSION_DIRS, for each target in $(conversions__$(dir)),
# this establishes a dependency between the output file and the input files
# that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(CONVERSION_DIRS), \
$(foreach tgt, $(conversions__$(dir)), $(eval \
$(dir)/$(CONVERSION_FILENAME): $(META_DIR)/$(tgt)/$(GOFILES_META).stamp \
$(gofiles__$(tgt)) \
)) \
)
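# For illustration only (continuing the hypothetical pkg/apis/foo example with
# peer dir pkg/api): the two loops above expand to deps-only statements like
#
#   pkg/apis/foo/$(CONVERSION_FILENAME): \
#       $(META_DIR)/pkg/apis/foo/$(GOFILES_META).stamp $(gofiles__pkg/apis/foo)
#   pkg/apis/foo/$(CONVERSION_FILENAME): \
#       $(META_DIR)/pkg/api/$(GOFILES_META).stamp $(gofiles__pkg/api)
#
# so the generated file is stale whenever either its own package or any of its
# conversion peers change.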
# Unilaterally remove any leftovers from previous runs.
$(shell rm -f $(META_DIR)/$(CONVERSION_GEN)*.todo)
# How to regenerate conversion code. This is a little slow to run, so we batch
# it up and trigger the batch from the 'generated_files' target.
$(CONVERSION_FILES): $(CONVERSION_GEN)
mkdir -p $$(dirname $(META_DIR)/$(CONVERSION_GEN))
echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(CONVERSION_GEN).todo
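# For illustration only (package names are hypothetical): if pkg/apis/foo and
# pkg/apis/bar both need regeneration, the recipe above leaves a .todo file
# containing one import path per line, e.g.
#
#   $(PRJ_SRC_PATH)/pkg/apis/foo
#   $(PRJ_SRC_PATH)/pkg/apis/bar
#
# and RUN_GEN_CONVERSION later joins those lines with commas into a single -i
# argument, so the generator runs once for the whole batch.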
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(CONVERSION_GEN).mk
$(META_DIR)/$(CONVERSION_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(CONVERSION_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./cmd/libs/go2idl/conversion-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
cmp -s $@.tmp $@ || cat $@.tmp > $@ && rm -f $@.tmp
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(CONVERSION_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(CONVERSION_GEN).mk, above.
#
# A word on the need to touch: this rule might trigger if, for example, a
# non-Go file was added to or deleted from a directory this target depends on.
# Make then re-runs this target, but 'go build' sees that the binary does not
# actually need to be rebuilt and leaves its timestamp unchanged. Make would
# then forever see the dependency as newer than the binary and try to rebuild
# it over and over, so we touch the binary to mark it as up to date.
$(CONVERSION_GEN):
hack/make-rules/build.sh cmd/libs/go2idl/conversion-gen
touch $@

8
vendor/k8s.io/kubernetes/OWNERS generated vendored Normal file
View file

@ -0,0 +1,8 @@
assignees:
- bgrant0607
- brendandburns
- dchen1107
- jbeda
- lavalamp
- smarterclayton
- thockin

31
vendor/k8s.io/kubernetes/OWNERS_ALIASES generated vendored Normal file
View file

@ -0,0 +1,31 @@
aliases:
sig-cli-maintainers:
- adohe
- brendandburns
- deads2k
- fabianofranz
- janetkuo
- pwittrock
- smarterclayton
sig-cli:
- adohe
- deads2k
- derekwaynecarr
- dims
- dshulyak
- eparis
- ericchiang
- fabianofranz
- ghodss
- kargakis
- pwittrock
- rootfs
- smarterclayton
- soltysh
- sttts
- ymqytw
test-infra-maintainers:
- fejta
- ixdy
- rmmh
- spxtr

92
vendor/k8s.io/kubernetes/README.md generated vendored Normal file
View file

@ -0,0 +1,92 @@
# Kubernetes
[![Submit Queue Widget]][Submit Queue] [![GoDoc Widget]][GoDoc] [![Coverage Status Widget]][Coverage Status]
<img src="https://github.com/kubernetes/kubernetes/raw/master/logo/logo.png" width="100">
[Submit Queue]: http://submit-queue.k8s.io/#/e2e
[Submit Queue Widget]: http://submit-queue.k8s.io/health.svg?v=1
[GoDoc]: https://godoc.org/k8s.io/kubernetes
[GoDoc Widget]: https://godoc.org/k8s.io/kubernetes?status.svg
[Coverage Status]: https://coveralls.io/r/kubernetes/kubernetes
[Coverage Status Widget]: https://coveralls.io/repos/kubernetes/kubernetes/badge.svg
## Introduction
Kubernetes is an open source system for managing [containerized applications](http://kubernetes.io/docs/whatisk8s/) across multiple hosts,
providing basic mechanisms for deployment, maintenance, and scaling of applications. Kubernetes is hosted by the Cloud Native Computing Foundation ([CNCF](https://www.cncf.io)).
Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called [Borg](https://research.google.com/pubs/pub43438.html), combined with best-of-breed ideas and practices from the community.
<hr>
### Are you ...
* Interested in learning more about using Kubernetes?
- See our documentation on [kubernetes.io](http://kubernetes.io)
- Try our [interactive tutorial](http://kubernetes.io/docs/tutorials/kubernetes-basics/)
- Take a free course on [Scalable Microservices with Kubernetes](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615).
* Interested in developing the core Kubernetes code base, developing tools using the Kubernetes API, or helping in any way possible? Keep reading!
## Code of Conduct
The Kubernetes community abides by the CNCF [code of conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). Here is an excerpt:
_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._
## Community
Do you want to help "shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented"? If you are a company, you should consider joining the [CNCF](https://cncf.io/about). For details about who's involved in CNCF and how Kubernetes plays a role, read [the announcement](https://cncf.io/news/announcement/2015/07/new-cloud-native-computing-foundation-drive-alignment-among-container). For general information about our community, see the [community](http://kubernetes.io/community/) page on our website.
Join us on social media ([Twitter](https://twitter.com/kubernetesio), [Google+](https://plus.google.com/u/0/b/116512812300813784482/116512812300813784482)) and read our [blog](http://blog.kubernetes.io/).
Ask questions and help answer them on [Slack](http://slack.k8s.io/) or [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes).
Attend our key events ([KubeCon](http://events.linuxfoundation.org/events/kubecon), [CloudNativeCon](http://events.linuxfoundation.org/events/cloudnativecon), and the weekly [community meeting](https://github.com/kubernetes/community/blob/master/community/README.md)).
Join a Special Interest Group ([SIG](https://github.com/kubernetes/community)).
## Contribute
If you're interested in being a contributor and want to get involved in developing Kubernetes, start with the following reading:
* The community [expectations](https://github.com/kubernetes/community/blob/master/contributors/devel/community-expectations.md)
* The [contributor guidelines](CONTRIBUTING.md)
* The [Kubernetes Developer Guide](https://github.com/kubernetes/community/tree/master/contributors/devel)
You will then most certainly gain a lot from joining a [SIG](https://github.com/kubernetes/community), attending the regular hangouts as well as the community [meeting](https://github.com/kubernetes/community/blob/master/community/README.md).
If you have an idea for a new feature, see the [Kubernetes Features](https://github.com/kubernetes/features) repository for a list of features that are coming in new releases as well as details on how to propose one.
### Building Kubernetes for the impatient
If you want to build Kubernetes right away, there are two options:
* You have a working [Go environment](https://golang.org/doc/install).
```
$ go get -d k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ make
```
* You have a working [Docker environment](https://docs.docker.com/engine/).
```
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make quick-release
```
If you are less impatient, head over to the [developer's documentation](https://github.com/kubernetes/community/tree/master/contributors/devel).
## Support
While there are many different channels that you can use to get hold of us ([Slack](http://slack.k8s.io/), [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes), [Issues](https://github.com/kubernetes/kubernetes/issues/new), [Forums/Mailing lists](https://groups.google.com/forum/#!forum/kubernetes-users)), you can help make sure that we are efficient in getting you the help that you need.
If you need support, start with the [troubleshooting guide](http://kubernetes.io/docs/troubleshooting/) and work your way through the process that we've outlined.
That said, if you have questions, reach out to us one way or another. We don't bite!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/README.md?pixel)]()

325
vendor/k8s.io/kubernetes/Vagrantfile generated vendored Normal file
View file

@ -0,0 +1,325 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
# Require a recent version of Vagrant; otherwise some users have reported errors setting host names on boxes.
Vagrant.require_version ">= 1.7.4"
if ARGV.first == "up" && ENV['USING_KUBE_SCRIPTS'] != 'true'
raise Vagrant::Errors::VagrantError.new, <<END
Calling 'vagrant up' directly is not supported. Instead, please run the following:
export KUBERNETES_PROVIDER=vagrant
export VAGRANT_DEFAULT_PROVIDER=providername
./cluster/kube-up.sh
END
end
# The number of nodes to provision
$num_node = (ENV['NUM_NODES'] || 1).to_i
# ip configuration
$master_ip = ENV['MASTER_IP']
$node_ip_base = ENV['NODE_IP_BASE'] || ""
$node_ips = $num_node.times.collect { |n| $node_ip_base + "#{n+3}" }
# Determine the OS platform to use
$kube_os = ENV['KUBERNETES_OS'] || "fedora"
# Determine whether vagrant should use nfs to sync folders
$use_nfs = ENV['KUBERNETES_VAGRANT_USE_NFS'] == 'true'
# Determine whether vagrant should use rsync to sync folders
$use_rsync = ENV['KUBERNETES_VAGRANT_USE_RSYNC'] == 'true'
# To override the vagrant provider, use (e.g.):
# KUBERNETES_PROVIDER=vagrant VAGRANT_DEFAULT_PROVIDER=... .../cluster/kube-up.sh
# To override the box, use (e.g.):
# KUBERNETES_PROVIDER=vagrant KUBERNETES_BOX_NAME=... .../cluster/kube-up.sh
# You can specify a box version:
# KUBERNETES_PROVIDER=vagrant KUBERNETES_BOX_NAME=... KUBERNETES_BOX_VERSION=... .../cluster/kube-up.sh
# You can specify a box location:
# KUBERNETES_PROVIDER=vagrant KUBERNETES_BOX_NAME=... KUBERNETES_BOX_URL=... .../cluster/kube-up.sh
# KUBERNETES_BOX_URL and KUBERNETES_BOX_VERSION will be ignored unless
# KUBERNETES_BOX_NAME is set
# Default OS platform to provider/box information
$kube_provider_boxes = {
:parallels => {
'fedora' => {
# :box_url and :box_version are optional (and mutually exclusive);
# if :box_url is omitted the box will be retrieved by :box_name (and
# :box_version if provided) from
# http://atlas.hashicorp.com/boxes/search (formerly
# http://vagrantcloud.com/); this allows you override :box_name with
# your own value so long as you provide :box_url; for example, the
# "official" name of this box is "rickard-von-essen/
# opscode_fedora-20", but by providing the URL and our own name, we
# make it appear as yet another provider under the "kube-fedora22"
# box
:box_name => 'kube-fedora23',
:box_url => 'https://opscode-vm-bento.s3.amazonaws.com/vagrant/parallels/opscode_fedora-23_chef-provisionerless.box'
}
},
:virtualbox => {
'fedora' => {
:box_name => 'kube-fedora23',
:box_url => 'https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_fedora-23_chef-provisionerless.box'
}
},
:libvirt => {
'fedora' => {
:box_name => 'kube-fedora23',
:box_url => 'https://dl.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-23-20151030.x86_64.vagrant-libvirt.box'
}
},
:vmware_desktop => {
'fedora' => {
:box_name => 'kube-fedora23',
:box_url => 'https://opscode-vm-bento.s3.amazonaws.com/vagrant/vmware/opscode_fedora-23_chef-provisionerless.box'
}
},
:vsphere => {
'fedora' => {
:box_name => 'vsphere-dummy',
:box_url => 'https://github.com/deromka/vagrant-vsphere/blob/master/vsphere-dummy.box?raw=true'
}
}
}
# Give access to all physical cpu cores
# Previously cargo-culted from here:
# http://www.stefanwrobel.com/how-to-make-vagrant-performance-not-suck
# Rewritten to actually determine the number of hardware cores instead of assuming
# that the host has hyperthreading enabled.
host = RbConfig::CONFIG['host_os']
if host =~ /darwin/
$vm_cpus = `sysctl -n hw.physicalcpu`.to_i
elsif host =~ /linux/
# This should work on most processors; however, it will fail on ones without the core id field.
# So far I have only seen this on a Raspberry Pi, which you probably don't want to run Vagrant on anyhow...
# But just in case, we'll fall back to the result of nproc if we get 0.
$vm_cpus = `cat /proc/cpuinfo | grep 'core id' | sort -u | wc -l`.to_i
if $vm_cpus < 1
$vm_cpus = `nproc`.to_i
end
else # sorry Windows folks, I can't help you
$vm_cpus = 2
end
# Give each VM a generous amount of RAM by default (1280MB for the master, 2048MB per node; see below).
# In the Fedora VM, the tmpfs device is mapped to /tmp, and tmpfs is given 50% of the RAM allocation.
# When doing Salt provisioning, we copy approximately 200MB of content into /tmp before anything else happens.
# This causes problems if anything else was in /tmp or in the other directories that are bound to the tmpfs device (e.g. /run).
$vm_master_mem = (ENV['KUBERNETES_MASTER_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1280).to_i
$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 2048).to_i
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
if Vagrant.has_plugin?("vagrant-proxyconf")
$http_proxy = ENV['KUBERNETES_HTTP_PROXY'] || ""
$https_proxy = ENV['KUBERNETES_HTTPS_PROXY'] || ""
$no_proxy = ENV['KUBERNETES_NO_PROXY'] || "127.0.0.1"
config.proxy.http = $http_proxy
config.proxy.https = $https_proxy
config.proxy.no_proxy = $no_proxy
end
# this corrects a bug in 1.8.5 where an invalid SSH key is inserted.
if Vagrant::VERSION == "1.8.5"
config.ssh.insert_key = false
end
def setvmboxandurl(config, provider)
if ENV['KUBERNETES_BOX_NAME'] then
config.vm.box = ENV['KUBERNETES_BOX_NAME']
if ENV['KUBERNETES_BOX_URL'] then
config.vm.box_url = ENV['KUBERNETES_BOX_URL']
end
if ENV['KUBERNETES_BOX_VERSION'] then
config.vm.box_version = ENV['KUBERNETES_BOX_VERSION']
end
else
config.vm.box = $kube_provider_boxes[provider][$kube_os][:box_name]
if $kube_provider_boxes[provider][$kube_os][:box_url] then
config.vm.box_url = $kube_provider_boxes[provider][$kube_os][:box_url]
end
if $kube_provider_boxes[provider][$kube_os][:box_version] then
config.vm.box_version = $kube_provider_boxes[provider][$kube_os][:box_version]
end
end
end
def customize_vm(config, vm_mem)
if $use_nfs then
config.vm.synced_folder ".", "/vagrant", nfs: true
elsif $use_rsync then
opts = {}
if ENV['KUBERNETES_VAGRANT_RSYNC_ARGS'] then
opts[:rsync__args] = ENV['KUBERNETES_VAGRANT_RSYNC_ARGS'].split(" ")
end
if ENV['KUBERNETES_VAGRANT_RSYNC_EXCLUDE'] then
opts[:rsync__exclude] = ENV['KUBERNETES_VAGRANT_RSYNC_EXCLUDE'].split(" ")
end
config.vm.synced_folder ".", "/vagrant", opts
end
# Try VMWare Fusion first (see
# https://docs.vagrantup.com/v2/providers/basic_usage.html)
config.vm.provider :vmware_fusion do |v, override|
setvmboxandurl(override, :vmware_desktop)
v.vmx['memsize'] = vm_mem
v.vmx['numvcpus'] = $vm_cpus
end
# configure libvirt provider
config.vm.provider :libvirt do |v, override|
setvmboxandurl(override, :libvirt)
v.memory = vm_mem
v.cpus = $vm_cpus
v.nested = true
v.volume_cache = 'none'
end
# Then try VMWare Workstation
config.vm.provider :vmware_workstation do |v, override|
setvmboxandurl(override, :vmware_desktop)
v.vmx['memsize'] = vm_mem
v.vmx['numvcpus'] = $vm_cpus
end
# Then try Parallels
config.vm.provider :parallels do |v, override|
setvmboxandurl(override, :parallels)
v.memory = vm_mem # v.customize ['set', :id, '--memsize', vm_mem]
v.cpus = $vm_cpus # v.customize ['set', :id, '--cpus', $vm_cpus]
# Don't attempt to update the Parallels tools on the image (this can
# be done manually if necessary)
v.update_guest_tools = false # v.customize ['set', :id, '--tools-autoupdate', 'off']
# Set up Parallels folder sharing to behave like VirtualBox (i.e.,
# mount the current directory as /vagrant and that's it)
v.customize ['set', :id, '--shf-guest', 'off']
v.customize ['set', :id, '--shf-guest-automount', 'off']
v.customize ['set', :id, '--shf-host', 'on']
# Synchronize the VM clock to the host clock (to avoid certificate validity issues)
v.customize ['set', :id, '--time-sync', 'on']
# Remove all auto-mounted "shared folders"; the result seems to
# persist between runs (i.e., vagrant halt && vagrant up)
override.vm.provision :shell, :inline => (%q{
set -ex
if [ -d /media/psf ]; then
for i in /media/psf/*; do
if [ -d "${i}" ]; then
umount "${i}" || true
rmdir -v "${i}"
fi
done
rmdir -v /media/psf
fi
exit
}).strip
end
# Then try vsphere
config.vm.provider :vsphere do |vsphere, override|
setvmboxandurl(override, :vsphere)
#config.vm.hostname = ENV['MASTER_NAME']
config.ssh.username = ENV['MASTER_USER']
config.ssh.password = ENV['MASTER_PASSWD']
config.ssh.pty = true
config.ssh.insert_key = true
#config.ssh.private_key_path = '~/.ssh/id_rsa_vsphere'
# Don't attempt to update the tools on the image (this can
# be done manually if necessary)
# vsphere.update_guest_tools = false # v.customize ['set', :id, '--tools-autoupdate', 'off']
# The vSphere host we're going to connect to
vsphere.host = ENV['VAGRANT_VSPHERE_URL']
# The ESX host for the new VM
vsphere.compute_resource_name = ENV['VAGRANT_VSPHERE_RESOURCE_POOL']
# The resource pool for the new VM
#vsphere.resource_pool_name = 'Comp'
# Path to the folder where the new VM should be created; if not specified, the template's parent folder will be used
vsphere.vm_base_path = ENV['VAGRANT_VSPHERE_BASE_PATH']
# The template we're going to clone
vsphere.template_name = ENV['VAGRANT_VSPHERE_TEMPLATE_NAME']
# The name of the new machine
#vsphere.name = ENV['MASTER_NAME']
# vSphere login
vsphere.user = ENV['VAGRANT_VSPHERE_USERNAME']
# vSphere password
vsphere.password = ENV['VAGRANT_VSPHERE_PASSWORD']
# cpu count
vsphere.cpu_count = $vm_cpus
# memory in MB
vsphere.memory_mb = vm_mem
# If you don't have SSL configured correctly, set this to 'true'
vsphere.insecure = ENV['VAGRANT_VSPHERE_INSECURE']
end
# Don't attempt to update Virtualbox Guest Additions (requires gcc)
if Vagrant.has_plugin?("vagrant-vbguest") then
config.vbguest.auto_update = false
end
# Finally, fall back to VirtualBox
config.vm.provider :virtualbox do |v, override|
setvmboxandurl(override, :virtualbox)
v.memory = vm_mem # v.customize ["modifyvm", :id, "--memory", vm_mem]
v.cpus = $vm_cpus # v.customize ["modifyvm", :id, "--cpus", $vm_cpus]
# Use faster paravirtualized networking
v.customize ["modifyvm", :id, "--nictype1", "virtio"]
v.customize ["modifyvm", :id, "--nictype2", "virtio"]
end
end
# Kubernetes master
config.vm.define "master" do |c|
customize_vm c, $vm_master_mem
if ENV['KUBE_TEMP'] then
script = "#{ENV['KUBE_TEMP']}/master-start.sh"
c.vm.provision "shell", run: "always", path: script
end
c.vm.network "private_network", ip: "#{$master_ip}"
end
# Kubernetes node
$num_node.times do |n|
node_vm_name = "node-#{n+1}"
config.vm.define node_vm_name do |node|
customize_vm node, $vm_node_mem
node_ip = $node_ips[n]
if ENV['KUBE_TEMP'] then
script = "#{ENV['KUBE_TEMP']}/node-start-#{n}.sh"
node.vm.provision "shell", run: "always", path: script
end
node.vm.network "private_network", ip: "#{node_ip}"
end
end
end

61
vendor/k8s.io/kubernetes/WORKSPACE generated vendored Normal file
View file

@ -0,0 +1,61 @@
git_repository(
name = "io_bazel_rules_go",
commit = "d0142854a22a0dd98306280e897e64086289a0de",
remote = "https://github.com/bazelbuild/rules_go.git",
)
git_repository(
name = "io_kubernetes_build",
commit = "418b8e976cb32d94fd765c80f2b04e660c5ec4ec",
remote = "https://github.com/kubernetes/release.git",
)
load("@io_bazel_rules_go//go:def.bzl", "go_repositories")
go_repositories()
# for building docker base images
debs = (
(
"busybox_deb",
"f262cc9cf893740bb70c3dd01da9429b858c94be696badd4a702e0a8c7f6f80b",
"http://ftp.us.debian.org/debian/pool/main/b/busybox/busybox-static_1.22.0-19+b1_amd64.deb",
),
(
"libc_deb",
"ee4d9dea08728e2c2bbf43d819c3c7e61798245fab4b983ae910865980f791ad",
"http://ftp.us.debian.org/debian/pool/main/g/glibc/libc6_2.19-18+deb8u6_amd64.deb",
),
(
"iptables_deb",
"7747388a97ba71fede302d70361c81d486770a2024185514c18b5d8eab6aaf4e",
"http://ftp.us.debian.org/debian/pool/main/i/iptables/iptables_1.4.21-2+b1_amd64.deb",
),
(
"libnetlink_deb",
"5d486022cd9e047e9afbb1617cf4519c0decfc3d2c1fad7e7fe5604943dbbf37",
"http://ftp.us.debian.org/debian/pool/main/libn/libnfnetlink/libnfnetlink0_1.0.1-3_amd64.deb",
),
(
"libxtables_deb",
"6783f316af4cbf3ada8b9a2b7bb5f53a87c0c2575c1903ce371fdbd45d3626c6",
"http://ftp.us.debian.org/debian/pool/main/i/iptables/libxtables10_1.4.21-2+b1_amd64.deb",
),
(
"iproute2_deb",
"3ce9cb1d03a2a1359cbdd4f863b15d0c906096bf713e8eb688149da2f4e350bc",
"http://ftp.us.debian.org/debian/pool/main/i/iproute2/iproute_3.16.0-2_all.deb",
),
)
[http_file(
name = name,
sha256 = sha256,
url = url,
) for name, sha256, url in debs]
http_file(
name = "kubernetes_cni",
sha256 = "ddcb7a429f82b284a13bdb36313eeffd997753b6fa5191205f1e978dcfeb0792",
url = "https://storage.googleapis.com/kubernetes-release/network-plugins/cni-amd64-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz",
)

20
vendor/k8s.io/kubernetes/api/BUILD generated vendored Normal file
View file

@ -0,0 +1,20 @@
package(default_visibility = ["//visibility:public"])
licenses(["notice"])
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//api/openapi-spec:all-srcs",
"//api/swagger-spec:all-srcs",
],
tags = ["automanaged"],
)

21
vendor/k8s.io/kubernetes/api/openapi-spec/BUILD generated vendored Normal file
View file

@ -0,0 +1,21 @@
package(default_visibility = ["//visibility:public"])
filegroup(
name = "swagger-spec",
srcs = glob([
"**/*.json",
]),
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

38874
vendor/k8s.io/kubernetes/api/openapi-spec/swagger.json generated vendored Normal file

File diff suppressed because it is too large Load diff

21
vendor/k8s.io/kubernetes/api/swagger-spec/BUILD generated vendored Normal file
View file

@ -0,0 +1,21 @@
package(default_visibility = ["//visibility:public"])
filegroup(
name = "swagger-spec",
srcs = glob([
"**/*.json",
]),
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

87
vendor/k8s.io/kubernetes/api/swagger-spec/api.json generated vendored Normal file
View file

@ -0,0 +1,87 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/api",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/api",
"description": "get available API versions",
"operations": [
{
"type": "v1.APIVersions",
"method": "GET",
"summary": "get available API versions",
"nickname": "getAPIVersions",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIVersions": {
"id": "v1.APIVersions",
"description": "APIVersions lists the versions that are available, to allow clients to discover the API at /api, which is the root path of the legacy v1 API.",
"required": [
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"versions": {
"type": "array",
"items": {
"type": "string"
},
"description": "versions are the api versions that are available."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

138
vendor/k8s.io/kubernetes/api/swagger-spec/apis.json generated vendored Normal file
View file

@ -0,0 +1,138 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis",
"description": "get available API versions",
"operations": [
{
"type": "v1.APIGroupList",
"method": "GET",
"summary": "get available API versions",
"nickname": "getAPIVersions",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroupList": {
"id": "v1.APIGroupList",
"description": "APIGroupList is a list of APIGroup, to allow clients to discover the API at /apis.",
"required": [
"groups"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"groups": {
"type": "array",
"items": {
"$ref": "v1.APIGroup"
},
"description": "groups is a list of APIGroup."
}
}
},
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

114
vendor/k8s.io/kubernetes/api/swagger-spec/apps.json generated vendored Normal file
View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/apps",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/apps",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/authentication.k8s.io",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/authentication.k8s.io",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

View file

@ -0,0 +1 @@

View file

@ -0,0 +1,329 @@
{
"swaggerVersion": "1.2",
"apiVersion": "authentication.k8s.io/v1beta1",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/authentication.k8s.io/v1beta1",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/authentication.k8s.io/v1beta1/tokenreviews",
"description": "API at /apis/authentication.k8s.io/v1beta1",
"operations": [
{
"type": "v1beta1.TokenReview",
"method": "POST",
"summary": "create a TokenReview",
"nickname": "createTokenReview",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1beta1.TokenReview",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.TokenReview"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/authentication.k8s.io/v1beta1",
"description": "API at /apis/authentication.k8s.io/v1beta1",
"operations": [
{
"type": "v1.APIResourceList",
"method": "GET",
"summary": "get available resources",
"nickname": "getAPIResources",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1beta1.TokenReview": {
"id": "v1beta1.TokenReview",
"description": "TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver.",
"required": [
"spec"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ObjectMeta"
},
"spec": {
"$ref": "v1beta1.TokenReviewSpec",
"description": "Spec holds information about the request being evaluated"
},
"status": {
"$ref": "v1beta1.TokenReviewStatus",
"description": "Status is filled in by the server and indicates whether the request can be authenticated."
}
}
},
"v1.ObjectMeta": {
"id": "v1.ObjectMeta",
"description": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.",
"properties": {
"name": {
"type": "string",
"description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names"
},
"generateName": {
"type": "string",
"description": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#idempotency"
},
"namespace": {
"type": "string",
"description": "Namespace defines the space within each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces"
},
"selfLink": {
"type": "string",
"description": "SelfLink is a URL representing this object. Populated by the system. Read-only."
},
"uid": {
"type": "string",
"description": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
},
"resourceVersion": {
"type": "string",
"description": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#concurrency-control-and-consistency"
},
"generation": {
"type": "integer",
"format": "int64",
"description": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only."
},
"creationTimestamp": {
"type": "string",
"description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"deletionTimestamp": {
"type": "string",
"description": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"deletionGracePeriodSeconds": {
"type": "integer",
"format": "int64",
"description": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only."
},
"labels": {
"type": "object",
"description": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"
},
"annotations": {
"type": "object",
"description": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"
},
"ownerReferences": {
"type": "array",
"items": {
"$ref": "v1.OwnerReference"
},
"description": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."
},
"finalizers": {
"type": "array",
"items": {
"type": "string"
},
"description": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed."
},
"clusterName": {
"type": "string",
"description": "The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request."
}
}
},
"v1.OwnerReference": {
"id": "v1.OwnerReference",
"description": "OwnerReference contains enough information to let you identify an owning object. Currently, an owning object must be in the same namespace, so there is no namespace field.",
"required": [
"apiVersion",
"kind",
"name",
"uid"
],
"properties": {
"apiVersion": {
"type": "string",
"description": "API version of the referent."
},
"kind": {
"type": "string",
"description": "Kind of the referent. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"name": {
"type": "string",
"description": "Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names"
},
"uid": {
"type": "string",
"description": "UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
},
"controller": {
"type": "boolean",
"description": "If true, this reference points to the managing controller."
}
}
},
"v1beta1.TokenReviewSpec": {
"id": "v1beta1.TokenReviewSpec",
"description": "TokenReviewSpec is a description of the token authentication request.",
"properties": {
"token": {
"type": "string",
"description": "Token is the opaque bearer token."
}
}
},
"v1beta1.TokenReviewStatus": {
"id": "v1beta1.TokenReviewStatus",
"description": "TokenReviewStatus is the result of the token authentication request.",
"properties": {
"authenticated": {
"type": "boolean",
"description": "Authenticated indicates that the token was associated with a known user."
},
"user": {
"$ref": "v1beta1.UserInfo",
"description": "User is the UserInfo associated with the provided token."
},
"error": {
"type": "string",
"description": "Error indicates that the token couldn't be checked"
}
}
},
"v1beta1.UserInfo": {
"id": "v1beta1.UserInfo",
"description": "UserInfo holds the information about the user needed to implement the user.Info interface.",
"properties": {
"username": {
"type": "string",
"description": "The name that uniquely identifies this user among all active users."
},
"uid": {
"type": "string",
"description": "A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs."
},
"groups": {
"type": "array",
"items": {
"type": "string"
},
"description": "The names of groups this user is a part of."
},
"extra": {
"type": "object",
"description": "Any additional information provided by the authenticator."
}
}
},
"v1.APIResourceList": {
"id": "v1.APIResourceList",
"description": "APIResourceList is a list of APIResource, it is used to expose the name of the resources supported in a specific group and version, and if the resource is namespaced.",
"required": [
"groupVersion",
"resources"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"groupVersion": {
"type": "string",
"description": "groupVersion is the group and version this APIResourceList is for."
},
"resources": {
"type": "array",
"items": {
"$ref": "v1.APIResource"
},
"description": "resources contains the name of the resources and if they are namespaced."
}
}
},
"v1.APIResource": {
"id": "v1.APIResource",
"description": "APIResource specifies the name of a resource and whether it is namespaced.",
"required": [
"name",
"namespaced",
"kind",
"verbs"
],
"properties": {
"name": {
"type": "string",
"description": "name is the name of the resource."
},
"namespaced": {
"type": "boolean",
"description": "namespaced indicates if a resource is namespaced or not."
},
"kind": {
"type": "string",
"description": "kind is the kind for the resource (e.g. 'Foo' is the kind for a resource 'foo')"
},
"verbs": {
"type": "array",
"items": {
"type": "string"
},
"description": "verbs is a list of supported kube verbs (this includes get, list, watch, create, update, patch, delete, deletecollection, and proxy)"
}
}
}
}
}
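
The three models above (v1beta1.TokenReviewSpec, v1beta1.TokenReviewStatus, v1beta1.UserInfo) are plain JSON shapes, so they can be mirrored with ordinary Go structs for encoding and decoding. The sketch below is illustrative only: the struct names, the map[string][]string shape chosen for the untyped "extra" object, and the TokenReview apiVersion/kind wrapper are assumptions based on common Kubernetes conventions rather than on this spec excerpt.

// Minimal sketch (not part of this repository): Go structs mirroring the
// v1beta1 TokenReview models above, used only to show how the JSON bodies
// round-trip. Field names follow the property names in the spec.
package main

import (
	"encoding/json"
	"fmt"
)

type TokenReviewSpec struct {
	Token string `json:"token,omitempty"` // opaque bearer token to check
}

type UserInfo struct {
	Username string              `json:"username,omitempty"`
	UID      string              `json:"uid,omitempty"`
	Groups   []string            `json:"groups,omitempty"`
	Extra    map[string][]string `json:"extra,omitempty"` // shape is an assumption; the spec only says "object"
}

type TokenReviewStatus struct {
	Authenticated bool     `json:"authenticated,omitempty"`
	User          UserInfo `json:"user,omitempty"`
	Error         string   `json:"error,omitempty"`
}

func main() {
	// Encode a request spec as it would appear on the wire; the wrapping
	// apiVersion/kind values are assumed, not shown in this excerpt.
	body, _ := json.Marshal(map[string]interface{}{
		"apiVersion": "authentication.k8s.io/v1beta1",
		"kind":       "TokenReview",
		"spec":       TokenReviewSpec{Token: "example-opaque-token"},
	})
	fmt.Println(string(body))

	// Decode a status as a server might return it.
	var st TokenReviewStatus
	_ = json.Unmarshal([]byte(`{"authenticated":true,"user":{"username":"jane","groups":["dev"]}}`), &st)
	fmt.Println(st.Authenticated, st.User.Username)
}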

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/authorization.k8s.io",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/authorization.k8s.io",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}
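
The serverAddressByClientCIDRs description above tells clients to pick the address whose CIDR is the longest match for their own IP. Below is a minimal sketch of that selection logic, assuming illustrative struct and function names that are not part of this repository; the sample addresses are placeholders.

// Sketch only: choosing a server address from serverAddressByClientCIDRs by
// the longest CIDR prefix that contains the client IP, as described above.
package main

import (
	"fmt"
	"net"
)

type ServerAddressByClientCIDR struct {
	ClientCIDR    string `json:"clientCIDR"`
	ServerAddress string `json:"serverAddress"`
}

// pickServerAddress returns the serverAddress of the longest-prefix CIDR
// that contains clientIP, or fallback if none match.
func pickServerAddress(entries []ServerAddressByClientCIDR, clientIP net.IP, fallback string) string {
	best, bestLen := fallback, -1
	for _, e := range entries {
		_, ipnet, err := net.ParseCIDR(e.ClientCIDR)
		if err != nil || !ipnet.Contains(clientIP) {
			continue
		}
		if ones, _ := ipnet.Mask.Size(); ones > bestLen {
			best, bestLen = e.ServerAddress, ones
		}
	}
	return best
}

func main() {
	entries := []ServerAddressByClientCIDR{
		{ClientCIDR: "0.0.0.0/0", ServerAddress: "203.0.113.10:6443"},
		{ClientCIDR: "10.0.0.0/8", ServerAddress: "10.10.10.10:6443"},
	}
	// A client at 10.1.2.3 matches both CIDRs; the /8 wins as the longer prefix.
	fmt.Println(pickServerAddress(entries, net.ParseIP("10.1.2.3"), "127.0.0.1:6443"))
}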

View file

@ -0,0 +1,542 @@
{
"swaggerVersion": "1.2",
"apiVersion": "authorization.k8s.io/v1beta1",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/authorization.k8s.io/v1beta1",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/authorization.k8s.io/v1beta1/namespaces/{namespace}/localsubjectaccessreviews",
"description": "API at /apis/authorization.k8s.io/v1beta1",
"operations": [
{
"type": "v1beta1.LocalSubjectAccessReview",
"method": "POST",
"summary": "create a LocalSubjectAccessReview",
"nickname": "createNamespacedLocalSubjectAccessReview",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1beta1.LocalSubjectAccessReview",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
},
{
"type": "string",
"paramType": "path",
"name": "namespace",
"description": "object name and auth scope, such as for teams and projects",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.LocalSubjectAccessReview"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/authorization.k8s.io/v1beta1/selfsubjectaccessreviews",
"description": "API at /apis/authorization.k8s.io/v1beta1",
"operations": [
{
"type": "v1beta1.SelfSubjectAccessReview",
"method": "POST",
"summary": "create a SelfSubjectAccessReview",
"nickname": "createSelfSubjectAccessReview",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1beta1.SelfSubjectAccessReview",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.SelfSubjectAccessReview"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/authorization.k8s.io/v1beta1/subjectaccessreviews",
"description": "API at /apis/authorization.k8s.io/v1beta1",
"operations": [
{
"type": "v1beta1.SubjectAccessReview",
"method": "POST",
"summary": "create a SubjectAccessReview",
"nickname": "createSubjectAccessReview",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1beta1.SubjectAccessReview",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.SubjectAccessReview"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/authorization.k8s.io/v1beta1",
"description": "API at /apis/authorization.k8s.io/v1beta1",
"operations": [
{
"type": "v1.APIResourceList",
"method": "GET",
"summary": "get available resources",
"nickname": "getAPIResources",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1beta1.LocalSubjectAccessReview": {
"id": "v1beta1.LocalSubjectAccessReview",
"description": "LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking.",
"required": [
"spec"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ObjectMeta"
},
"spec": {
"$ref": "v1beta1.SubjectAccessReviewSpec",
"description": "Spec holds information about the request being evaluated. spec.namespace must be equal to the namespace you made the request against. If empty, it is defaulted."
},
"status": {
"$ref": "v1beta1.SubjectAccessReviewStatus",
"description": "Status is filled in by the server and indicates whether the request is allowed or not"
}
}
},
"v1.ObjectMeta": {
"id": "v1.ObjectMeta",
"description": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.",
"properties": {
"name": {
"type": "string",
"description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names"
},
"generateName": {
"type": "string",
"description": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#idempotency"
},
"namespace": {
"type": "string",
"description": "Namespace defines the space within each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces"
},
"selfLink": {
"type": "string",
"description": "SelfLink is a URL representing this object. Populated by the system. Read-only."
},
"uid": {
"type": "string",
"description": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
},
"resourceVersion": {
"type": "string",
"description": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#concurrency-control-and-consistency"
},
"generation": {
"type": "integer",
"format": "int64",
"description": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only."
},
"creationTimestamp": {
"type": "string",
"description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"deletionTimestamp": {
"type": "string",
"description": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"deletionGracePeriodSeconds": {
"type": "integer",
"format": "int64",
"description": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only."
},
"labels": {
"type": "object",
"description": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"
},
"annotations": {
"type": "object",
"description": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"
},
"ownerReferences": {
"type": "array",
"items": {
"$ref": "v1.OwnerReference"
},
"description": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."
},
"finalizers": {
"type": "array",
"items": {
"type": "string"
},
"description": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed."
},
"clusterName": {
"type": "string",
"description": "The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request."
}
}
},
"v1.OwnerReference": {
"id": "v1.OwnerReference",
"description": "OwnerReference contains enough information to let you identify an owning object. Currently, an owning object must be in the same namespace, so there is no namespace field.",
"required": [
"apiVersion",
"kind",
"name",
"uid"
],
"properties": {
"apiVersion": {
"type": "string",
"description": "API version of the referent."
},
"kind": {
"type": "string",
"description": "Kind of the referent. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"name": {
"type": "string",
"description": "Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names"
},
"uid": {
"type": "string",
"description": "UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
},
"controller": {
"type": "boolean",
"description": "If true, this reference points to the managing controller."
}
}
},
"v1beta1.SubjectAccessReviewSpec": {
"id": "v1beta1.SubjectAccessReviewSpec",
"description": "SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set",
"properties": {
"resourceAttributes": {
"$ref": "v1beta1.ResourceAttributes",
"description": "ResourceAuthorizationAttributes describes information for a resource access request"
},
"nonResourceAttributes": {
"$ref": "v1beta1.NonResourceAttributes",
"description": "NonResourceAttributes describes information for a non-resource access request"
},
"user": {
"type": "string",
"description": "User is the user you're testing for. If you specify \"User\" but not \"Group\", then is it interpreted as \"What if User were not a member of any groups"
},
"group": {
"type": "array",
"items": {
"type": "string"
},
"description": "Groups is the groups you're testing for."
},
"extra": {
"type": "object",
"description": "Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here."
}
}
},
"v1beta1.ResourceAttributes": {
"id": "v1beta1.ResourceAttributes",
"description": "ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface",
"properties": {
"namespace": {
"type": "string",
"description": "Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces \"\" (empty) is defaulted for LocalSubjectAccessReviews \"\" (empty) is empty for cluster-scoped resources \"\" (empty) means \"all\" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview"
},
"verb": {
"type": "string",
"description": "Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. \"*\" means all."
},
"group": {
"type": "string",
"description": "Group is the API Group of the Resource. \"*\" means all."
},
"version": {
"type": "string",
"description": "Version is the API Version of the Resource. \"*\" means all."
},
"resource": {
"type": "string",
"description": "Resource is one of the existing resource types. \"*\" means all."
},
"subresource": {
"type": "string",
"description": "Subresource is one of the existing resource types. \"\" means none."
},
"name": {
"type": "string",
"description": "Name is the name of the resource being requested for a \"get\" or deleted for a \"delete\". \"\" (empty) means all."
}
}
},
"v1beta1.NonResourceAttributes": {
"id": "v1beta1.NonResourceAttributes",
"description": "NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface",
"properties": {
"path": {
"type": "string",
"description": "Path is the URL path of the request"
},
"verb": {
"type": "string",
"description": "Verb is the standard HTTP verb"
}
}
},
"v1beta1.SubjectAccessReviewStatus": {
"id": "v1beta1.SubjectAccessReviewStatus",
"description": "SubjectAccessReviewStatus",
"required": [
"allowed"
],
"properties": {
"allowed": {
"type": "boolean",
"description": "Allowed is required. True if the action would be allowed, false otherwise."
},
"reason": {
"type": "string",
"description": "Reason is optional. It indicates why a request was allowed or denied."
},
"evaluationError": {
"type": "string",
"description": "EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request."
}
}
},
"v1beta1.SelfSubjectAccessReview": {
"id": "v1beta1.SelfSubjectAccessReview",
"description": "SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means \"in all namespaces\". Self is a special case, because users should always be able to check whether they can perform an action",
"required": [
"spec"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ObjectMeta"
},
"spec": {
"$ref": "v1beta1.SelfSubjectAccessReviewSpec",
"description": "Spec holds information about the request being evaluated. user and groups must be empty"
},
"status": {
"$ref": "v1beta1.SubjectAccessReviewStatus",
"description": "Status is filled in by the server and indicates whether the request is allowed or not"
}
}
},
"v1beta1.SelfSubjectAccessReviewSpec": {
"id": "v1beta1.SelfSubjectAccessReviewSpec",
"description": "SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set",
"properties": {
"resourceAttributes": {
"$ref": "v1beta1.ResourceAttributes",
"description": "ResourceAuthorizationAttributes describes information for a resource access request"
},
"nonResourceAttributes": {
"$ref": "v1beta1.NonResourceAttributes",
"description": "NonResourceAttributes describes information for a non-resource access request"
}
}
},
"v1beta1.SubjectAccessReview": {
"id": "v1beta1.SubjectAccessReview",
"description": "SubjectAccessReview checks whether or not a user or group can perform an action.",
"required": [
"spec"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ObjectMeta"
},
"spec": {
"$ref": "v1beta1.SubjectAccessReviewSpec",
"description": "Spec holds information about the request being evaluated"
},
"status": {
"$ref": "v1beta1.SubjectAccessReviewStatus",
"description": "Status is filled in by the server and indicates whether the request is allowed or not"
}
}
},
"v1.APIResourceList": {
"id": "v1.APIResourceList",
"description": "APIResourceList is a list of APIResource, it is used to expose the name of the resources supported in a specific group and version, and if the resource is namespaced.",
"required": [
"groupVersion",
"resources"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"groupVersion": {
"type": "string",
"description": "groupVersion is the group and version this APIResourceList is for."
},
"resources": {
"type": "array",
"items": {
"$ref": "v1.APIResource"
},
"description": "resources contains the name of the resources and if they are namespaced."
}
}
},
"v1.APIResource": {
"id": "v1.APIResource",
"description": "APIResource specifies the name of a resource and whether it is namespaced.",
"required": [
"name",
"namespaced",
"kind",
"verbs"
],
"properties": {
"name": {
"type": "string",
"description": "name is the name of the resource."
},
"namespaced": {
"type": "boolean",
"description": "namespaced indicates if a resource is namespaced or not."
},
"kind": {
"type": "string",
"description": "kind is the kind for the resource (e.g. 'Foo' is the kind for a resource 'foo')"
},
"verbs": {
"type": "array",
"items": {
"type": "string"
},
"description": "verbs is a list of supported kube verbs (this includes get, list, watch, create, update, patch, delete, deletecollection, and proxy)"
}
}
}
}
}
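
The paths above show that a v1beta1.SelfSubjectAccessReview is created by POSTing it to /apis/authorization.k8s.io/v1beta1/selfsubjectaccessreviews, and that status.allowed carries the decision. Below is a hedged sketch of such a request; the server address, bearer token placeholder, and relaxed TLS settings are for illustration only, not a recommended configuration.

// Sketch: build and send a v1beta1 SelfSubjectAccessReview, then read the
// allowed/reason fields from the returned status.
package main

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	review := map[string]interface{}{
		"apiVersion": "authorization.k8s.io/v1beta1",
		"kind":       "SelfSubjectAccessReview",
		"spec": map[string]interface{}{
			// Exactly one of resourceAttributes / nonResourceAttributes is set,
			// per the SelfSubjectAccessReviewSpec description above.
			"resourceAttributes": map[string]string{
				"namespace": "default",
				"verb":      "get",
				"resource":  "pods",
			},
		},
	}
	body, _ := json.Marshal(review)

	req, _ := http.NewRequest("POST",
		"https://10.10.10.10:6443/apis/authorization.k8s.io/v1beta1/selfsubjectaccessreviews",
		bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer <token>") // placeholder credential

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// status.allowed carries the decision, per v1beta1.SubjectAccessReviewStatus.
	var out struct {
		Status struct {
			Allowed bool   `json:"allowed"`
			Reason  string `json:"reason"`
		} `json:"status"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println("allowed:", out.Status.Allowed, "reason:", out.Status.Reason)
}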

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/autoscaling",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/autoscaling",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

File diff suppressed because it is too large

114
vendor/k8s.io/kubernetes/api/swagger-spec/batch.json generated vendored Normal file
View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/batch",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/batch",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

3123
vendor/k8s.io/kubernetes/api/swagger-spec/batch_v1.json generated vendored Normal file

File diff suppressed because it is too large

View file

@ -0,0 +1,97 @@
{
"swaggerVersion": "1.2",
"apiVersion": "batch/v2alpha1",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/batch/v2alpha1",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/batch/v2alpha1",
"description": "API at /apis/batch/v2alpha1",
"operations": [
{
"type": "v1.APIResourceList",
"method": "GET",
"summary": "get available resources",
"nickname": "getAPIResources",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIResourceList": {
"id": "v1.APIResourceList",
"description": "APIResourceList is a list of APIResource, it is used to expose the name of the resources supported in a specific group and version, and if the resource is namespaced.",
"required": [
"groupVersion",
"resources"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"groupVersion": {
"type": "string",
"description": "groupVersion is the group and version this APIResourceList is for."
},
"resources": {
"type": "array",
"items": {
"$ref": "v1.APIResource"
},
"description": "resources contains the name of the resources and if they are namespaced."
}
}
},
"v1.APIResource": {
"id": "v1.APIResource",
"description": "APIResource specifies the name of a resource and whether it is namespaced.",
"required": [
"name",
"namespaced",
"kind",
"verbs"
],
"properties": {
"name": {
"type": "string",
"description": "name is the name of the resource."
},
"namespaced": {
"type": "boolean",
"description": "namespaced indicates if a resource is namespaced or not."
},
"kind": {
"type": "string",
"description": "kind is the kind for the resource (e.g. 'Foo' is the kind for a resource 'foo')"
},
"verbs": {
"type": "array",
"items": {
"type": "string"
},
"description": "verbs is a list of supported kube verbs (this includes get, list, watch, create, update, patch, delete, deletecollection, and proxy)"
}
}
}
}
}
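
A GET on /apis/batch/v2alpha1 returns the v1.APIResourceList described above. The sketch below decodes such a response and checks whether a given resource advertises a verb; the embedded JSON, including the cronjobs entry, is an illustrative stand-in rather than actual server output.

// Sketch: decode a v1.APIResourceList and test verb support for a resource.
package main

import (
	"encoding/json"
	"fmt"
)

type APIResource struct {
	Name       string   `json:"name"`
	Namespaced bool     `json:"namespaced"`
	Kind       string   `json:"kind"`
	Verbs      []string `json:"verbs"`
}

type APIResourceList struct {
	GroupVersion string        `json:"groupVersion"`
	Resources    []APIResource `json:"resources"`
}

// supportsVerb reports whether the named resource lists the given verb.
func supportsVerb(list APIResourceList, resource, verb string) bool {
	for _, r := range list.Resources {
		if r.Name != resource {
			continue
		}
		for _, v := range r.Verbs {
			if v == verb {
				return true
			}
		}
	}
	return false
}

func main() {
	raw := `{"groupVersion":"batch/v2alpha1","resources":[
	  {"name":"cronjobs","namespaced":true,"kind":"CronJob",
	   "verbs":["create","delete","get","list","watch"]}]}`
	var list APIResourceList
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	fmt.Println("cronjobs watchable:", supportsVerb(list, "cronjobs", "watch"))
}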

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/certificates.k8s.io",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/certificates.k8s.io",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

File diff suppressed because it is too large

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/extensions",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/extensions",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

File diff suppressed because it is too large

46
vendor/k8s.io/kubernetes/api/swagger-spec/logs.json generated vendored Normal file
View file

@ -0,0 +1,46 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/logs",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/logs/{logpath}",
"description": "get log files",
"operations": [
{
"type": "void",
"method": "GET",
"nickname": "logFileHandler",
"parameters": [
{
"type": "string",
"paramType": "path",
"name": "logpath",
"description": "path to the log",
"required": true,
"allowMultiple": false
}
]
}
]
},
{
"path": "/logs",
"description": "get log files",
"operations": [
{
"type": "void",
"method": "GET",
"nickname": "logFileListHandler",
"parameters": []
}
]
}
],
"models": {}
}
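
The two operations above (logFileListHandler and logFileHandler) are plain GETs on /logs and /logs/{logpath}. A minimal sketch follows, assuming the example server address used throughout these specs and an illustrative log file name; real calls would also need credentials and proper TLS configuration.

// Sketch: fetch the log file index from /logs and one file from /logs/{logpath}.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func fetch(client *http.Client, url string) (string, error) {
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	client := http.DefaultClient // real use would add TLS config and a bearer token

	// List available log files (logFileListHandler).
	if index, err := fetch(client, "https://10.10.10.10:6443/logs"); err == nil {
		fmt.Println(index)
	}

	// Fetch one file by path (logFileHandler); the file name is illustrative.
	if body, err := fetch(client, "https://10.10.10.10:6443/logs/kube-apiserver.log"); err == nil {
		fmt.Println(body)
	}
}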

114
vendor/k8s.io/kubernetes/api/swagger-spec/policy.json generated vendored Normal file
View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/policy",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/policy",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/rbac.authorization.k8s.io",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/rbac.authorization.k8s.io",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

File diff suppressed because it is too large

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apis": [
{
"path": "/version",
"description": "git code version from which this is built"
},
{
"path": "/apis",
"description": "get available API versions"
},
{
"path": "/logs",
"description": "get log files"
},
{
"path": "/api/v1",
"description": "API at /api/v1"
},
{
"path": "/api",
"description": "get available API versions"
},
{
"path": "/apis/apps/v1beta1",
"description": "API at /apis/apps/v1beta1"
},
{
"path": "/apis/apps",
"description": "get information of a group"
},
{
"path": "/apis/authentication.k8s.io/v1beta1",
"description": "API at /apis/authentication.k8s.io/v1beta1"
},
{
"path": "/apis/authentication.k8s.io",
"description": "get information of a group"
},
{
"path": "/apis/authorization.k8s.io/v1beta1",
"description": "API at /apis/authorization.k8s.io/v1beta1"
},
{
"path": "/apis/authorization.k8s.io",
"description": "get information of a group"
},
{
"path": "/apis/autoscaling/v1",
"description": "API at /apis/autoscaling/v1"
},
{
"path": "/apis/autoscaling",
"description": "get information of a group"
},
{
"path": "/apis/batch/v1",
"description": "API at /apis/batch/v1"
},
{
"path": "/apis/batch/v2alpha1",
"description": "API at /apis/batch/v2alpha1"
},
{
"path": "/apis/batch",
"description": "get information of a group"
},
{
"path": "/apis/certificates.k8s.io/v1alpha1",
"description": "API at /apis/certificates.k8s.io/v1alpha1"
},
{
"path": "/apis/certificates.k8s.io",
"description": "get information of a group"
},
{
"path": "/apis/extensions/v1beta1",
"description": "API at /apis/extensions/v1beta1"
},
{
"path": "/apis/extensions",
"description": "get information of a group"
},
{
"path": "/apis/policy/v1beta1",
"description": "API at /apis/policy/v1beta1"
},
{
"path": "/apis/policy",
"description": "get information of a group"
},
{
"path": "/apis/rbac.authorization.k8s.io/v1alpha1",
"description": "API at /apis/rbac.authorization.k8s.io/v1alpha1"
},
{
"path": "/apis/rbac.authorization.k8s.io",
"description": "get information of a group"
},
{
"path": "/apis/storage.k8s.io/v1beta1",
"description": "API at /apis/storage.k8s.io/v1beta1"
},
{
"path": "/apis/storage.k8s.io",
"description": "get information of a group"
}
],
"apiVersion": "",
"info": {
"title": "",
"description": ""
}
}
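
This top-level resource listing is just an index of paths. A small sketch that decodes the "apis" array and prints each path with its description; the embedded JSON is a trimmed stand-in for the full listing above.

// Sketch: walk the top-level swagger resource listing and print its paths.
package main

import (
	"encoding/json"
	"fmt"
)

type apiRef struct {
	Path        string `json:"path"`
	Description string `json:"description"`
}

type resourceListing struct {
	SwaggerVersion string   `json:"swaggerVersion"`
	APIs           []apiRef `json:"apis"`
}

func main() {
	raw := `{"swaggerVersion":"1.2","apis":[
	  {"path":"/version","description":"git code version from which this is built"},
	  {"path":"/api/v1","description":"API at /api/v1"},
	  {"path":"/apis/batch","description":"get information of a group"}]}`

	var listing resourceListing
	if err := json.Unmarshal([]byte(raw), &listing); err != nil {
		panic(err)
	}
	for _, a := range listing.APIs {
		fmt.Printf("%-30s %s\n", a.Path, a.Description)
	}
}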

View file

@ -0,0 +1 @@

View file

@ -0,0 +1,114 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/storage.k8s.io",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/storage.k8s.io",
"description": "get information of a group",
"operations": [
{
"type": "v1.APIGroup",
"method": "GET",
"summary": "get information of a group",
"nickname": "getAPIGroup",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1.APIGroup": {
"id": "v1.APIGroup",
"description": "APIGroup contains the name, the supported versions, and the preferred version of a group.",
"required": [
"name",
"versions",
"serverAddressByClientCIDRs"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"name": {
"type": "string",
"description": "name is the name of the group."
},
"versions": {
"type": "array",
"items": {
"$ref": "v1.GroupVersionForDiscovery"
},
"description": "versions are the versions supported in this group."
},
"preferredVersion": {
"$ref": "v1.GroupVersionForDiscovery",
"description": "preferredVersion is the version preferred by the API server, which probably is the storage version."
},
"serverAddressByClientCIDRs": {
"type": "array",
"items": {
"$ref": "v1.ServerAddressByClientCIDR"
},
"description": "a map of client CIDR to server address that is serving this group. This is to help clients reach servers in the most network-efficient way possible. Clients can use the appropriate server address as per the CIDR that they match. In case of multiple matches, clients should use the longest matching CIDR. The server returns only those CIDRs that it thinks that the client can match. For example: the master will return an internal IP CIDR only, if the client reaches the server using an internal IP. Server looks at X-Forwarded-For header or X-Real-Ip header or request.RemoteAddr (in that order) to get the client IP."
}
}
},
"v1.GroupVersionForDiscovery": {
"id": "v1.GroupVersionForDiscovery",
"description": "GroupVersion contains the \"group/version\" and \"version\" string of a version. It is made a struct to keep extensibility.",
"required": [
"groupVersion",
"version"
],
"properties": {
"groupVersion": {
"type": "string",
"description": "groupVersion specifies the API group and version in the form \"group/version\""
},
"version": {
"type": "string",
"description": "version specifies the version in the form of \"version\". This is to save the clients the trouble of splitting the GroupVersion."
}
}
},
"v1.ServerAddressByClientCIDR": {
"id": "v1.ServerAddressByClientCIDR",
"description": "ServerAddressByClientCIDR helps the client to determine the server address that they should use, depending on the clientCIDR that they match.",
"required": [
"clientCIDR",
"serverAddress"
],
"properties": {
"clientCIDR": {
"type": "string",
"description": "The CIDR with which clients can match their IP to figure out the server address that they should use."
},
"serverAddress": {
"type": "string",
"description": "Address of this server, suitable for a client that matches the above CIDR. This can be a hostname, hostname:port, IP or IP:port."
}
}
}
}
}

View file

@ -0,0 +1,997 @@
{
"swaggerVersion": "1.2",
"apiVersion": "storage.k8s.io/v1beta1",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/apis/storage.k8s.io/v1beta1",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/apis/storage.k8s.io/v1beta1/storageclasses",
"description": "API at /apis/storage.k8s.io/v1beta1",
"operations": [
{
"type": "v1beta1.StorageClassList",
"method": "GET",
"summary": "list or watch objects of kind StorageClass",
"nickname": "listStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "labelSelector",
"description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "fieldSelector",
"description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "watch",
"description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "resourceVersion",
"description": "When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.",
"required": false,
"allowMultiple": false
},
{
"type": "integer",
"paramType": "query",
"name": "timeoutSeconds",
"description": "Timeout for the list/watch call.",
"required": false,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.StorageClassList"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf",
"application/json;stream=watch",
"application/vnd.kubernetes.protobuf;stream=watch"
],
"consumes": [
"*/*"
]
},
{
"type": "v1beta1.StorageClass",
"method": "POST",
"summary": "create a StorageClass",
"nickname": "createStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1beta1.StorageClass",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.StorageClass"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
},
{
"type": "v1.Status",
"method": "DELETE",
"summary": "delete collection of StorageClass",
"nickname": "deletecollectionStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "labelSelector",
"description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "fieldSelector",
"description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "watch",
"description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "resourceVersion",
"description": "When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.",
"required": false,
"allowMultiple": false
},
{
"type": "integer",
"paramType": "query",
"name": "timeoutSeconds",
"description": "Timeout for the list/watch call.",
"required": false,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1.Status"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/storage.k8s.io/v1beta1/watch/storageclasses",
"description": "API at /apis/storage.k8s.io/v1beta1",
"operations": [
{
"type": "v1.WatchEvent",
"method": "GET",
"summary": "watch individual changes to a list of StorageClass",
"nickname": "watchStorageClassList",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "labelSelector",
"description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "fieldSelector",
"description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "watch",
"description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "resourceVersion",
"description": "When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.",
"required": false,
"allowMultiple": false
},
{
"type": "integer",
"paramType": "query",
"name": "timeoutSeconds",
"description": "Timeout for the list/watch call.",
"required": false,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1.WatchEvent"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf",
"application/json;stream=watch",
"application/vnd.kubernetes.protobuf;stream=watch"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/storage.k8s.io/v1beta1/storageclasses/{name}",
"description": "API at /apis/storage.k8s.io/v1beta1",
"operations": [
{
"type": "v1beta1.StorageClass",
"method": "GET",
"summary": "read the specified StorageClass",
"nickname": "readStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "export",
"description": "Should this value be exported. Export strips fields that a user can not specify.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "exact",
"description": "Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "path",
"name": "name",
"description": "name of the StorageClass",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.StorageClass"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
},
{
"type": "v1beta1.StorageClass",
"method": "PUT",
"summary": "replace the specified StorageClass",
"nickname": "replaceStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1beta1.StorageClass",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
},
{
"type": "string",
"paramType": "path",
"name": "name",
"description": "name of the StorageClass",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.StorageClass"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
},
{
"type": "v1beta1.StorageClass",
"method": "PATCH",
"summary": "partially update the specified StorageClass",
"nickname": "patchStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1.Patch",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
},
{
"type": "string",
"paramType": "path",
"name": "name",
"description": "name of the StorageClass",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1beta1.StorageClass"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json-patch+json",
"application/merge-patch+json",
"application/strategic-merge-patch+json"
]
},
{
"type": "v1.Status",
"method": "DELETE",
"summary": "delete a StorageClass",
"nickname": "deleteStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "v1.DeleteOptions",
"paramType": "body",
"name": "body",
"description": "",
"required": true,
"allowMultiple": false
},
{
"type": "integer",
"paramType": "query",
"name": "gracePeriodSeconds",
"description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "orphanDependents",
"description": "Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "path",
"name": "name",
"description": "name of the StorageClass",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1.Status"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/storage.k8s.io/v1beta1/watch/storageclasses/{name}",
"description": "API at /apis/storage.k8s.io/v1beta1",
"operations": [
{
"type": "v1.WatchEvent",
"method": "GET",
"summary": "watch changes to an object of kind StorageClass",
"nickname": "watchStorageClass",
"parameters": [
{
"type": "string",
"paramType": "query",
"name": "pretty",
"description": "If 'true', then the output is pretty printed.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "labelSelector",
"description": "A selector to restrict the list of returned objects by their labels. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "fieldSelector",
"description": "A selector to restrict the list of returned objects by their fields. Defaults to everything.",
"required": false,
"allowMultiple": false
},
{
"type": "boolean",
"paramType": "query",
"name": "watch",
"description": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "query",
"name": "resourceVersion",
"description": "When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.",
"required": false,
"allowMultiple": false
},
{
"type": "integer",
"paramType": "query",
"name": "timeoutSeconds",
"description": "Timeout for the list/watch call.",
"required": false,
"allowMultiple": false
},
{
"type": "string",
"paramType": "path",
"name": "name",
"description": "name of the StorageClass",
"required": true,
"allowMultiple": false
}
],
"responseMessages": [
{
"code": 200,
"message": "OK",
"responseModel": "v1.WatchEvent"
}
],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf",
"application/json;stream=watch",
"application/vnd.kubernetes.protobuf;stream=watch"
],
"consumes": [
"*/*"
]
}
]
},
{
"path": "/apis/storage.k8s.io/v1beta1",
"description": "API at /apis/storage.k8s.io/v1beta1",
"operations": [
{
"type": "v1.APIResourceList",
"method": "GET",
"summary": "get available resources",
"nickname": "getAPIResources",
"parameters": [],
"produces": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
],
"consumes": [
"application/json",
"application/yaml",
"application/vnd.kubernetes.protobuf"
]
}
]
}
],
"models": {
"v1beta1.StorageClassList": {
"id": "v1beta1.StorageClassList",
"description": "StorageClassList is a collection of storage classes.",
"required": [
"items"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ListMeta",
"description": "Standard list metadata More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"items": {
"type": "array",
"items": {
"$ref": "v1beta1.StorageClass"
},
"description": "Items is the list of StorageClasses"
}
}
},
"v1.ListMeta": {
"id": "v1.ListMeta",
"description": "ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.",
"properties": {
"selfLink": {
"type": "string",
"description": "SelfLink is a URL representing this object. Populated by the system. Read-only."
},
"resourceVersion": {
"type": "string",
"description": "String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#concurrency-control-and-consistency"
}
}
},
"v1beta1.StorageClass": {
"id": "v1beta1.StorageClass",
"description": "StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.\n\nStorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name.",
"required": [
"provisioner"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ObjectMeta",
"description": "Standard object's metadata. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"provisioner": {
"type": "string",
"description": "Provisioner indicates the type of the provisioner."
},
"parameters": {
"type": "object",
"description": "Parameters holds the parameters for the provisioner that should create volumes of this storage class."
}
}
},
"v1.ObjectMeta": {
"id": "v1.ObjectMeta",
"description": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.",
"properties": {
"name": {
"type": "string",
"description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names"
},
"generateName": {
"type": "string",
"description": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#idempotency"
},
"namespace": {
"type": "string",
"description": "Namespace defines the space within each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces"
},
"selfLink": {
"type": "string",
"description": "SelfLink is a URL representing this object. Populated by the system. Read-only."
},
"uid": {
"type": "string",
"description": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
},
"resourceVersion": {
"type": "string",
"description": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#concurrency-control-and-consistency"
},
"generation": {
"type": "integer",
"format": "int64",
"description": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only."
},
"creationTimestamp": {
"type": "string",
"description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"deletionTimestamp": {
"type": "string",
"description": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"
},
"deletionGracePeriodSeconds": {
"type": "integer",
"format": "int64",
"description": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only."
},
"labels": {
"type": "object",
"description": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels"
},
"annotations": {
"type": "object",
"description": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations"
},
"ownerReferences": {
"type": "array",
"items": {
"$ref": "v1.OwnerReference"
},
"description": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller."
},
"finalizers": {
"type": "array",
"items": {
"type": "string"
},
"description": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed."
},
"clusterName": {
"type": "string",
"description": "The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request."
}
}
},
"v1.OwnerReference": {
"id": "v1.OwnerReference",
"description": "OwnerReference contains enough information to let you identify an owning object. Currently, an owning object must be in the same namespace, so there is no namespace field.",
"required": [
"apiVersion",
"kind",
"name",
"uid"
],
"properties": {
"apiVersion": {
"type": "string",
"description": "API version of the referent."
},
"kind": {
"type": "string",
"description": "Kind of the referent. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"name": {
"type": "string",
"description": "Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names"
},
"uid": {
"type": "string",
"description": "UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
},
"controller": {
"type": "boolean",
"description": "If true, this reference points to the managing controller."
}
}
},
"v1.Status": {
"id": "v1.Status",
"description": "Status is a return value for calls that don't return other objects.",
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"metadata": {
"$ref": "v1.ListMeta",
"description": "Standard list metadata. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"status": {
"type": "string",
"description": "Status of the operation. One of: \"Success\" or \"Failure\". More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status"
},
"message": {
"type": "string",
"description": "A human-readable description of the status of this operation."
},
"reason": {
"type": "string",
"description": "A machine-readable description of why this operation is in the \"Failure\" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it."
},
"details": {
"$ref": "v1.StatusDetails",
"description": "Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type."
},
"code": {
"type": "integer",
"format": "int32",
"description": "Suggested HTTP return code for this status, 0 if not set."
}
}
},
"v1.StatusDetails": {
"id": "v1.StatusDetails",
"description": "StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined.",
"properties": {
"name": {
"type": "string",
"description": "The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described)."
},
"group": {
"type": "string",
"description": "The group attribute of the resource associated with the status StatusReason."
},
"kind": {
"type": "string",
"description": "The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"causes": {
"type": "array",
"items": {
"$ref": "v1.StatusCause"
},
"description": "The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes."
},
"retryAfterSeconds": {
"type": "integer",
"format": "int32",
"description": "If specified, the time in seconds before the operation should be retried."
}
}
},
"v1.StatusCause": {
"id": "v1.StatusCause",
"description": "StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered.",
"properties": {
"reason": {
"type": "string",
"description": "A machine-readable description of the cause of the error. If this value is empty there is no information available."
},
"message": {
"type": "string",
"description": "A human-readable description of the cause of the error. This field may be presented as-is to a reader."
},
"field": {
"type": "string",
"description": "The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.\n\nExamples:\n \"name\" - the field \"name\" on the current resource\n \"items[0].name\" - the field \"name\" on the first array entry in \"items\""
}
}
},
"v1.WatchEvent": {
"id": "v1.WatchEvent",
"required": [
"type",
"object"
],
"properties": {
"type": {
"type": "string"
},
"object": {
"type": "string"
}
}
},
"v1.Patch": {
"id": "v1.Patch",
"description": "Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.",
"properties": {}
},
"v1.DeleteOptions": {
"id": "v1.DeleteOptions",
"description": "DeleteOptions may be provided when deleting an API object",
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"gracePeriodSeconds": {
"type": "integer",
"format": "int64",
"description": "The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately."
},
"preconditions": {
"$ref": "v1.Preconditions",
"description": "Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned."
},
"orphanDependents": {
"type": "boolean",
"description": "Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list."
}
}
},
"v1.Preconditions": {
"id": "v1.Preconditions",
"description": "Preconditions must be fulfilled before an operation (update, delete, etc.) is carried out.",
"properties": {
"uid": {
"$ref": "types.UID",
"description": "Specifies the target UID."
}
}
},
"types.UID": {
"id": "types.UID",
"properties": {}
},
"v1.APIResourceList": {
"id": "v1.APIResourceList",
"description": "APIResourceList is a list of APIResource, it is used to expose the name of the resources supported in a specific group and version, and if the resource is namespaced.",
"required": [
"groupVersion",
"resources"
],
"properties": {
"kind": {
"type": "string",
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds"
},
"apiVersion": {
"type": "string",
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources"
},
"groupVersion": {
"type": "string",
"description": "groupVersion is the group and version this APIResourceList is for."
},
"resources": {
"type": "array",
"items": {
"$ref": "v1.APIResource"
},
"description": "resources contains the name of the resources and if they are namespaced."
}
}
},
"v1.APIResource": {
"id": "v1.APIResource",
"description": "APIResource specifies the name of a resource and whether it is namespaced.",
"required": [
"name",
"namespaced",
"kind",
"verbs"
],
"properties": {
"name": {
"type": "string",
"description": "name is the name of the resource."
},
"namespaced": {
"type": "boolean",
"description": "namespaced indicates if a resource is namespaced or not."
},
"kind": {
"type": "string",
"description": "kind is the kind for the resource (e.g. 'Foo' is the kind for a resource 'foo')"
},
"verbs": {
"type": "array",
"items": {
"type": "string"
},
"description": "verbs is a list of supported kube verbs (this includes get, list, watch, create, update, patch, delete, deletecollection, and proxy)"
}
}
}
}
}

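The spec above documents the REST surface for `StorageClass` under `/apis/storage.k8s.io/v1beta1`. As a quick sketch of how those endpoints are exercised in practice (the `https://10.10.10.10:6443` base path in the spec is only an example address, and these commands assume a configured cluster and client):
```
# List StorageClasses via kubectl, which uses the list endpoint documented above
kubectl get storageclasses

# Or hit the documented REST path directly; substitute your real apiserver address
curl -k https://10.10.10.10:6443/apis/storage.k8s.io/v1beta1/storageclasses
```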
20241
vendor/k8s.io/kubernetes/api/swagger-spec/v1.json generated vendored Normal file

File diff suppressed because it is too large

76
vendor/k8s.io/kubernetes/api/swagger-spec/version.json generated vendored Normal file
View file

@ -0,0 +1,76 @@
{
"swaggerVersion": "1.2",
"apiVersion": "",
"basePath": "https://10.10.10.10:6443",
"resourcePath": "/version",
"info": {
"title": "",
"description": ""
},
"apis": [
{
"path": "/version",
"description": "git code version from which this is built",
"operations": [
{
"type": "version.Info",
"method": "GET",
"summary": "get the code version",
"nickname": "getCodeVersion",
"parameters": [],
"produces": [
"application/json"
],
"consumes": [
"application/json"
]
}
]
}
],
"models": {
"version.Info": {
"id": "version.Info",
"required": [
"major",
"minor",
"gitVersion",
"gitCommit",
"gitTreeState",
"buildDate",
"goVersion",
"compiler",
"platform"
],
"properties": {
"major": {
"type": "string"
},
"minor": {
"type": "string"
},
"gitVersion": {
"type": "string"
},
"gitCommit": {
"type": "string"
},
"gitTreeState": {
"type": "string"
},
"buildDate": {
"type": "string"
},
"goVersion": {
"type": "string"
},
"compiler": {
"type": "string"
},
"platform": {
"type": "string"
}
}
}
}
}

76
vendor/k8s.io/kubernetes/build/BUILD generated vendored Normal file
View file

@ -0,0 +1,76 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/docker:docker.bzl", "docker_build")
docker_build(
name = "busybox",
debs = [
"@busybox_deb//file",
],
symlinks = {
"/bin/sh": "/bin/busybox",
"/usr/bin/busybox": "/bin/busybox",
"/usr/sbin/busybox": "/bin/busybox",
"/sbin/busybox": "/bin/busybox",
},
)
docker_build(
name = "busybox-libc",
base = ":busybox",
debs = [
"@libc_deb//file",
],
)
docker_build(
name = "busybox-net",
base = ":busybox-libc",
debs = [
"@iptables_deb//file",
"@iproute2_deb//file",
"@libnetlink_deb//file",
"@libxtables_deb//file",
],
)
[docker_build(
name = binary,
base = ":busybox-libc",
cmd = ["/usr/bin/" + binary],
debs = [
"//build/debs:%s.deb" % binary,
],
repository = "gcr.io/google-containers",
) for binary in [
"kube-apiserver",
"kube-controller-manager",
"kube-scheduler",
"kube-aggregator",
]]
docker_build(
name = "kube-proxy",
base = ":busybox-net",
cmd = ["/usr/bin/kube-proxy"],
debs = [
"//build/debs:kube-proxy.deb",
],
repository = "gcr.io/google-containers",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//build/debs:all-srcs",
],
tags = ["automanaged"],
)

6
vendor/k8s.io/kubernetes/build/OWNERS generated vendored Normal file
View file

@ -0,0 +1,6 @@
assignees:
- ihmccreery
- ixdy
- jbeda
- lavalamp
- zmerlynn

112
vendor/k8s.io/kubernetes/build/README.md generated vendored Normal file
View file

@ -0,0 +1,112 @@
# Building Kubernetes
Building Kubernetes is easy if you take advantage of the containerized build environment. This document will guide you through that build process.
## Requirements
1. Docker, using one of the following configurations:
1. **Mac OS X** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/docker-for-mac/).
**Note**: You will want to set the Docker VM to have at least 3GB of initial memory or building will likely fail. (See: [#11852](http://issue.k8s.io/11852)).
2. **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
3. **Remote Docker engine** Use a big machine in the cloud to build faster. This is a little trickier, so see the "Really Remote Docker Engine" section below.
2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
You must install and configure the Google Cloud SDK if you want to upload your release to Google Cloud Storage; you may safely omit it otherwise.
## Overview
While it is possible to build Kubernetes using a local golang installation, we have a build process that runs in a Docker container. This simplifies initial set up and provides for a very consistent build and test environment.
## Key scripts
The following scripts are found in the `build/` directory. Note that all scripts must be run from the Kubernetes root directory. A typical invocation sequence is sketched just after this list.
* `build/run.sh`: Run a command in a build docker container. Common invocations:
* `build/run.sh make`: Build just linux binaries in the container. Pass options and packages as necessary.
* `build/run.sh make cross`: Build all binaries for all platforms
* `build/run.sh make test`: Run all unit tests
* `build/run.sh make test-integration`: Run integration test
* `build/run.sh make test-cmd`: Run CLI tests
* `build/copy-output.sh`: This will copy the contents of `_output/dockerized/bin` from the Docker container to the local `_output/dockerized/bin`. It will also copy out specific file patterns that are generated as part of the build process. This is run automatically as part of `build/run.sh`.
* `build/make-clean.sh`: Clean out the contents of `_output`, remove any locally built container images and remove the data container.
* `build/shell.sh`: Drop into a `bash` shell in a build container with a snapshot of the current repo code.
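Putting these together, a typical incremental loop looks roughly like this (a sketch; pass whatever `make` targets and options you normally would):
```
# Build Linux binaries inside the container, then run the unit tests there too
build/run.sh make
build/run.sh make test

# Results are copied back out automatically (see build/copy-output.sh)
ls _output/dockerized/bin/
```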
## Basic Flow
The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. These scripts will both ensure that the right data is cached from run to run for incremental builds and will copy the results back out of the container.
The `kube-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the Kubernetes repo to minimize the amount of data we need to package up when building the image.
There are 3 different container instances that are run from this image. The first is a "data" container that stores all data that needs to persist across runs to support incremental builds. Next there is an "rsync" container that is used to transfer data into and out of the data container. Lastly there is a "build" container that is used for actually doing build actions. The data container persists across runs, while the rsync and build containers are deleted after each use.
`rsync` is used transparently behind the scenes to efficiently move data in and out of the container. This will use an ephemeral port picked by Docker. You can modify this by setting the `KUBE_RSYNC_PORT` env variable.
All Docker names are suffixed with a hash derived from the file path (to allow concurrent usage on things like CI machines) and a version number. When the version number changes, all state is cleared and a clean build is started. This allows the build infrastructure to be changed and signals to CI systems that old artifacts need to be deleted.
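For example, you can inspect what these scripts have created locally, or pin the rsync port up front (a sketch; the real container and image names carry a path hash and version suffix, so the `kube-build` filter below is only illustrative):
```
# Show the data, rsync and build containers plus the kube-build image
docker ps -a --filter "name=kube-build"
docker images | grep kube-build

# Pin the rsync port for this and subsequent builds instead of an ephemeral one
export KUBE_RSYNC_PORT=8730
build/run.sh make
```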
## Proxy Settings
If you are behind a proxy and are letting these scripts use `docker-machine` to set up your local VM on macOS, you need to export proxy settings for the Kubernetes build; the following environment variables should be defined.
```
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses that should bypass the proxy for the Kubernetes build, for example
```
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are using sudo for a Kubernetes build, for example `make quick-release`, you need to run `sudo -E make quick-release` so that the environment variables are passed through.
## Really Remote Docker Engine
It is possible to use a Docker Engine that is running remotely (under your desk or in the cloud). Docker must be configured to connect to that machine and the local rsync port must be forwarded (via SSH or nc) from localhost to the remote machine.
To do this easily with GCE and `docker-machine`, do something like this:
```
# Create the remote docker machine on GCE. This is a pretty beefy machine with SSD disk.
KUBE_BUILD_VM=k8s-build
KUBE_BUILD_GCE_PROJECT=<project>
docker-machine create \
--driver=google \
--google-project=${KUBE_BUILD_GCE_PROJECT} \
--google-zone=us-west1-a \
--google-machine-type=n1-standard-8 \
--google-disk-size=50 \
--google-disk-type=pd-ssd \
${KUBE_BUILD_VM}
# Set up local docker to talk to that machine
eval $(docker-machine env ${KUBE_BUILD_VM})
# Pin down the port on which rsync will be exposed on the remote machine
export KUBE_RSYNC_PORT=8730
# forward local 8730 to that machine so that rsync works
docker-machine ssh ${KUBE_BUILD_VM} -L ${KUBE_RSYNC_PORT}:localhost:${KUBE_RSYNC_PORT} -N &
```
Look at `docker-machine stop`, `docker-machine start` and `docker-machine rm` to manage this VM.
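For instance, when you are done with the remote builder (a sketch using the variables defined above):
```
# Stop the VM while idle, start it again later, or delete it entirely
docker-machine stop ${KUBE_BUILD_VM}
docker-machine start ${KUBE_BUILD_VM}
docker-machine rm ${KUBE_BUILD_VM}

# Point your local docker client back at the local daemon
eval $(docker-machine env --unset)
```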
## Releasing
The `build/release.sh` script will build a release. It will build binaries, run tests, and (optionally) build runtime Docker images.
The main output is a tar file: `kubernetes.tar.gz`. This includes:
* Cross compiled client utilities.
* Script (`kubectl`) for picking and running the right client binary based on platform.
* Examples
* Cluster deployment scripts for various clouds
* Tar file containing all server binaries
* Tar file containing salt deployment tree shared across multiple cloud deployments.
In addition, there are some other tar files that are created:
* `kubernetes-client-*.tar.gz` Client binaries for a specific platform.
* `kubernetes-server-*.tar.gz` Server binaries for a specific platform.
* `kubernetes-salt.tar.gz` The salt script/tree shared across multiple deployment scripts.
When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
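As a sketch, a plain local release run and a look at its outputs:
```
# Build a full release: binaries, tests, and (optionally) runtime Docker images
build/release.sh

# Final tars land here once staging is complete
ls _output/release-stage/
ls _output/release-tars/
```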
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()

47
vendor/k8s.io/kubernetes/build/build-image/Dockerfile generated vendored Normal file
View file

@ -0,0 +1,47 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates a standard build environment for building Kubernetes
FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
# Mark this as a kube-build container
RUN touch /kube-build-image
# To run as non-root we sometimes need to rebuild go stdlib packages.
RUN chmod -R a+rwx /usr/local/go/pkg ${K8S_PATCHED_GOROOT}/pkg
# For running integration tests /var/run/kubernetes is required
# and should be writable by user
RUN mkdir /var/run/kubernetes && chmod a+rwx /var/run/kubernetes
# The kubernetes source is expected to be mounted here. This will be the base
# of operations.
ENV HOME /go/src/k8s.io/kubernetes
WORKDIR ${HOME}
# Make output from the dockerized build go someplace else
ENV KUBE_OUTPUT_SUBPATH _output/dockerized
# Pick up version stuff here as we don't copy our .git over.
ENV KUBE_GIT_VERSION_FILE ${HOME}/.dockerized-kube-version-defs
# Make log messages use the right timezone
ADD localtime /etc/localtime
RUN chmod a+r /etc/localtime
# Set up rsyncd
ADD rsyncd.password /
RUN chmod a+r /rsyncd.password
ADD rsyncd.sh /
RUN chmod a+rx /rsyncd.sh

1
vendor/k8s.io/kubernetes/build/build-image/VERSION generated vendored Normal file
View file

@ -0,0 +1 @@
4

View file

@ -0,0 +1,93 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates a standard build environment for building cross
# platform go binary for the architecture kubernetes cares about.
FROM golang:1.7.4
ENV GOARM 6
ENV KUBE_DYNAMIC_CROSSPLATFORMS \
armel \
arm64 \
s390x \
ppc64el
ENV KUBE_CROSSPLATFORMS \
linux/386 \
linux/arm linux/arm64 \
linux/ppc64le \
linux/s390x \
darwin/amd64 darwin/386 \
windows/amd64 windows/386
# Pre-compile the standard go library when cross-compiling. This is much easier now that we have go1.5+
RUN for platform in ${KUBE_CROSSPLATFORMS}; do GOOS=${platform%/*} GOARCH=${platform##*/} go install std; done
# Install g++, then download and install protoc for generating protobuf output
RUN apt-get update \
&& apt-get install -y g++ rsync apt-utils file patch \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/local/src/protobuf \
&& cd /usr/local/src/protobuf \
&& curl -sSL https://github.com/google/protobuf/releases/download/v3.0.0-beta-2/protobuf-cpp-3.0.0-beta-2.tar.gz | tar -xzv \
&& cd protobuf-3.0.0-beta-2 \
&& ./configure \
&& make install \
&& ldconfig \
&& cd .. \
&& rm -rf protobuf-3.0.0-beta-2 \
&& protoc --version
# Use dynamic cgo linking for architectures other than amd64 for the server platforms
# To install crossbuild essential for other architectures add the following repository.
RUN echo "deb http://archive.ubuntu.com/ubuntu xenial main universe" > /etc/apt/sources.list.d/cgocrosscompiling.list \
&& apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5 3B4FE6ACC0B21F32 \
&& apt-get update \
&& apt-get install -y build-essential \
&& for platform in ${KUBE_DYNAMIC_CROSSPLATFORMS}; do apt-get install -y crossbuild-essential-${platform}; done \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# work around 64MB tmpfs size in Docker 1.6
ENV TMPDIR /tmp.k8s
# Get the code coverage tool, goimports, and godep
RUN mkdir $TMPDIR \
&& chmod a+rwx $TMPDIR \
&& chmod o+t $TMPDIR \
&& go get golang.org/x/tools/cmd/cover \
golang.org/x/tools/cmd/goimports \
github.com/tools/godep
# Download and symlink etcd. We need this for our integration tests.
RUN export ETCD_VERSION=v3.0.14; \
mkdir -p /usr/local/src/etcd \
&& cd /usr/local/src/etcd \
&& curl -fsSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xz \
&& ln -s ../src/etcd/etcd-${ETCD_VERSION}-linux-amd64/etcd /usr/local/bin/
# TODO: Remove the patched GOROOT when we have an official golang that has a working arm and ppc64le linker
ENV K8S_PATCHED_GOLANG_VERSION=1.7.4 \
K8S_PATCHED_GOROOT=/usr/local/go_k8s_patched
RUN mkdir -p ${K8S_PATCHED_GOROOT} \
&& curl -sSL https://github.com/golang/go/archive/go${K8S_PATCHED_GOLANG_VERSION}.tar.gz | tar -xz -C ${K8S_PATCHED_GOROOT} --strip-components=1
# We need a patched go1.7.1 for linux/arm (https://github.com/kubernetes/kubernetes/issues/29904)
COPY golang-patches/CL28857-go1.7.1-luxas.patch ${K8S_PATCHED_GOROOT}/
RUN cd ${K8S_PATCHED_GOROOT} \
&& patch -p1 < CL28857-go1.7.1-luxas.patch \
&& cd src \
&& GOROOT_FINAL=${K8S_PATCHED_GOROOT} GOROOT_BOOTSTRAP=/usr/local/go ./make.bash \
&& for platform in linux/arm; do GOOS=${platform%/*} GOARCH=${platform##*/} GOROOT=${K8S_PATCHED_GOROOT} go install std; done

View file

@ -0,0 +1,27 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: build push
IMAGE=kube-cross
TAG=$(shell cat VERSION)
all: push
build:
docker build --pull -t gcr.io/google_containers/$(IMAGE):$(TAG) .
push: build
gcloud docker --server=gcr.io -- push gcr.io/google_containers/$(IMAGE):$(TAG)

View file

@ -0,0 +1 @@
v1.7.4-1

View file

@ -0,0 +1,390 @@
From cc1015ff9bb020ea92c1854026e9ae395a8504b2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lucas=20K=C3=A4ldstr=C3=B6m?=
<lucas.kaldstrom@hotmail.co.uk>
Date: Mon, 12 Sep 2016 22:55:42 +0300
Subject: [PATCH] Ported cherrymui's CL 28857 to the go1.7.1 branch
---
src/cmd/compile/internal/arm/ssa.go | 16 ++++++
src/cmd/compile/internal/gc/cgen.go | 5 ++
src/cmd/compile/internal/gc/main.go | 11 ++--
src/cmd/compile/internal/gc/plive.go | 5 ++
src/cmd/internal/obj/arm/asm5.go | 32 ++++++++++++
src/cmd/internal/obj/arm/obj5.go | 3 ++
src/cmd/internal/obj/link.go | 98 +++++++++++++++++++-----------------
src/cmd/link/internal/arm/asm.go | 18 ++++++-
src/cmd/link/internal/ld/lib.go | 4 +-
src/runtime/asm_arm.s | 2 +-
10 files changed, 138 insertions(+), 56 deletions(-)
diff --git a/src/cmd/compile/internal/arm/ssa.go b/src/cmd/compile/internal/arm/ssa.go
index 8f466e3..a066e02 100644
--- a/src/cmd/compile/internal/arm/ssa.go
+++ b/src/cmd/compile/internal/arm/ssa.go
@@ -111,6 +111,22 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
gc.AddAux(&p.To, v)
case ssa.OpARMCALLstatic:
// TODO: deferreturn
+ if v.Aux.(*gc.Sym) == gc.Deferreturn.Sym {
+ // Deferred calls will appear to be returning to
+ // the CALL deferreturn(SB) that we are about to emit.
+ // However, the stack trace code will show the line
+ // of the instruction byte before the return PC.
+ // To avoid that being an unrelated instruction,
+ // insert an actual hardware NOP that will have the right line number.
+ // This is different from obj.ANOP, which is a virtual no-op
+ // that doesn't make it into the instruction stream.
+ ginsnop()
+ if !gc.Ctxt.Flag_largemodel {
+ // We always back up two instructions.
+ // For non-large build, insert another NOP.
+ ginsnop()
+ }
+ }
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
diff --git a/src/cmd/compile/internal/gc/cgen.go b/src/cmd/compile/internal/gc/cgen.go
index 74fe463..7507d5f 100644
--- a/src/cmd/compile/internal/gc/cgen.go
+++ b/src/cmd/compile/internal/gc/cgen.go
@@ -2373,6 +2373,11 @@ func Ginscall(f *Node, proc int) {
Thearch.Ginsnop()
}
}
+ if Ctxt.Arch.Family == sys.ARM && !Ctxt.Flag_largemodel {
+ // On ARM we always back up two instructions.
+ // For non-large build, insert another NOP.
+ Thearch.Ginsnop()
+ }
}
p := Thearch.Gins(obj.ACALL, nil, f)
diff --git a/src/cmd/compile/internal/gc/main.go b/src/cmd/compile/internal/gc/main.go
index b4df7ed..c0a8aa1 100644
--- a/src/cmd/compile/internal/gc/main.go
+++ b/src/cmd/compile/internal/gc/main.go
@@ -148,9 +148,9 @@ func Main() {
goos = obj.Getgoos()
Nacl = goos == "nacl"
- if Nacl {
- flag_largemodel = true
- }
+ //if Nacl {
+ // flag_largemodel = true
+ //}
flag.BoolVar(&compiling_runtime, "+", false, "compiling runtime")
obj.Flagcount("%", "debug non-static initializers", &Debug['%'])
@@ -204,9 +204,7 @@ func Main() {
flag.BoolVar(&flag_shared, "shared", false, "generate code that can be linked into a shared library")
flag.BoolVar(&flag_dynlink, "dynlink", false, "support references to Go symbols defined in other shared libraries")
}
- if Thearch.LinkArch.Family == sys.AMD64 {
- flag.BoolVar(&flag_largemodel, "largemodel", false, "generate code that assumes a large memory model")
- }
+ flag.BoolVar(&flag_largemodel, "largemodel", false, "generate code that assumes a large memory model")
flag.StringVar(&cpuprofile, "cpuprofile", "", "write cpu profile to `file`")
flag.StringVar(&memprofile, "memprofile", "", "write memory profile to `file`")
flag.Int64Var(&memprofilerate, "memprofilerate", 0, "set runtime.MemProfileRate to `rate`")
@@ -216,6 +214,7 @@ func Main() {
Ctxt.Flag_shared = flag_dynlink || flag_shared
Ctxt.Flag_dynlink = flag_dynlink
Ctxt.Flag_optimize = Debug['N'] == 0
+ Ctxt.Flag_largemodel = flag_largemodel
Ctxt.Debugasm = int32(Debug['S'])
Ctxt.Debugvlog = int32(Debug['v'])
diff --git a/src/cmd/compile/internal/gc/plive.go b/src/cmd/compile/internal/gc/plive.go
index ca0421d..ad26d24 100644
--- a/src/cmd/compile/internal/gc/plive.go
+++ b/src/cmd/compile/internal/gc/plive.go
@@ -1400,6 +1400,11 @@ func livenessepilogue(lv *Liveness) {
// the call.
prev = prev.Opt.(*obj.Prog)
}
+ if Ctxt.Arch.Family == sys.ARM && !Ctxt.Flag_largemodel {
+ // On ARM we always back up two instructions.
+ // For non-large build, there is another NOP.
+ prev = prev.Opt.(*obj.Prog)
+ }
splicebefore(lv, bb, newpcdataprog(prev, pos), prev)
} else {
splicebefore(lv, bb, newpcdataprog(p, pos), p)
diff --git a/src/cmd/internal/obj/arm/asm5.go b/src/cmd/internal/obj/arm/asm5.go
index d37091f..8136492 100644
--- a/src/cmd/internal/obj/arm/asm5.go
+++ b/src/cmd/internal/obj/arm/asm5.go
@@ -585,6 +585,30 @@ func span5(ctxt *obj.Link, cursym *obj.LSym) {
break
}
+ if ctxt.Flag_largemodel && (p.As == AB || p.As == ABL || p.As == obj.ADUFFZERO || p.As == obj.ADUFFCOPY) && p.To.Name == obj.NAME_EXTERN {
+ // in large mode, emit indirect call
+ // MOVW $target, Rtmp
+ // BL (Rtmp)
+ // use REGLINK as Rtmp, as soft div calls expects REGTMP to pass argument
+ tmp := int16(REGLINK)
+ q := obj.Appendp(ctxt, p)
+ q.As = ABL
+ if p.As == AB {
+ q.As = AB
+ tmp = REGTMP // should not clobber REGLINK in this case
+ }
+ q.To.Type = obj.TYPE_MEM
+ q.To.Reg = tmp
+ q.To.Sym = p.To.Sym // tell asmout to emits R_CALLARMLARGE reloc
+
+ p.As = AMOVW
+ p.From = p.To // jump target
+ p.From.Type = obj.TYPE_ADDR
+ p.To = obj.Addr{}
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = tmp
+ }
+
ctxt.Curp = p
p.Pc = int64(c)
o = oplook(ctxt, p)
@@ -1583,6 +1607,14 @@ func asmout(ctxt *obj.Link, p *obj.Prog, o *Optab, out []uint32) {
}
o1 = oprrr(ctxt, ABL, int(p.Scond))
o1 |= (uint32(p.To.Reg) & 15) << 0
+ if p.To.Sym != nil {
+ rel := obj.Addrel(ctxt.Cursym)
+ rel.Off = int32(ctxt.Pc)
+ rel.Siz = 4
+ rel.Sym = p.To.Sym
+ rel.Type = obj.R_CALLARMLARGE
+ break
+ }
rel := obj.Addrel(ctxt.Cursym)
rel.Off = int32(ctxt.Pc)
rel.Siz = 0
diff --git a/src/cmd/internal/obj/arm/obj5.go b/src/cmd/internal/obj/arm/obj5.go
index 9cf2f29..13bf952 100644
--- a/src/cmd/internal/obj/arm/obj5.go
+++ b/src/cmd/internal/obj/arm/obj5.go
@@ -584,6 +584,7 @@ func preprocess(ctxt *obj.Link, cursym *obj.LSym) {
p.As = ABL
p.Lineno = q1.Lineno
p.To.Type = obj.TYPE_BRANCH
+ p.To.Name = obj.NAME_EXTERN
switch o {
case ADIV:
p.To.Sym = ctxt.Sym_div
@@ -687,6 +688,7 @@ func softfloat(ctxt *obj.Link, cursym *obj.LSym) {
p.Link = next
p.As = ABL
p.To.Type = obj.TYPE_BRANCH
+ p.To.Name = obj.NAME_EXTERN
p.To.Sym = symsfloat
p.Lineno = next.Lineno
@@ -820,6 +822,7 @@ func stacksplit(ctxt *obj.Link, p *obj.Prog, framesize int32) *obj.Prog {
call := obj.Appendp(ctxt, movw)
call.As = obj.ACALL
call.To.Type = obj.TYPE_BRANCH
+ call.To.Name = obj.NAME_EXTERN
morestack := "runtime.morestack"
switch {
case ctxt.Cursym.Cfunc:
diff --git a/src/cmd/internal/obj/link.go b/src/cmd/internal/obj/link.go
index b6861f4..51e9fd3 100644
--- a/src/cmd/internal/obj/link.go
+++ b/src/cmd/internal/obj/link.go
@@ -588,6 +588,11 @@ const (
// R_ADDRMIPSTLS (only used on mips64) resolves to the low 16 bits of a TLS
// address (offset from thread pointer), by encoding it into the instruction.
R_ADDRMIPSTLS
+
+ // R_CALLARMLARGE applies on an indirect CALL with known target used on large mode.
+ // Currently it does nothing but tell the linker the target for stack split check.
+ // In the future linker may optimize this to a NOP and a direct CALL if it is safe.
+ R_CALLARMLARGE
)
type Auto struct {
@@ -617,52 +622,53 @@ const (
// Link holds the context for writing object code from a compiler
// to be linker input or for reading that input into the linker.
type Link struct {
- Goarm int32
- Headtype int
- Arch *LinkArch
- Debugasm int32
- Debugvlog int32
- Debugdivmod int32
- Debugpcln int32
- Flag_shared bool
- Flag_dynlink bool
- Flag_optimize bool
- Bso *bufio.Writer
- Pathname string
- Goroot string
- Goroot_final string
- Hash map[SymVer]*LSym
- LineHist LineHist
- Imports []string
- Plist *Plist
- Plast *Plist
- Sym_div *LSym
- Sym_divu *LSym
- Sym_mod *LSym
- Sym_modu *LSym
- Plan9privates *LSym
- Curp *Prog
- Printp *Prog
- Blitrl *Prog
- Elitrl *Prog
- Rexflag int
- Vexflag int
- Rep int
- Repn int
- Lock int
- Asmode int
- AsmBuf AsmBuf // instruction buffer for x86
- Instoffset int64
- Autosize int32
- Armsize int32
- Pc int64
- DiagFunc func(string, ...interface{})
- Mode int
- Cursym *LSym
- Version int
- Textp *LSym
- Etextp *LSym
- Errors int
+ Goarm int32
+ Headtype int
+ Arch *LinkArch
+ Debugasm int32
+ Debugvlog int32
+ Debugdivmod int32
+ Debugpcln int32
+ Flag_shared bool
+ Flag_dynlink bool
+ Flag_optimize bool
+ Flag_largemodel bool // generate code that assumes a large memory model
+ Bso *bufio.Writer
+ Pathname string
+ Goroot string
+ Goroot_final string
+ Hash map[SymVer]*LSym
+ LineHist LineHist
+ Imports []string
+ Plist *Plist
+ Plast *Plist
+ Sym_div *LSym
+ Sym_divu *LSym
+ Sym_mod *LSym
+ Sym_modu *LSym
+ Plan9privates *LSym
+ Curp *Prog
+ Printp *Prog
+ Blitrl *Prog
+ Elitrl *Prog
+ Rexflag int
+ Vexflag int
+ Rep int
+ Repn int
+ Lock int
+ Asmode int
+ AsmBuf AsmBuf // instruction buffer for x86
+ Instoffset int64
+ Autosize int32
+ Armsize int32
+ Pc int64
+ DiagFunc func(string, ...interface{})
+ Mode int
+ Cursym *LSym
+ Version int
+ Textp *LSym
+ Etextp *LSym
+ Errors int
Framepointer_enabled bool
diff --git a/src/cmd/link/internal/arm/asm.go b/src/cmd/link/internal/arm/asm.go
index 0c3e957..0cf61bb 100644
--- a/src/cmd/link/internal/arm/asm.go
+++ b/src/cmd/link/internal/arm/asm.go
@@ -410,6 +410,11 @@ func machoreloc1(r *ld.Reloc, sectoff int64) int {
return 0
}
+// sign extend a 24-bit integer
+func signext24(x int64) int32 {
+ return (int32(x) << 8) >> 8
+}
+
func archreloc(r *ld.Reloc, s *ld.LSym, val *int64) int {
if ld.Linkmode == ld.LinkExternal {
switch r.Type {
@@ -445,6 +450,9 @@ func archreloc(r *ld.Reloc, s *ld.LSym, val *int64) int {
*val = int64(braddoff(int32(0xff000000&uint32(r.Add)), int32(0xffffff&uint32(r.Xadd/4))))
return 0
+
+ case obj.R_CALLARMLARGE:
+ return 0
}
return -1
@@ -479,8 +487,16 @@ func archreloc(r *ld.Reloc, s *ld.LSym, val *int64) int {
return 0
case obj.R_CALLARM: // bl XXXXXX or b YYYYYY
- *val = int64(braddoff(int32(0xff000000&uint32(r.Add)), int32(0xffffff&uint32((ld.Symaddr(r.Sym)+int64((uint32(r.Add))*4)-(s.Value+int64(r.Off)))/4))))
+ // low 24-bit encodes the target address
+ t := (ld.Symaddr(r.Sym) + int64(signext24(r.Add&0xffffff)*4) - (s.Value + int64(r.Off))) / 4
+ if t > 0x7fffff || t < -0x800000 {
+ ld.Diag("direct call too far %d, should build with -gcflags -largemodel", t)
+ }
+ *val = int64(braddoff(int32(0xff000000&uint32(r.Add)), int32(0xffffff&t)))
+
+ return 0
+ case obj.R_CALLARMLARGE:
return 0
}
diff --git a/src/cmd/link/internal/ld/lib.go b/src/cmd/link/internal/ld/lib.go
index 14f4fa9..cc3b50e 100644
--- a/src/cmd/link/internal/ld/lib.go
+++ b/src/cmd/link/internal/ld/lib.go
@@ -1826,7 +1826,7 @@ func stkcheck(up *Chain, depth int) int {
r = &s.R[ri]
switch r.Type {
// Direct call.
- case obj.R_CALL, obj.R_CALLARM, obj.R_CALLARM64, obj.R_CALLPOWER, obj.R_CALLMIPS:
+ case obj.R_CALL, obj.R_CALLARM, obj.R_CALLARM64, obj.R_CALLPOWER, obj.R_CALLMIPS, obj.R_CALLARMLARGE:
ch.limit = int(int32(limit) - pcsp.value - int32(callsize()))
ch.sym = r.Sym
if stkcheck(&ch, depth+1) < 0 {
@@ -2164,7 +2164,7 @@ func callgraph() {
if r.Sym == nil {
continue
}
- if (r.Type == obj.R_CALL || r.Type == obj.R_CALLARM || r.Type == obj.R_CALLPOWER || r.Type == obj.R_CALLMIPS) && r.Sym.Type == obj.STEXT {
+ if (r.Type == obj.R_CALL || r.Type == obj.R_CALLARM || r.Type == obj.R_CALLPOWER || r.Type == obj.R_CALLMIPS || r.Type == obj.R_CALLARMLARGE) && r.Sym.Type == obj.STEXT {
fmt.Fprintf(Bso, "%s calls %s\n", s.Name, r.Sym.Name)
}
}
diff --git a/src/runtime/asm_arm.s b/src/runtime/asm_arm.s
index f02297e..0930585 100644
--- a/src/runtime/asm_arm.s
+++ b/src/runtime/asm_arm.s
@@ -465,7 +465,7 @@ CALLFN(·call1073741824, 1073741824)
// (And double-check that pop is atomic in that way.)
TEXT runtime·jmpdefer(SB),NOSPLIT,$0-8
MOVW 0(R13), LR
- MOVW $-4(LR), LR // BL deferreturn
+ MOVW $-8(LR), LR // BL deferreturn
MOVW fv+0(FP), R7
MOVW argp+4(FP), R13
MOVW $-4(R13), R13 // SP is 4 below argp, due to saved LR
--
2.5.0

83
vendor/k8s.io/kubernetes/build/build-image/rsyncd.sh generated vendored Executable file
View file

@@ -0,0 +1,83 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script will set up and run rsyncd to allow data to move into and out of
# our dockerized build system. It is used to sync sources and source changes
# into the docker build container, and to transfer built binaries and generated
# files back out.
#
# When run as root (rare) it'll preserve the file ids as sent from the client.
# Usually it'll be run as non-dockerized UID/GID and end up translating all file
# ownership to that.
set -o errexit
set -o nounset
set -o pipefail
# The directory that gets sync'd
VOLUME=${HOME}
# Assume that this is running in Docker on a bridge. Allow connections from
# anything on the local subnet.
ALLOW=$(ip route | awk '/^default via/ { reg = "^[0-9./]+ dev "$5 } ; $0 ~ reg { print $1 }')
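# Illustrative only (hypothetical routing table): given
#   default via 172.17.0.1 dev eth0
#   172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
# the awk program above keys off the default route's interface (eth0) and prints
# the matching subnet route, so ALLOW becomes "172.17.0.0/16".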
CONFDIR="/tmp/rsync.k8s"
PIDFILE="${CONFDIR}/rsyncd.pid"
CONFFILE="${CONFDIR}/rsyncd.conf"
SECRETS="${CONFDIR}/rsyncd.secrets"
mkdir -p "${CONFDIR}"
if [[ -f "${PIDFILE}" ]]; then
PID=$(cat "${PIDFILE}")
echo "Cleaning up old PID file: ${PIDFILE}"
kill $PID &> /dev/null || true
rm "${PIDFILE}"
fi
PASSWORD=$(</rsyncd.password)
cat <<EOF >"${SECRETS}"
k8s:${PASSWORD}
EOF
chmod go= "${SECRETS}"
USER_CONFIG=
if [[ "$(id -u)" == "0" ]]; then
USER_CONFIG=" uid = 0"$'\n'" gid = 0"
fi
cat <<EOF >"${CONFFILE}"
pid file = ${PIDFILE}
use chroot = no
log file = /dev/stdout
reverse lookup = no
munge symlinks = no
port = 8730
[k8s]
numeric ids = true
$USER_CONFIG
hosts deny = *
hosts allow = ${ALLOW}
auth users = k8s
secrets file = ${SECRETS}
read only = false
path = ${VOLUME}
filter = - /.make/ - /.git/ - /_tmp/
EOF
exec /usr/bin/rsync --no-detach --daemon --config="${CONFFILE}" "$@"

54
vendor/k8s.io/kubernetes/build/cni/Makefile generated vendored Normal file
View file

@@ -0,0 +1,54 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the CNI binaries.
#
# Usage:
# [ARCH=amd64] [CNI_RELEASE=v0.3.0] make (build|push)
ARCH?=amd64
CNI_RELEASE?=07a8a28637e97b22eb8dfe710eeae1344f69d16e
CNI_TARBALL=cni-$(ARCH)-$(CNI_RELEASE).tar.gz
CUR_DIR=$(shell pwd)
OUTPUT_DIR=$(CUR_DIR)/output
all: build
build:
mkdir -p $(OUTPUT_DIR)
docker run -it -v $(OUTPUT_DIR):/output golang:1.6 /bin/bash -c "\
git clone https://github.com/containernetworking/cni\
&& cd cni \
&& git checkout $(CNI_RELEASE) \
&& CGO_ENABLED=0 GOOS=linux GOARCH=$(ARCH) ./build \
&& tar -zcvf $(CNI_TARBALL) bin/ \
&& mv $(CNI_TARBALL) /output/"
# Backward Compatibility
ifeq ($(ARCH),amd64)
cp $(OUTPUT_DIR)/$(CNI_TARBALL) $(OUTPUT_DIR)/cni-$(CNI_RELEASE).tar.gz
endif
push: build
gsutil cp $(OUTPUT_DIR)/$(CNI_TARBALL) gs://kubernetes-release/network-plugins
ifeq ($(ARCH),amd64)
gsutil cp $(OUTPUT_DIR)/cni-$(CNI_RELEASE).tar.gz gs://kubernetes-release/network-plugins
endif
clean:
rm -rf output/
.PHONY: all

716
vendor/k8s.io/kubernetes/build/common.sh generated vendored Executable file
View file

@@ -0,0 +1,716 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Common utilities, variables and checks for all build scripts.
set -o errexit
set -o nounset
set -o pipefail
DOCKER_OPTS=${DOCKER_OPTS:-""}
DOCKER=(docker ${DOCKER_OPTS})
DOCKER_HOST=${DOCKER_HOST:-""}
DOCKER_MACHINE_NAME=${DOCKER_MACHINE_NAME:-"kube-dev"}
readonly DOCKER_MACHINE_DRIVER=${DOCKER_MACHINE_DRIVER:-"virtualbox --virtualbox-memory 4096 --virtualbox-cpu-count -1"}
# This will canonicalize the path
KUBE_ROOT=$(cd $(dirname "${BASH_SOURCE}")/.. && pwd -P)
source "${KUBE_ROOT}/hack/lib/init.sh"
# Set KUBE_BUILD_PPC64LE to y to build for ppc64le in addition to other
# platforms.
# TODO(IBM): remove KUBE_BUILD_PPC64LE and reenable ppc64le compilation by
# default when
# https://github.com/kubernetes/kubernetes/issues/30384 and
# https://github.com/kubernetes/kubernetes/issues/25886 are fixed.
# The majority of the logic is in hack/lib/golang.sh.
readonly KUBE_BUILD_PPC64LE="${KUBE_BUILD_PPC64LE:-n}"
# Constants
readonly KUBE_BUILD_IMAGE_REPO=kube-build
readonly KUBE_BUILD_IMAGE_CROSS_TAG="$(cat ${KUBE_ROOT}/build/build-image/cross/VERSION)"
# This version number is used to cause everyone to rebuild their data containers
# and build image. This is especially useful for automated build systems like
# Jenkins.
#
# Increment/change this number if you change the build image (anything under
# build/build-image) or change the set of volumes in the data container.
readonly KUBE_BUILD_IMAGE_VERSION_BASE="$(cat ${KUBE_ROOT}/build/build-image/VERSION)"
readonly KUBE_BUILD_IMAGE_VERSION="${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
# Here we map the output directories across both the local and remote _output
# directories:
#
# *_OUTPUT_ROOT - the base of all output in that environment.
# *_OUTPUT_SUBPATH - location where golang stuff is built/cached. Also
# persisted across docker runs with a volume mount.
# *_OUTPUT_BINPATH - location where final binaries are placed. If the remote
# is really remote, this is the stuff that has to be copied
# back.
# OUT_DIR can come in from the Makefile, so honor it.
readonly LOCAL_OUTPUT_ROOT="${KUBE_ROOT}/${OUT_DIR:-_output}"
readonly LOCAL_OUTPUT_SUBPATH="${LOCAL_OUTPUT_ROOT}/dockerized"
readonly LOCAL_OUTPUT_BINPATH="${LOCAL_OUTPUT_SUBPATH}/bin"
readonly LOCAL_OUTPUT_GOPATH="${LOCAL_OUTPUT_SUBPATH}/go"
readonly LOCAL_OUTPUT_IMAGE_STAGING="${LOCAL_OUTPUT_ROOT}/images"
# This is a symlink to binaries for "this platform" (e.g. build tools).
readonly THIS_PLATFORM_BIN="${LOCAL_OUTPUT_ROOT}/bin"
readonly REMOTE_ROOT="/go/src/${KUBE_GO_PACKAGE}"
readonly REMOTE_OUTPUT_ROOT="${REMOTE_ROOT}/_output"
readonly REMOTE_OUTPUT_SUBPATH="${REMOTE_OUTPUT_ROOT}/dockerized"
readonly REMOTE_OUTPUT_BINPATH="${REMOTE_OUTPUT_SUBPATH}/bin"
readonly REMOTE_OUTPUT_GOPATH="${REMOTE_OUTPUT_SUBPATH}/go"
# This is the port on the workstation host to expose RSYNC on. Set this if you
# are doing something fancy with ssh tunneling.
readonly KUBE_RSYNC_PORT="${KUBE_RSYNC_PORT:-}"
# This is the port that rsync is running on *inside* the container. This may be
# mapped to KUBE_RSYNC_PORT via docker networking.
readonly KUBE_CONTAINER_RSYNC_PORT=8730
# Get the set of master binaries that run in Docker (on Linux)
# Entry format is "<name-of-binary>,<base-image>".
# Binaries are placed in /usr/local/bin inside the image.
#
# $1 - server architecture
kube::build::get_docker_wrapped_binaries() {
case $1 in
"amd64")
local targets=(
kube-apiserver,busybox
kube-controller-manager,busybox
kube-scheduler,busybox
kube-aggregator,busybox
kube-proxy,gcr.io/google_containers/debian-iptables-amd64:v5
);;
"arm")
local targets=(
kube-apiserver,armel/busybox
kube-controller-manager,armel/busybox
kube-scheduler,armel/busybox
kube-aggregator,armel/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-arm:v5
);;
"arm64")
local targets=(
kube-apiserver,aarch64/busybox
kube-controller-manager,aarch64/busybox
kube-scheduler,aarch64/busybox
kube-aggregator,aarch64/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-arm64:v5
);;
"ppc64le")
local targets=(
kube-apiserver,ppc64le/busybox
kube-controller-manager,ppc64le/busybox
kube-scheduler,ppc64le/busybox
kube-aggregator,ppc64le/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-ppc64le:v5
);;
"s390x")
local targets=(
kube-apiserver,s390x/busybox
kube-controller-manager,s390x/busybox
kube-scheduler,s390x/busybox
kube-aggregator,s390x/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-s390x:v5
);;
esac
echo "${targets[@]}"
}
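# A minimal sketch (illustrative only) of how a caller could consume the
# "<name-of-binary>,<base-image>" pairs echoed above, splitting each entry on
# the comma:
#   for wrappable in $(kube::build::get_docker_wrapped_binaries amd64); do
#     IFS=, read -r binary_name base_image <<< "${wrappable}"
#     echo "${binary_name} is wrapped in an image based on ${base_image}"
#   done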
# ---------------------------------------------------------------------------
# Basic setup functions
# Verify that the right utilities and such are installed for building Kube. Set
# up some dynamic constants.
# Args:
# $1 - boolean of whether to require functioning docker (default true)
#
# Vars set:
# KUBE_ROOT_HASH
# KUBE_BUILD_IMAGE_TAG_BASE
# KUBE_BUILD_IMAGE_TAG
# KUBE_BUILD_IMAGE
# KUBE_BUILD_CONTAINER_NAME_BASE
# KUBE_BUILD_CONTAINER_NAME
# KUBE_DATA_CONTAINER_NAME_BASE
# KUBE_DATA_CONTAINER_NAME
# KUBE_RSYNC_CONTAINER_NAME_BASE
# KUBE_RSYNC_CONTAINER_NAME
# DOCKER_MOUNT_ARGS
# LOCAL_OUTPUT_BUILD_CONTEXT
function kube::build::verify_prereqs() {
local -r require_docker=${1:-true}
kube::log::status "Verifying Prerequisites...."
kube::build::ensure_tar || return 1
kube::build::ensure_rsync || return 1
if ${require_docker}; then
kube::build::ensure_docker_in_path || return 1
if kube::build::is_osx; then
kube::build::docker_available_on_osx || return 1
fi
kube::util::ensure_docker_daemon_connectivity || return 1
if (( ${KUBE_VERBOSE} > 6 )); then
kube::log::status "Docker Version:"
"${DOCKER[@]}" version | kube::log::info_from_stdin
fi
fi
KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${KUBE_ROOT}")
KUBE_BUILD_IMAGE_TAG_BASE="build-${KUBE_ROOT_HASH}"
KUBE_BUILD_IMAGE_TAG="${KUBE_BUILD_IMAGE_TAG_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
KUBE_BUILD_CONTAINER_NAME_BASE="kube-build-${KUBE_ROOT_HASH}"
KUBE_BUILD_CONTAINER_NAME="${KUBE_BUILD_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_RSYNC_CONTAINER_NAME_BASE="kube-rsync-${KUBE_ROOT_HASH}"
KUBE_RSYNC_CONTAINER_NAME="${KUBE_RSYNC_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_DATA_CONTAINER_NAME_BASE="kube-build-data-${KUBE_ROOT_HASH}"
KUBE_DATA_CONTAINER_NAME="${KUBE_DATA_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
DOCKER_MOUNT_ARGS=(--volumes-from "${KUBE_DATA_CONTAINER_NAME}")
LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
kube::version::get_version_vars
kube::version::save_version_vars "${KUBE_ROOT}/.dockerized-kube-version-defs"
}
# ---------------------------------------------------------------------------
# Utility functions
function kube::build::docker_available_on_osx() {
if [[ -z "${DOCKER_HOST}" ]]; then
if [[ -S "/var/run/docker.sock" ]]; then
kube::log::status "Using Docker for MacOS"
return 0
fi
kube::log::status "No docker host is set. Checking options for setting one..."
if [[ -z "$(which docker-machine)" ]]; then
kube::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found."
kube::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions."
return 1
elif [[ -n "$(which docker-machine)" ]]; then
kube::build::prepare_docker_machine
fi
fi
}
function kube::build::prepare_docker_machine() {
kube::log::status "docker-machine was found."
docker-machine inspect "${DOCKER_MACHINE_NAME}" &> /dev/null || {
kube::log::status "Creating a machine to build Kubernetes"
docker-machine create --driver ${DOCKER_MACHINE_DRIVER} \
--engine-env HTTP_PROXY="${KUBERNETES_HTTP_PROXY:-}" \
--engine-env HTTPS_PROXY="${KUBERNETES_HTTPS_PROXY:-}" \
--engine-env NO_PROXY="${KUBERNETES_NO_PROXY:-127.0.0.1}" \
"${DOCKER_MACHINE_NAME}" > /dev/null || {
kube::log::error "Something went wrong creating a machine."
kube::log::error "Try the following: "
kube::log::error "docker-machine create -d ${DOCKER_MACHINE_DRIVER} ${DOCKER_MACHINE_NAME}"
return 1
}
}
docker-machine start "${DOCKER_MACHINE_NAME}" &> /dev/null
# it takes `docker-machine env` a few seconds to work if the machine was just started
local docker_machine_out
while ! docker_machine_out=$(docker-machine env "${DOCKER_MACHINE_NAME}" 2>&1); do
if [[ ${docker_machine_out} =~ "Error checking TLS connection" ]]; then
echo ${docker_machine_out}
docker-machine regenerate-certs ${DOCKER_MACHINE_NAME}
else
sleep 1
fi
done
eval $(docker-machine env "${DOCKER_MACHINE_NAME}")
kube::log::status "A Docker host using docker-machine named '${DOCKER_MACHINE_NAME}' is ready to go!"
return 0
}
function kube::build::is_osx() {
[[ "$(uname)" == "Darwin" ]]
}
function kube::build::is_gnu_sed() {
[[ $(sed --version 2>&1) == *GNU* ]]
}
function kube::build::ensure_rsync() {
if [[ -z "$(which rsync)" ]]; then
kube::log::error "Can't find 'rsync' in PATH, please fix and retry."
return 1
fi
}
function kube::build::update_dockerfile() {
if kube::build::is_gnu_sed; then
sed_opts=(-i)
else
sed_opts=(-i '')
fi
sed "${sed_opts[@]}" "s/KUBE_BUILD_IMAGE_CROSS_TAG/${KUBE_BUILD_IMAGE_CROSS_TAG}/" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
}
function kube::build::ensure_docker_in_path() {
if [[ -z "$(which docker)" ]]; then
kube::log::error "Can't find 'docker' in PATH, please fix and retry."
kube::log::error "See https://docs.docker.com/installation/#installation for installation instructions."
return 1
fi
}
function kube::build::ensure_tar() {
if [[ -n "${TAR:-}" ]]; then
return
fi
# Find gnu tar if it is available, bomb out if not.
TAR=tar
if which gtar &>/dev/null; then
TAR=gtar
else
if which gnutar &>/dev/null; then
TAR=gnutar
fi
fi
if ! "${TAR}" --version | grep -q GNU; then
echo " !!! Cannot find GNU tar. Build on Linux or install GNU tar"
echo " on Mac OS X (brew install gnu-tar)."
return 1
fi
}
function kube::build::has_docker() {
which docker &> /dev/null
}
# Detect if a specific image exists
#
# $1 - image repo name
# $2 - image tag
function kube::build::docker_image_exists() {
[[ -n $1 && -n $2 ]] || {
kube::log::error "Internal error. Image not specified in docker_image_exists."
exit 2
}
[[ $("${DOCKER[@]}" images -q "${1}:${2}") ]]
}
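# Illustrative usage (a hypothetical check, not part of this script): skip an
# expensive build when the image is already present:
#   if kube::build::docker_image_exists "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG}"; then
#     V=2 kube::log::status "Build image already present; skipping docker build"
#   fi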
# Delete all images that match a tag prefix except for the "current" version
#
# $1: The image repo/name
# $2: The tag base. We consider any image that matches $2*
# $3: The current image not to delete if provided
function kube::build::docker_delete_old_images() {
# In Docker 1.12, we can replace this with
# docker images "$1" --format "{{.Tag}}"
for tag in $("${DOCKER[@]}" images ${1} | tail -n +2 | awk '{print $2}') ; do
if [[ "${tag}" != "${2}"* ]] ; then
V=3 kube::log::status "Keeping image ${1}:${tag}"
continue
fi
if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then
V=2 kube::log::status "Deleting image ${1}:${tag}"
"${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null
else
V=3 kube::log::status "Keeping image ${1}:${tag}"
fi
done
}
# Stop and delete all containers that match a pattern
#
# $1: The base container prefix
# $2: The current container to keep, if provided
function kube::build::docker_delete_old_containers() {
# In Docker 1.12 we can replace this line with
# docker ps -a --format="{{.Names}}"
for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do
if [[ "${container}" != "${1}"* ]] ; then
V=3 kube::log::status "Keeping container ${container}"
continue
fi
if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then
V=2 kube::log::status "Deleting container ${container}"
kube::build::destroy_container "${container}"
else
V=3 kube::log::status "Keeping container ${container}"
fi
done
}
# Takes $1 and computes a short hash for it. Useful for unique tag generation.
function kube::build::short_hash() {
[[ $# -eq 1 ]] || {
kube::log::error "Internal error. No data based to short_hash."
exit 2
}
local short_hash
if which md5 >/dev/null 2>&1; then
short_hash=$(md5 -q -s "$1")
else
short_hash=$(echo -n "$1" | md5sum)
fi
echo ${short_hash:0:10}
}
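# Illustrative usage, mirroring how verify_prereqs derives a per-checkout tag suffix:
#   KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${KUBE_ROOT}")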
# Pedantically kill, wait-on and remove a container. The -f -v options
# to rm don't actually seem to get the job done, so force kill the
# container, wait to ensure it's stopped, then try the remove. This is
# a workaround for bug https://github.com/docker/docker/issues/3968.
function kube::build::destroy_container() {
"${DOCKER[@]}" kill "$1" >/dev/null 2>&1 || true
"${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
"${DOCKER[@]}" rm -f -v "$1" >/dev/null 2>&1 || true
}
# ---------------------------------------------------------------------------
# Building
function kube::build::clean() {
if kube::build::has_docker ; then
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}"
V=2 kube::log::status "Cleaning all untagged docker images"
"${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true
fi
kube::log::status "Removing _output directory"
rm -rf "${LOCAL_OUTPUT_ROOT}"
}
# Set up the context directory for the kube-build image and build it.
function kube::build::build_image() {
mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
cp build/build-image/rsyncd.sh "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
kube::build::update_dockerfile
kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
# Clean up old versions of everything
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}" "${KUBE_BUILD_CONTAINER_NAME}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}" "${KUBE_RSYNC_CONTAINER_NAME}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}" "${KUBE_DATA_CONTAINER_NAME}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}" "${KUBE_BUILD_IMAGE_TAG}"
kube::build::ensure_data_container
kube::build::sync_to_container
}
# Build a docker image from a Dockerfile.
# $1 is the name of the image to build
# $2 is the location of the "context" directory, with the Dockerfile at the root.
# $3 is the value to set the --pull flag for docker build; true by default
function kube::build::docker_build() {
local -r image=$1
local -r context_dir=$2
local -r pull="${3:-true}"
local -ra build_cmd=("${DOCKER[@]}" build -t "${image}" "--pull=${pull}" "${context_dir}")
kube::log::status "Building Docker image ${image}"
local docker_output
docker_output=$("${build_cmd[@]}" 2>&1) || {
cat <<EOF >&2
+++ Docker build command failed for ${image}
${docker_output}
To retry manually, run:
${build_cmd[*]}
EOF
return 1
}
}
function kube::build::ensure_data_container() {
# If the data container exists AND exited successfully, we can use it.
# Otherwise nuke it and start over.
local ret=0
local code=$(docker inspect \
-f '{{.State.ExitCode}}' \
"${KUBE_DATA_CONTAINER_NAME}" 2>/dev/null || ret=$?)
if [[ "${ret}" == 0 && "${code}" != 0 ]]; then
kube::build::destroy_container "${KUBE_DATA_CONTAINER_NAME}"
ret=1
fi
if [[ "${ret}" != 0 ]]; then
kube::log::status "Creating data container ${KUBE_DATA_CONTAINER_NAME}"
# We have to ensure the directory exists, or else the docker run will
# create it as root.
mkdir -p "${LOCAL_OUTPUT_GOPATH}"
# We want this to run as root to be able to chown, so non-root users can
# later use the result as a data container. This run both creates the data
# container and chowns the GOPATH.
#
# The data container creates volumes for all of the directories that store
# intermediates for the Go build. This enables incremental builds across
# Docker sessions. The *_cgo paths are re-compiled versions of the go std
# libraries for true static building.
local -ra docker_cmd=(
"${DOCKER[@]}" run
--volume "${REMOTE_ROOT}" # white-out the whole output dir
--volume /usr/local/go/pkg/linux_386_cgo
--volume /usr/local/go/pkg/linux_amd64_cgo
--volume /usr/local/go/pkg/linux_arm_cgo
--volume /usr/local/go/pkg/linux_arm64_cgo
--volume /usr/local/go/pkg/linux_ppc64le_cgo
--volume /usr/local/go/pkg/darwin_amd64_cgo
--volume /usr/local/go/pkg/darwin_386_cgo
--volume /usr/local/go/pkg/windows_amd64_cgo
--volume /usr/local/go/pkg/windows_386_cgo
--name "${KUBE_DATA_CONTAINER_NAME}"
--hostname "${HOSTNAME}"
"${KUBE_BUILD_IMAGE}"
chown -R $(id -u).$(id -g)
"${REMOTE_ROOT}"
/usr/local/go/pkg/
)
"${docker_cmd[@]}"
fi
}
# Run a command in the kube-build image. This assumes that the image has
# already been built.
function kube::build::run_build_command() {
kube::log::status "Running build command..."
kube::build::run_build_command_ex "${KUBE_BUILD_CONTAINER_NAME}" -- "$@"
}
# Run a command in the kube-build image. This assumes that the image has
# already been built.
#
# Arguments are in the form of
# <container name> <extra docker args> -- <command>
function kube::build::run_build_command_ex() {
[[ $# != 0 ]] || { echo "Invalid input - please specify a container name." >&2; return 4; }
local container_name="${1}"
shift
local -a docker_run_opts=(
"--name=${container_name}"
"--user=$(id -u):$(id -g)"
"--hostname=${HOSTNAME}"
"${DOCKER_MOUNT_ARGS[@]}"
)
local detach=false
[[ $# != 0 ]] || { echo "Invalid input - please specify docker arguments followed by --." >&2; return 4; }
# Everything before "--" is an arg to docker
until [ -z "${1-}" ] ; do
if [[ "$1" == "--" ]]; then
shift
break
fi
docker_run_opts+=("$1")
if [[ "$1" == "-d" || "$1" == "--detach" ]] ; then
detach=true
fi
shift
done
# Everything after "--" is the command to run
[[ $# != 0 ]] || { echo "Invalid input - please specify a command to run." >&2; return 4; }
local -a cmd=()
until [ -z "${1-}" ] ; do
cmd+=("$1")
shift
done
docker_run_opts+=(
--env "KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}"
--env "KUBE_BUILDER_OS=${OSTYPE:-notdetected}"
--env "KUBE_BUILD_PPC64LE=${KUBE_BUILD_PPC64LE}" # TODO(IBM): remove
--env "KUBE_VERBOSE=${KUBE_VERBOSE}"
)
# If we have stdin we can run interactively. This allows things like 'shell.sh'
# to work. However, if we run this way and don't have stdin, then it ends up
# running in a daemon-ish mode. So if we don't have stdin, we explicitly
# attach stderr/stdout but don't bother asking for a tty.
if [[ -t 0 ]]; then
docker_run_opts+=(--interactive --tty)
elif [[ "${detach}" == false ]]; then
docker_run_opts+=(--attach=stdout --attach=stderr)
fi
local -ra docker_cmd=(
"${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}")
# Clean up container from any previous run
kube::build::destroy_container "${container_name}"
"${docker_cmd[@]}" "${cmd[@]}"
if [[ "${detach}" == false ]]; then
kube::build::destroy_container "${container_name}"
fi
}
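# Illustrative invocation (hypothetical container name): run a daemonized helper
# and publish a port; everything before "--" is passed straight to docker run:
#   kube::build::run_build_command_ex "my-helper" -p 127.0.0.1:8730:8730 -d -- /rsyncd.sh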
function kube::build::rsync_probe {
# Wait until rsync is up and running.
local tries=20
while (( ${tries} > 0 )) ; do
if rsync "rsync://k8s@${1}:${2}/" \
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
&> /dev/null ; then
return 0
fi
tries=$(( ${tries} - 1))
sleep 0.1
done
return 1
}
# Start up the rsync container in the background. This should be explicitly
# stopped with kube::build::stop_rsyncd_container.
#
# This will set the global var KUBE_RSYNC_ADDR to the effective address
# (host:port) at which the rsync daemon can be reached.
function kube::build::start_rsyncd_container() {
kube::build::stop_rsyncd_container
V=3 kube::log::status "Starting rsyncd container"
kube::build::run_build_command_ex \
"${KUBE_RSYNC_CONTAINER_NAME}" -p 127.0.0.1:${KUBE_RSYNC_PORT}:${KUBE_CONTAINER_RSYNC_PORT} -d \
-- /rsyncd.sh >/dev/null
local mapped_port
if ! mapped_port=$("${DOCKER[@]}" port "${KUBE_RSYNC_CONTAINER_NAME}" ${KUBE_CONTAINER_RSYNC_PORT} 2> /dev/null | cut -d: -f 2) ; then
kube::log::error "Could not get effective rsync port"
return 1
fi
local container_ip
container_ip=$("${DOCKER[@]}" inspect --format '{{ .NetworkSettings.IPAddress }}' "${KUBE_RSYNC_CONTAINER_NAME}")
# Sometimes we can reach rsync through localhost and a NAT'd port. Other
# times (when we are running in another docker container on the Jenkins
# machines) we have to talk directly to the container IP. There is no one
# strategy that works in all cases so we test to figure out which situation we
# are in.
if kube::build::rsync_probe 127.0.0.1 ${mapped_port}; then
KUBE_RSYNC_ADDR="127.0.0.1:${mapped_port}"
return 0
elif kube::build::rsync_probe "${container_ip}" ${KUBE_CONTAINER_RSYNC_PORT}; then
KUBE_RSYNC_ADDR="${container_ip}:${KUBE_CONTAINER_RSYNC_PORT}"
return 0
fi
kube::log::error "Could not connect to rsync container. See build/README.md for setting up remote Docker engine."
return 1
}
function kube::build::stop_rsyncd_container() {
V=3 kube::log::status "Stopping any currently running rsyncd container"
unset KUBE_RSYNC_ADDR
kube::build::destroy_container "${KUBE_RSYNC_CONTAINER_NAME}"
}
function kube::build::rsync {
local -a rsync_opts=(
--archive
--prune-empty-dirs
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
)
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_opts+=("-iv")
fi
if (( ${KUBE_RSYNC_COMPRESS} > 0 )); then
rsync_opts+=("--compress-level=${KUBE_RSYNC_COMPRESS}")
fi
V=3 kube::log::status "Running rsync"
rsync "${rsync_opts[@]}" "$@"
}
# This will launch rsyncd in a container and then sync the source tree to the
# container over the local network.
function kube::build::sync_to_container() {
kube::log::status "Syncing sources to container"
kube::build::start_rsyncd_container
# rsync filters are a bit confusing. Here we are syncing everything except
# output only directories and things that are not necessary like the git
# directory and generated files. The '- /' filter prevents rsync
# from trying to set the uid/gid/perms on the root of the sync tree.
# As an exception, we need to sync generated files in staging/, because
# they will not be re-generated by 'make'.
kube::build::rsync \
--delete \
--filter='+ /staging/**' \
--filter='- /.git/' \
--filter='- /.make/' \
--filter='- /_tmp/' \
--filter='- /_output/' \
--filter='- /' \
--filter='- zz_generated.*' \
--filter='- generated.proto' \
"${KUBE_ROOT}/" "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/"
kube::build::stop_rsyncd_container
}
# Copy all build results back out.
function kube::build::copy_output() {
kube::log::status "Syncing out of container"
kube::build::start_rsyncd_container
local rsync_extra=""
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_extra="-iv"
fi
# The filter syntax for rsync is a little obscure. It filters on files and
# directories. If you don't go into a directory you won't find any files
# there. Rules are evaluated in order. The last two rules are a little
# magic. '+ */' says to go into every directory and '- /**' says to ignore
# any file or directory that isn't already specifically allowed.
#
# We are looking to copy out all of the built binaries along with various
# generated files.
kube::build::rsync \
--filter='- /vendor/' \
--filter='- /_temp/' \
--filter='+ /_output/dockerized/bin/**' \
--filter='+ zz_generated.*' \
--filter='+ generated.proto' \
--filter='+ *.pb.go' \
--filter='+ */' \
--filter='- /**' \
"rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/" "${KUBE_ROOT}"
kube::build::stop_rsyncd_container
}

26
vendor/k8s.io/kubernetes/build/copy-output.sh generated vendored Executable file
View file

@@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copies any built binaries (and other generated files) out of the Docker build container.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs
kube::build::copy_output

View file

@@ -0,0 +1,27 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
# If we're building for another architecture than amd64, the CROSS_BUILD_ placeholder is removed so e.g. CROSS_BUILD_COPY turns into COPY
# If we're building normally, for amd64, CROSS_BUILD lines are removed
CROSS_BUILD_COPY qemu-ARCH-static /usr/bin/
# All apt-get's must be in one run command or the
# cleanup has no effect.
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y iptables \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y ebtables \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y conntrack \
&& rm -rf /var/lib/apt/lists/*

View file

@@ -0,0 +1,64 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: build push
REGISTRY?="gcr.io/google_containers"
IMAGE=debian-iptables
TAG=v5
ARCH?=amd64
TEMP_DIR:=$(shell mktemp -d)
ifeq ($(ARCH),amd64)
BASEIMAGE?=debian:jessie
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armel/debian:jessie
QEMUARCH=arm
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/debian:jessie
QEMUARCH=aarch64
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/debian:jessie
QEMUARCH=ppc64le
endif
ifeq ($(ARCH),s390x)
BASEIMAGE?=s390x/debian:jessie
QEMUARCH=s390x
endif
build:
cp ./* $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
cd $(TEMP_DIR) && sed -i "s|ARCH|$(QEMUARCH)|g" Dockerfile
ifeq ($(ARCH),amd64)
# When building "normally" for amd64, remove the whole line, it has no part in the amd64 image
cd $(TEMP_DIR) && sed -i "/CROSS_BUILD_/d" Dockerfile
else
# When cross-building, only the placeholder "CROSS_BUILD_" should be removed
# Register /usr/bin/qemu-ARCH-static as the handler for ARM binaries in the kernel
docker run --rm --privileged multiarch/qemu-user-static:register --reset
curl -sSL https://github.com/multiarch/qemu-user-static/releases/download/v2.6.0/x86_64_qemu-$(QEMUARCH)-static.tar.gz | tar -xz -C $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s/CROSS_BUILD_//g" Dockerfile
endif
docker build --pull -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) $(TEMP_DIR)
push: build
gcloud docker -- push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)
all: push

View file

@@ -0,0 +1,32 @@
### debian-iptables
Serves as the base image for `gcr.io/google_containers/kube-proxy-${ARCH}` and multiarch (not `amd64`) `gcr.io/google_containers/flannel-${ARCH}` images.
This image is compiled for multiple architectures.
#### How to release
If you're editing the Dockerfile or some other thing, please bump the `TAG` in the Makefile.
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google_containers/debian-iptables-amd64:TAG
$ make push ARCH=arm
# ---> gcr.io/google_containers/debian-iptables-arm:TAG
$ make push ARCH=arm64
# ---> gcr.io/google_containers/debian-iptables-arm64:TAG
$ make push ARCH=ppc64le
# ---> gcr.io/google_containers/debian-iptables-ppc64le:TAG
$ make push ARCH=s390x
# ---> gcr.io/google_containers/debian-iptables-s390x:TAG
```
If you don't want to push the images, run `make` or `make build` instead.
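As a convenience, all supported architectures can also be pushed in a single loop (an illustrative sketch; the Makefile itself only documents the per-arch invocations above):
```console
$ for arch in amd64 arm arm64 ppc64le s390x; do make push ARCH=$arch; done
```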
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/debian-iptables/README.md?pixel)]()

169
vendor/k8s.io/kubernetes/build/debs/BUILD generated vendored Normal file
View file

@@ -0,0 +1,169 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar", "pkg_deb")
load("@io_kubernetes_build//defs:deb.bzl", "k8s_deb", "deb_data")
# We do not include kube-scheduler, kube-controller-manager,
# kube-apiserver, and kube-proxy in this list even though we
# produce debs for them. We recommend that they be run in docker
# images. We use the debs that we produce here to build those
# images.
filegroup(
name = "debs",
srcs = [
":kubeadm.deb",
":kubectl.deb",
":kubelet.deb",
":kubernetes-cni.deb",
],
)
[deb_data(
name = binary,
data = [
{
"files": ["//cmd/" + binary],
"mode": "0755",
"dir": "/usr/bin",
},
],
) for binary in [
"kubectl",
"kube-apiserver",
"kube-controller-manager",
"kube-proxy",
"kube-aggregator",
]]
deb_data(
name = "kube-scheduler",
data = [
{
"files": ["//plugin/cmd/kube-scheduler"],
"mode": "0755",
"dir": "/usr/bin",
},
],
)
deb_data(
name = "kubelet",
data = [
{
"files": ["//cmd/kubelet"],
"mode": "0755",
"dir": "/usr/bin",
},
{
"files": ["kubelet.service"],
"mode": "644",
"dir": "/lib/systemd/system",
},
],
)
deb_data(
name = "kubeadm",
data = [
{
"files": ["//cmd/kubeadm"],
"mode": "0755",
"dir": "/usr/bin",
},
{
"files": ["kubeadm-10.conf"],
"mode": "644",
"dir": "/etc/systemd/system/kubelet.service.d",
},
],
)
pkg_tar(
name = "kubernetes-cni-data",
package_dir = "/opt/cni",
deps = ["@kubernetes_cni//file"],
)
k8s_deb(
name = "kubectl",
description = """Kubernetes Command Line Tool
The Kubernetes command line tool for interacting with the Kubernetes API.
""",
)
k8s_deb(
name = "kube-apiserver",
description = "Kubernetes API Server",
)
k8s_deb(
name = "kube-controller-manager",
description = "Kubernetes Controller Manager",
)
k8s_deb(
name = "kube-scheduler",
description = "Kubernetes Scheduler",
)
k8s_deb(
name = "kube-proxy",
depends = [
"iptables (>= 1.4.21)",
"iproute2",
],
description = "Kubernetes Service Proxy",
)
k8s_deb(
name = "kube-aggregator",
description = "Kubernetes Federated API Server",
)
k8s_deb(
name = "kubelet",
depends = [
"iptables (>= 1.4.21)",
"kubernetes-cni (>= 0.3.0.1)",
"iproute2",
"socat",
"util-linux",
"mount",
"ebtables",
"ethtool",
],
description = """Kubernetes Node Agent
The node agent of Kubernetes, the container cluster manager
""",
)
k8s_deb(
name = "kubeadm",
depends = [
"kubelet (>= 1.4.0)",
"kubectl (>= 1.4.0)",
],
description = """Kubernetes Cluster Bootstrapping Tool
The Kubernetes command line tool for bootstrapping a Kubernetes cluster.
""",
)
k8s_deb(
name = "kubernetes-cni",
description = """Kubernetes Packaging of CNI
The Container Networking Interface tools for provisioning container networks.
""",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

7
vendor/k8s.io/kubernetes/build/debs/kubeadm-10.conf generated vendored Normal file
View file

@@ -0,0 +1,7 @@
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS

12
vendor/k8s.io/kubernetes/build/debs/kubelet.service generated vendored Normal file
View file

@@ -0,0 +1,12 @@
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

501
vendor/k8s.io/kubernetes/build/lib/release.sh generated vendored Normal file
View file

@@ -0,0 +1,501 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates release artifacts (tar files, container images) that are
# ready to distribute to end users or to install.
###############################################################################
# Most of the ::release:: namespace functions have been moved to
# github.com/kubernetes/release. Have a look in that repo and specifically in
# lib/releaselib.sh for ::release::-related functionality.
###############################################################################
# This is where the final release artifacts are created locally
readonly RELEASE_STAGE="${LOCAL_OUTPUT_ROOT}/release-stage"
readonly RELEASE_DIR="${LOCAL_OUTPUT_ROOT}/release-tars"
# Validate a ci version
#
# Globals:
# None
# Arguments:
# version
# Returns:
# If version is a valid ci version
# Sets: (e.g. for '1.2.3-alpha.4.56+abcdef12345678')
# VERSION_MAJOR (e.g. '1')
# VERSION_MINOR (e.g. '2')
# VERSION_PATCH (e.g. '3')
# VERSION_PRERELEASE (e.g. 'alpha')
# VERSION_PRERELEASE_REV (e.g. '4')
# VERSION_BUILD_INFO (e.g. '.56+abcdef12345678')
# VERSION_COMMITS (e.g. '56')
function kube::release::parse_and_validate_ci_version() {
# Accept things like "v1.2.3-alpha.4.56+abcdef12345678" or "v1.2.3-beta.4"
local -r version_regex="^v(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)-(beta|alpha)\\.(0|[1-9][0-9]*)(\\.(0|[1-9][0-9]*)\\+[0-9a-f]{7,40})?$"
local -r version="${1-}"
[[ "${version}" =~ ${version_regex} ]] || {
kube::log::error "Invalid ci version: '${version}', must match regex ${version_regex}"
return 1
}
VERSION_MAJOR="${BASH_REMATCH[1]}"
VERSION_MINOR="${BASH_REMATCH[2]}"
VERSION_PATCH="${BASH_REMATCH[3]}"
VERSION_PRERELEASE="${BASH_REMATCH[4]}"
VERSION_PRERELEASE_REV="${BASH_REMATCH[5]}"
VERSION_BUILD_INFO="${BASH_REMATCH[6]}"
VERSION_COMMITS="${BASH_REMATCH[7]}"
}
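# Illustrative usage (hypothetical version string):
#   kube::release::parse_and_validate_ci_version "v1.2.3-alpha.4.56+abcdef1234567" || return 1
#   echo "${VERSION_MAJOR}.${VERSION_MINOR}.${VERSION_PATCH}-${VERSION_PRERELEASE}.${VERSION_PRERELEASE_REV}"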
# ---------------------------------------------------------------------------
# Build final release artifacts
function kube::release::clean_cruft() {
# Clean out cruft
find ${RELEASE_STAGE} -name '*~' -exec rm {} \;
find ${RELEASE_STAGE} -name '#*#' -exec rm {} \;
find ${RELEASE_STAGE} -name '.DS*' -exec rm {} \;
}
function kube::release::package_hyperkube() {
# If we have these variables set then we want to build all docker images.
if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
for arch in "${KUBE_SERVER_PLATFORMS[@]##*/}"; do
kube::log::status "Building hyperkube image for arch: ${arch}"
REGISTRY="${KUBE_DOCKER_REGISTRY}" VERSION="${KUBE_DOCKER_IMAGE_TAG}" ARCH="${arch}" make -C cluster/images/hyperkube/ build
done
fi
}
function kube::release::package_tarballs() {
# Clean out any old releases
rm -rf "${RELEASE_DIR}"
mkdir -p "${RELEASE_DIR}"
kube::release::package_src_tarball &
kube::release::package_client_tarballs &
kube::release::package_salt_tarball &
kube::release::package_kube_manifests_tarball &
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
# _node and _server tarballs depend on _src tarball
kube::release::package_node_tarballs &
kube::release::package_server_tarballs &
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
kube::release::package_final_tarball & # _final depends on some of the previous phases
kube::release::package_test_tarball & # _test doesn't depend on anything
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
}
# Package the source code we built, for compliance/licensing/audit/yadda.
function kube::release::package_src_tarball() {
kube::log::status "Building tarball: src"
local source_files=(
$(cd "${KUBE_ROOT}" && find . -mindepth 1 -maxdepth 1 \
-not \( \
\( -path ./_\* -o \
-path ./.git\* -o \
-path ./.config\* -o \
-path ./.gsutil\* \
\) -prune \
\))
)
"${TAR}" czf "${RELEASE_DIR}/kubernetes-src.tar.gz" -C "${KUBE_ROOT}" "${source_files[@]}"
}
# Package up all of the cross-compiled clients. Over time this should grow into
# a full SDK.
function kube::release::package_client_tarballs() {
# Find all of the built client binaries
local platform platforms
platforms=($(cd "${LOCAL_OUTPUT_BINPATH}" ; echo */*))
for platform in "${platforms[@]}"; do
local platform_tag=${platform/\//-} # Replace a "/" for a "-"
kube::log::status "Starting tarball: client $platform_tag"
(
local release_stage="${RELEASE_STAGE}/client/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/client/bin"
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_CLIENT_BINARIES array.
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/client/bin/"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-client-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
) &
done
kube::log::status "Waiting on tarballs"
kube::util::wait-for-jobs || { kube::log::error "client tarball creation failed"; exit 1; }
}
# Package up all of the node binaries
function kube::release::package_node_tarballs() {
local platform
for platform in "${KUBE_NODE_PLATFORMS[@]}"; do
local platform_tag=${platform/\//-} # Replace a "/" for a "-"
local arch=$(basename ${platform})
kube::log::status "Building tarball: node $platform_tag"
local release_stage="${RELEASE_STAGE}/node/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/node/bin"
local node_bins=("${KUBE_NODE_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
node_bins=("${KUBE_NODE_BINARIES_WIN[@]}")
fi
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_NODE_BINARIES array.
cp "${node_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/node/bin/"
# TODO: Docker images here
# kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"
# Include the client binaries here too as they are useful debugging tools.
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/node/bin/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${RELEASE_DIR}/kubernetes-src.tar.gz" "${release_stage}/"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-node-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
done
}
# Package up all of the server binaries
function kube::release::package_server_tarballs() {
local platform
for platform in "${KUBE_SERVER_PLATFORMS[@]}"; do
local platform_tag=${platform/\//-} # Replace a "/" for a "-"
local arch=$(basename ${platform})
kube::log::status "Building tarball: server $platform_tag"
local release_stage="${RELEASE_STAGE}/server/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/server/bin"
mkdir -p "${release_stage}/addons"
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_SERVER_BINARIES array.
cp "${KUBE_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/server/bin/"
kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"
# Include the client binaries here too as they are useful debugging tools.
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/server/bin/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${RELEASE_DIR}/kubernetes-src.tar.gz" "${release_stage}/"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-server-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
done
}
function kube::release::md5() {
if which md5 >/dev/null 2>&1; then
md5 -q "$1"
else
md5sum "$1" | awk '{ print $1 }'
fi
}
function kube::release::sha1() {
if which sha1sum >/dev/null 2>&1; then
sha1sum "$1" | awk '{ print $1 }'
else
shasum -a1 "$1" | awk '{ print $1 }'
fi
}
# This will take the binaries that run on the master and create Docker images
# that wrap each binary. (One docker image per binary.)
# Args:
# $1 - binary_dir, the directory to save the tared images to.
# $2 - arch, architecture for which we are building docker images.
function kube::release::create_docker_images_for_server() {
# Create a sub-shell so that we don't pollute the outer environment
(
local binary_dir="$1"
local arch="$2"
local binary_name
local binaries=($(kube::build::get_docker_wrapped_binaries ${arch}))
for wrappable in "${binaries[@]}"; do
local oldifs=$IFS
IFS=","
set $wrappable
IFS=$oldifs
local binary_name="$1"
local base_image="$2"
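# Illustrative only: wrappable="kube-proxy,gcr.io/google_containers/debian-iptables-amd64:v5"
# yields binary_name="kube-proxy" and base_image="gcr.io/google_containers/debian-iptables-amd64:v5".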
kube::log::status "Starting Docker build for image: ${binary_name}"
(
local md5_sum
md5_sum=$(kube::release::md5 "${binary_dir}/${binary_name}")
local docker_build_path="${binary_dir}/${binary_name}.dockerbuild"
local docker_file_path="${docker_build_path}/Dockerfile"
local binary_file_path="${binary_dir}/${binary_name}"
rm -rf ${docker_build_path}
mkdir -p ${docker_build_path}
ln ${binary_dir}/${binary_name} ${docker_build_path}/${binary_name}
printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n" > ${docker_file_path}
if [[ ${arch} == "amd64" ]]; then
# If we are building an amd64 docker image, preserve the original image name
local docker_image_tag=gcr.io/google_containers/${binary_name}:${md5_sum}
else
# If we are building a docker image for another architecture, append the arch to the image tag
local docker_image_tag=gcr.io/google_containers/${binary_name}-${arch}:${md5_sum}
fi
"${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null
"${DOCKER[@]}" save ${docker_image_tag} > ${binary_dir}/${binary_name}.tar
echo $md5_sum > ${binary_dir}/${binary_name}.docker_tag
rm -rf ${docker_build_path}
# If we are building an official/alpha/beta release we want to keep docker images
# and tag them appropriately.
if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
local release_docker_image_tag="${KUBE_DOCKER_REGISTRY}/${binary_name}-${arch}:${KUBE_DOCKER_IMAGE_TAG}"
kube::log::status "Tagging docker image ${docker_image_tag} as ${release_docker_image_tag}"
docker rmi "${release_docker_image_tag}" || true
"${DOCKER[@]}" tag "${docker_image_tag}" "${release_docker_image_tag}" 2>/dev/null
fi
kube::log::status "Deleting docker image ${docker_image_tag}"
"${DOCKER[@]}" rmi ${docker_image_tag} 2>/dev/null || true
) &
done
kube::util::wait-for-jobs || { kube::log::error "previous Docker build failed"; return 1; }
kube::log::status "Docker builds done"
)
}
# Package up the salt configuration tree. This is an optional helper for getting
# a cluster up and running.
function kube::release::package_salt_tarball() {
kube::log::status "Building tarball: salt"
local release_stage="${RELEASE_STAGE}/salt/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
cp -R "${KUBE_ROOT}/cluster/saltbase" "${release_stage}/"
# TODO(#3579): This is a temporary hack. It gathers up the yaml,
# yaml.in, json files in cluster/addons (minus any demos) and overlays
# them into kube-addons, where we expect them. (This pipeline is a
# fancy copy that strips out everything except the files we want.)
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${release_stage}/saltbase/salt/kube-addons"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-salt.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This will pack the kube-system manifest files for distros that do not use salt,
# such as GCI and Ubuntu Trusty. We directly copy manifests from
# cluster/addons and cluster/saltbase/salt. The cluster initialization script
# will remove the salt configuration and evaluate the variables in the manifests.
function kube::release::package_kube_manifests_tarball() {
kube::log::status "Building tarball: manifests"
local release_stage="${RELEASE_STAGE}/manifests/kubernetes"
rm -rf "${release_stage}"
local dst_dir="${release_stage}/gci-trusty"
mkdir -p "${dst_dir}"
local salt_dir="${KUBE_ROOT}/cluster/saltbase/salt"
cp "${salt_dir}/cluster-autoscaler/cluster-autoscaler.manifest" "${dst_dir}/"
cp "${salt_dir}/fluentd-gcp/fluentd-gcp.yaml" "${release_stage}/"
cp "${salt_dir}/kube-registry-proxy/kube-registry-proxy.yaml" "${release_stage}/"
cp "${salt_dir}/kube-proxy/kube-proxy.manifest" "${release_stage}/"
cp "${salt_dir}/etcd/etcd.manifest" "${dst_dir}"
cp "${salt_dir}/kube-scheduler/kube-scheduler.manifest" "${dst_dir}"
cp "${salt_dir}/kube-apiserver/kube-apiserver.manifest" "${dst_dir}"
cp "${salt_dir}/kube-apiserver/abac-authz-policy.jsonl" "${dst_dir}"
cp "${salt_dir}/kube-controller-manager/kube-controller-manager.manifest" "${dst_dir}"
cp "${salt_dir}/kube-addons/kube-addon-manager.yaml" "${dst_dir}"
cp "${salt_dir}/l7-gcp/glbc.manifest" "${dst_dir}"
cp "${salt_dir}/rescheduler/rescheduler.manifest" "${dst_dir}/"
cp "${salt_dir}/e2e-image-puller/e2e-image-puller.manifest" "${dst_dir}/"
cp "${KUBE_ROOT}/cluster/gce/trusty/configure-helper.sh" "${dst_dir}/trusty-configure-helper.sh"
cp "${KUBE_ROOT}/cluster/gce/gci/configure-helper.sh" "${dst_dir}/gci-configure-helper.sh"
cp "${KUBE_ROOT}/cluster/gce/gci/mounter/mounter" "${dst_dir}/gci-mounter"
cp "${KUBE_ROOT}/cluster/gce/gci/health-monitor.sh" "${dst_dir}/health-monitor.sh"
cp "${KUBE_ROOT}/cluster/gce/container-linux/configure-helper.sh" "${dst_dir}/container-linux-configure-helper.sh"
cp -r "${salt_dir}/kube-admission-controls/limit-range" "${dst_dir}"
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${dst_dir}"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-manifests.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This is the stuff you need to run tests from the binary distribution.
function kube::release::package_test_tarball() {
kube::log::status "Building tarball: test"
local release_stage="${RELEASE_STAGE}/test/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
local platform
for platform in "${KUBE_TEST_PLATFORMS[@]}"; do
local test_bins=("${KUBE_TEST_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
test_bins=("${KUBE_TEST_BINARIES_WIN[@]}")
fi
mkdir -p "${release_stage}/platforms/${platform}"
cp "${test_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/platforms/${platform}"
done
for platform in "${KUBE_TEST_SERVER_PLATFORMS[@]}"; do
mkdir -p "${release_stage}/platforms/${platform}"
cp "${KUBE_TEST_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/platforms/${platform}"
done
# Add the test image files
mkdir -p "${release_stage}/test/images"
cp -fR "${KUBE_ROOT}/test/images" "${release_stage}/test/"
tar c ${KUBE_TEST_PORTABLE[@]} | tar x -C ${release_stage}
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-test.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This is all the platform-independent stuff you need to run/install kubernetes.
# Arch-specific binaries will need to be downloaded separately (possibly by
# using the bundled cluster/get-kube-binaries.sh script).
# Included in this tarball:
# - Cluster spin up/down scripts and configs for various cloud providers
# - Tarballs for salt configs that are ready to be uploaded
# to master by whatever means appropriate.
# - Examples (which may or may not still work)
# - The remnants of the docs/ directory
function kube::release::package_final_tarball() {
kube::log::status "Building tarball: final"
# This isn't a "full" tarball anymore, but the release lib still expects
# artifacts under "full/kubernetes/"
local release_stage="${RELEASE_STAGE}/full/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
mkdir -p "${release_stage}/client"
cat <<EOF > "${release_stage}/client/README"
Client binaries are no longer included in the Kubernetes final tarball.
Run cluster/get-kube-binaries.sh to download client and server binaries.
EOF
# We want everything in /cluster except saltbase. That is only needed on the
# server.
cp -R "${KUBE_ROOT}/cluster" "${release_stage}/"
rm -rf "${release_stage}/cluster/saltbase"
mkdir -p "${release_stage}/server"
cp "${RELEASE_DIR}/kubernetes-salt.tar.gz" "${release_stage}/server/"
cp "${RELEASE_DIR}/kubernetes-manifests.tar.gz" "${release_stage}/server/"
cat <<EOF > "${release_stage}/server/README"
Server binary tarballs are no longer included in the Kubernetes final tarball.
Run cluster/get-kube-binaries.sh to download client and server binaries.
EOF
mkdir -p "${release_stage}/third_party"
cp -R "${KUBE_ROOT}/third_party/htpasswd" "${release_stage}/third_party/htpasswd"
# Include only federation/cluster, federation/manifests and federation/deploy
mkdir "${release_stage}/federation"
cp -R "${KUBE_ROOT}/federation/cluster" "${release_stage}/federation/"
cp -R "${KUBE_ROOT}/federation/manifests" "${release_stage}/federation/"
cp -R "${KUBE_ROOT}/federation/deploy" "${release_stage}/federation/"
cp -R "${KUBE_ROOT}/examples" "${release_stage}/"
cp -R "${KUBE_ROOT}/docs" "${release_stage}/"
cp "${KUBE_ROOT}/README.md" "${release_stage}/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${KUBE_ROOT}/Vagrantfile" "${release_stage}/"
echo "${KUBE_GIT_VERSION}" > "${release_stage}/version"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# Build a release tarball. $1 is the output tar name. $2 is the base directory
# of the files to be packaged. This assumes that ${2}/kubernetes is what is
# being packaged.
function kube::release::create_tarball() {
kube::build::ensure_tar
local tarfile=$1
local stagingdir=$2
"${TAR}" czf "${tarfile}" -C "${stagingdir}" kubernetes --owner=0 --group=0
}
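# Usage example (mirroring the callers above):
#   kube::release::create_tarball "${RELEASE_DIR}/kubernetes-salt.tar.gz" "${release_stage}/.."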

31
vendor/k8s.io/kubernetes/build/make-build-image.sh generated vendored Executable file
View file

@ -0,0 +1,31 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the docker image necessary for building Kubernetes
#
# This script will package the parts of the repo that we need to build
# Kubernetes into a tar file and put it in the right place in the output
# directory. It will then copy over the Dockerfile and build the kube-build
# image.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT="$(dirname "${BASH_SOURCE}")/.."
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs
kube::build::build_image

26
vendor/k8s.io/kubernetes/build/make-clean.sh generated vendored Executable file
View file

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Clean out the output directory on the docker host.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs false
kube::build::clean

3
vendor/k8s.io/kubernetes/build/pause/.gitignore generated vendored Normal file
View file

@ -0,0 +1,3 @@
/.container-*
/.push-*
/bin

18
vendor/k8s.io/kubernetes/build/pause/Dockerfile generated vendored Normal file
View file

@ -0,0 +1,18 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM scratch
ARG ARCH
ADD bin/pause-${ARCH} /pause
ENTRYPOINT ["/pause"]

101
vendor/k8s.io/kubernetes/build/pause/Makefile generated vendored Normal file
View file

@ -0,0 +1,101 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: all push push-legacy container clean
REGISTRY ?= gcr.io/google_containers
IMAGE = $(REGISTRY)/pause-$(ARCH)
LEGACY_AMD64_IMAGE = $(REGISTRY)/pause
TAG = 3.0
# Architectures supported: amd64, arm, arm64, ppc64le and s390x
ARCH ?= amd64
ALL_ARCH = amd64 arm arm64 ppc64le s390x
CFLAGS = -Os -Wall -static
KUBE_CROSS_IMAGE ?= gcr.io/google_containers/kube-cross
KUBE_CROSS_VERSION ?= $(shell cat ../build-image/cross/VERSION)
BIN = pause
SRCS = pause.c
ifeq ($(ARCH),amd64)
TRIPLE ?= x86_64-linux-gnu
endif
ifeq ($(ARCH),arm)
TRIPLE ?= arm-linux-gnueabi
endif
ifeq ($(ARCH),arm64)
TRIPLE ?= aarch64-linux-gnu
endif
ifeq ($(ARCH),ppc64le)
TRIPLE ?= powerpc64le-linux-gnu
endif
ifeq ($(ARCH),s390x)
TRIPLE ?= s390x-linux-gnu
endif
# If you want to build AND push all containers, see the 'all-push' rule.
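# Example (illustrative): "make container ARCH=arm64" builds only the arm64 image,
# while "make all-push" builds and pushes every architecture listed in ALL_ARCH.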
all: all-container
sub-container-%:
$(MAKE) ARCH=$* container
sub-push-%:
$(MAKE) ARCH=$* push
all-container: $(addprefix sub-container-,$(ALL_ARCH))
all-push: $(addprefix sub-push-,$(ALL_ARCH))
build: bin/$(BIN)-$(ARCH)
bin/$(BIN)-$(ARCH): $(SRCS)
mkdir -p bin
docker run --rm -u $$(id -u):$$(id -g) -v $$(pwd):/build \
$(KUBE_CROSS_IMAGE):$(KUBE_CROSS_VERSION) \
/bin/bash -c "\
cd /build && \
$(TRIPLE)-gcc $(CFLAGS) -o $@ $^ && \
$(TRIPLE)-strip $@"
container: .container-$(ARCH)
.container-$(ARCH): bin/$(BIN)-$(ARCH)
docker build --pull -t $(IMAGE):$(TAG) --build-arg ARCH=$(ARCH) .
ifeq ($(ARCH),amd64)
docker rmi $(LEGACY_AMD64_IMAGE):$(TAG) || true
docker tag $(IMAGE):$(TAG) $(LEGACY_AMD64_IMAGE):$(TAG)
endif
touch $@
push: .push-$(ARCH)
.push-$(ARCH): .container-$(ARCH)
gcloud docker -- push $(IMAGE):$(TAG)
touch $@
push-legacy: .push-legacy-$(ARCH)
.push-legacy-$(ARCH): .container-$(ARCH)
ifeq ($(ARCH),amd64)
gcloud docker -- push $(LEGACY_AMD64_IMAGE):$(TAG)
endif
touch $@
clean:
rm -rf .container-* .push-* bin/

36
vendor/k8s.io/kubernetes/build/pause/pause.c generated vendored Normal file
View file

@ -0,0 +1,36 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
static void sigdown(int signo) {
psignal(signo, "shutting down, got signal");
exit(0);
}
int main() {
if (signal(SIGINT, sigdown) == SIG_ERR)
return 1;
if (signal(SIGTERM, sigdown) == SIG_ERR)
return 2;
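/* Note: SIGKILL cannot be caught or ignored, so the call below has no effect
   (its SIG_ERR return value is not checked). */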
signal(SIGKILL, sigdown);
for (;;) pause();
fprintf(stderr, "error: infinite loop terminated\n");
return 42;
}

26
vendor/k8s.io/kubernetes/build/push-federation-images.sh generated vendored Executable file
View file

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Pushes federation container images to existing repositories
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
make -C "${KUBE_ROOT}/federation/" build_image
make -C "${KUBE_ROOT}/federation/" push

46
vendor/k8s.io/kubernetes/build/release.sh generated vendored Executable file
View file

@ -0,0 +1,46 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build a Kubernetes release. This will build the binaries, create the Docker
# images and other build artifacts.
#
# For pushing these artifacts publicly to Google Cloud Storage or to a registry
# please refer to the kubernetes/release repo at
# https://github.com/kubernetes/release.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
source "${KUBE_ROOT}/build/lib/release.sh"
KUBE_RELEASE_RUN_TESTS=${KUBE_RELEASE_RUN_TESTS-y}
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command make cross
if [[ $KUBE_RELEASE_RUN_TESTS =~ ^[yY]$ ]]; then
kube::build::run_build_command make test
kube::build::run_build_command make test-integration
fi
kube::build::copy_output
kube::release::package_tarballs
kube::release::package_hyperkube

34
vendor/k8s.io/kubernetes/build/run.sh generated vendored Executable file
View file

@ -0,0 +1,34 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run a command in the docker build container. Typically this will be one of
# the commands in `hack/`. When running in the build container the user is sure
# to have a consistent reproducible build environment.
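# Usage sketch (illustrative invocation from the repo root):
#   build/run.sh make cross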
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "$KUBE_ROOT/build/common.sh"
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command "$@"
if [[ ${KUBE_RUN_COPY_OUTPUT:-y} =~ ^[yY]$ ]]; then
kube::build::copy_output
fi

31
vendor/k8s.io/kubernetes/build/shell.sh generated vendored Executable file
View file

@ -0,0 +1,31 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run a bash script in the Docker build image.
#
# This container will have a snapshot of the current sources.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
source "${KUBE_ROOT}/build/lib/release.sh"
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command bash || true

32
vendor/k8s.io/kubernetes/build/util.sh generated vendored Normal file
View file

@ -0,0 +1,32 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Common utility functions for build scripts
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
function kube::release::semantic_version() {
# This takes:
# Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.2328+3c0a05de4a38e3", GitCommit:"3c0a05de4a38e355d147dbfb4d85bad6d2d73bb9", GitTreeState:"clean"}
# and spits back the GitVersion piece in a way that is somewhat
# resilient to the other fields changing (we hope)
${KUBE_ROOT}/cluster/kubectl.sh version --client | sed "s/, */\\
/g" | egrep "^GitVersion:" | cut -f2 -d: | cut -f2 -d\"
}
function kube::release::semantic_image_tag_version() {
printf "$(kube::release::semantic_version)" | tr + _
}
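# Example (derived from the sample output in the comment above):
#   kube::release::semantic_version           -> v1.1.0-alpha.0.2328+3c0a05de4a38e3
#   kube::release::semantic_image_tag_version -> v1.1.0-alpha.0.2328_3c0a05de4a38e3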

19
vendor/k8s.io/kubernetes/cluster/BUILD generated vendored Normal file
View file

@ -0,0 +1,19 @@
package(default_visibility = ["//visibility:public"])
licenses(["notice"])
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//cluster/addons:all-srcs",
],
tags = ["automanaged"],
)

6
vendor/k8s.io/kubernetes/cluster/OWNERS generated vendored Normal file
View file

@ -0,0 +1,6 @@
assignees:
- eparis
- jbeda
- mikedanese
- roberthbailey
- zmerlynn

14
vendor/k8s.io/kubernetes/cluster/README.md generated vendored Normal file
View file

@ -0,0 +1,14 @@
# Cluster Configuration
##### Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Please submit new automation deployments to [kube-deploy](https://github.com/kubernetes/kube-deploy). Deployments in this directory will continue to be maintained and supported at their current level of support.
The scripts and data in this directory automate creation and configuration of a Kubernetes cluster, including networking, DNS, nodes, and master components.
See the [getting-started guides](../docs/getting-started-guides) for examples of how to use the scripts.
*cloudprovider*/`config-default.sh` contains a set of tweakable definitions/parameters for the cluster.
The heavy lifting of configuring the VMs is done by [SaltStack](http://www.saltstack.com/).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/README.md?pixel)]()

44
vendor/k8s.io/kubernetes/cluster/addons/BUILD generated vendored Normal file
View file

@ -0,0 +1,44 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
filegroup(
name = "addon-srcs",
srcs = glob([
"calico-policy-controller/*",
"cluster-loadbalancing/*",
"cluster-monitoring/*",
"dashboard/*",
"dns/*",
"etcd-empty-dir-cleanup/*",
"fluentd-elasticsearch/*",
"fluentd-gcp/*",
"gci/*",
"node-problem-detector/*",
"podsecuritypolicies/*",
"python-image/*",
"registry/*",
]),
)
pkg_tar(
name = "addons",
extension = "tar.gz",
files = [
":addon-srcs",
],
strip_prefix = ".",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

53
vendor/k8s.io/kubernetes/cluster/addons/README.md generated vendored Normal file
View file

@ -0,0 +1,53 @@
# Cluster add-ons
Cluster add-ons are resources like Services and Deployments (with pods) that are
shipped with the Kubernetes binaries and are considered an inherent part of the
Kubernetes clusters. The add-ons are visible through the API (they can be listed using
`kubectl`), but direct manipulation of these objects through Apiserver is discouraged
because the system will bring them back to the original state, in particular:
- If an add-on is deleted, it will be recreated automatically.
- If an add-on is updated through Apiserver, it will be reconfigured to the state given by
the supplied fields in the initial config.
On the cluster, the add-ons are kept in `/etc/kubernetes/addons` on the master node, as
yaml / json files. The addon manager periodically `kubectl apply`s the contents of this
directory, so any legitimate modification is reflected in the API objects accordingly.
In particular, rolling updates of Deployments are now supported.
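For illustration only, the periodic reconciliation is roughly equivalent to running the
following (a sketch that mirrors the addon manager's apply loop, not the exact command):
```console
$ kubectl apply --namespace=kube-system -f /etc/kubernetes/addons \
    --prune=true -l kubernetes.io/cluster-service=true --recursive
```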
Each add-on must specify the following label: `kubernetes.io/cluster-service: true`.
Config files that do not define this label will be ignored. For resources that
exist in the `kube-system` namespace but not in `/etc/kubernetes/addons`, the addon manager
will attempt to remove them if they carry this label. Currently the other
use of `kubernetes.io/cluster-service` is to let the `kubectl cluster-info` command recognize
these cluster services.
The suggested naming for most types of resources is just `<basename>` (with no version
number) because we do not expect the resource name to change. But resources like `Pod`,
`ReplicationController` and `DaemonSet` are exceptions. Since `Pod` updates may not change
fields other than `containers[*].image` or `spec.activeDeadlineSeconds` and may not add or
remove containers, this may not be sufficient during a major update. For `ReplicationController`,
most modifications would be legitimate, but the underlying pods would not get re-created
automatically. `DaemonSet` has a similar problem to `ReplicationController`. In these
cases, the suggested naming is `<basename>-<version>`. When the version changes, the system will
delete the old one and create the new one (order not guaranteed).
# Add-on update procedure
To update add-ons, just update the contents of the `/etc/kubernetes/addons`
directory with the desired definition of the add-ons. Then the system will take care
of:
- Removing objects from the API server whose manifest was removed.
- Creating objects from new manifests.
- Updating objects whose fields are legally changed.
# Cooperating with Horizontal / Vertical Auto-Scaling
All cluster add-ons are reconciled to the original state given by the initial config.
Therefore, in order to keep Horizontal / Vertical Auto-scaling functional, the related fields in the config should
be left unset. More specifically, leave `replicas` in `ReplicationController` / `Deployment`
/ `ReplicaSet` unset for Horizontal Scaling, and leave the containers' `resources` unset for Vertical
Scaling. The periodic update won't include these specs, which will be managed by the Horizontal / Vertical
Auto-scalers.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/README.md?pixel)]()

View file

@ -0,0 +1,32 @@
### Version 6.2 (Thu January 12 2017 Zihong Zheng <zihongz@google.com>)
- Update kubectl to the stable version.
### Version 6.1 (Tue November 29 2016 Zihong Zheng <zihongz@google.com>)
- Support pruning old Deployments.
### Version 6.0 (Fri November 18 2016 Zihong Zheng <zihongz@google.com>)
- Upgrade Addon Manager to use `kubectl apply`.
### Version 5.2 (Wed October 26 2016 Zihong Zheng <zihongz@google.com>)
- Added support for ConfigMap and upgraded kubectl version to v1.4.4 (pr #35255)
### Version 5.1 (Mon Jul 4 2016 Marek Grabowski <gmarek@google.com>)
- Fixed the way addon-manager handles non-namespaced objects
### Version 5 (Fri Jun 24 2016 Jerzy Szczepkowski @jszczepkowski)
- Added PetSet support to addon manager
### Version 4 (Tue Jun 21 2016 Mike Danese @mikedanese)
- Increased addon check interval
### Version 3 (Sun Jun 19 2016 Lucas Käldström @luxas)
- Bumped up addon-manager to v3
### Version 2 (Fri May 20 2016 Lucas Käldström @luxas)
- Removed deprecated kubectl command, added support for DaemonSets
### Version 1 (Thu May 5 2016 Mike Danese @mikedanese)
- Run kube-addon-manager in a pod
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/addon-manager/CHANGELOG.md?pixel)]()

View file

@ -0,0 +1,21 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
ADD kube-addons.sh /opt/
ADD namespace.yaml /opt/
ADD kubectl /usr/local/bin/
CMD ["/opt/kube-addons.sh"]

View file

@ -0,0 +1,58 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
IMAGE=gcr.io/google-containers/kube-addon-manager
ARCH?=amd64
TEMP_DIR:=$(shell mktemp -d)
VERSION=v6.2
KUBECTL_VERSION?=v1.5.2
ifeq ($(ARCH),amd64)
BASEIMAGE?=bashell/alpine-bash
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armel/debian
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/debian
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/debian
endif
ifeq ($(ARCH),s390x)
BASEIMAGE?=s390x/debian
endif
.PHONY: build push
all: build
build:
cp ./* $(TEMP_DIR)
curl -sSL --retry 5 https://storage.googleapis.com/kubernetes-release/release/$(KUBECTL_VERSION)/bin/linux/$(ARCH)/kubectl > $(TEMP_DIR)/kubectl
chmod +x $(TEMP_DIR)/kubectl
cd $(TEMP_DIR) && sed -i.back "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
docker build --pull -t $(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR)
push: build
gcloud docker -- push $(IMAGE)-$(ARCH):$(VERSION)
ifeq ($(ARCH),amd64)
# Backward compatibility. TODO: deprecate this image tag
docker rmi $(IMAGE):$(VERSION) || true
docker tag $(IMAGE)-$(ARCH):$(VERSION) $(IMAGE):$(VERSION)
gcloud docker -- push $(IMAGE):$(VERSION)
endif
clean:
docker rmi -f $(IMAGE)-$(ARCH):$(VERSION)

View file

@ -0,0 +1,40 @@
### addon-manager
The `addon-manager` periodically `kubectl apply`s the Kubernetes manifests in the `/etc/kubernetes/addons` directory,
and handles any added / updated / deleted addon.
It supports all resource types.
The `addon-manager` is built for multiple architectures.
#### How to release
1. Change something in the source
2. Bump `VERSION` in the `Makefile`
3. Bump `KUBECTL_VERSION` in the `Makefile` if required
4. Build the `amd64` image and test it on a cluster
5. Push all images
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google-containers/kube-addon-manager-amd64:VERSION
# ---> gcr.io/google-containers/kube-addon-manager:VERSION (image with backwards-compatible naming)
$ make push ARCH=arm
# ---> gcr.io/google-containers/kube-addon-manager-arm:VERSION
$ make push ARCH=arm64
# ---> gcr.io/google-containers/kube-addon-manager-arm64:VERSION
$ make push ARCH=ppc64le
# ---> gcr.io/google-containers/kube-addon-manager-ppc64le:VERSION
$ make push ARCH=s390x
# ---> gcr.io/google-containers/kube-addon-manager-s390x:VERSION
```
If you don't want to push the images, run `make` or `make build` instead
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/addon-manager/README.md?pixel)]()

View file

@ -0,0 +1,226 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# LIMITATIONS
# 1. Exit code is probably not always correct.
# 2. There are no unittests.
# 3. Will not work if the total length of paths to addons is greater than
# bash can handle. Probably it is not a problem: ARG_MAX=2097152 on GCE.
# cosmetic improvements to be done
# 1. Improve the log function; add timestamp, file name, etc.
# 2. Logging doesn't work from files that print things out.
# 3. Kubectl prints the output to stderr (the output should be captured and then
# logged)
# The business logic for whether a given object should be created
# was already enforced by salt, and /etc/kubernetes/addons is the
# managed result of that. Start everything below that directory.
KUBECTL=${KUBECTL_BIN:-/usr/local/bin/kubectl}
KUBECTL_OPTS=${KUBECTL_OPTS:-}
ADDON_CHECK_INTERVAL_SEC=${TEST_ADDON_CHECK_INTERVAL_SEC:-60}
ADDON_PATH=${ADDON_PATH:-/etc/kubernetes/addons}
SYSTEM_NAMESPACE=kube-system
# Remember that you can't log from functions that print some output (because
# logs are also printed on stdout).
# $1 level
# $2 message
function log() {
# manage log levels manually here
# add the timestamp if you find it useful
case $1 in
DB3 )
# echo "$1: $2"
;;
DB2 )
# echo "$1: $2"
;;
DBG )
# echo "$1: $2"
;;
INFO )
echo "$1: $2"
;;
WRN )
echo "$1: $2"
;;
ERR )
echo "$1: $2"
;;
* )
echo "INVALID_LOG_LEVEL $1: $2"
;;
esac
}
# $1 command to execute.
# $2 count of tries to execute the command.
# $3 delay in seconds between two consecutive tries
function run_until_success() {
local -r command=$1
local tries=$2
local -r delay=$3
local -r command_name=$1
while [ ${tries} -gt 0 ]; do
log DBG "executing: '$command'"
# let's give the command as an argument to bash -c, so that we can use
# && and || inside the command itself
/bin/bash -c "${command}" && \
log DB3 "== Successfully executed ${command_name} at $(date -Is) ==" && \
return 0
let tries=tries-1
log WRN "== Failed to execute ${command_name} at $(date -Is). ${tries} tries remaining. =="
sleep ${delay}
done
return 1
}
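# Usage sketch (hypothetical command): retry a kubectl call up to 3 times with a 5s delay:
#   run_until_success "${KUBECTL} ${KUBECTL_OPTS} get namespace ${SYSTEM_NAMESPACE}" 3 5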
# $1 filename of addon to start.
# $2 count of tries to start the addon.
# $3 delay in seconds between two consecutive tries
# $4 namespace
function start_addon() {
local -r addon_filename=$1;
local -r tries=$2;
local -r delay=$3;
local -r namespace=$4
create_resource_from_string "$(cat ${addon_filename})" "${tries}" "${delay}" "${addon_filename}" "${namespace}"
}
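# Usage example (as invoked below to create the kube-system namespace):
#   start_addon /opt/namespace.yaml 100 10 "" &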
# $1 string with json or yaml.
# $2 count of tries to start the addon.
# $3 delay in seconds between two consecutive tries
# $4 name of this object to use when logging about it.
# $5 namespace for this object
function create_resource_from_string() {
local -r config_string=$1;
local tries=$2;
local -r delay=$3;
local -r config_name=$4;
local -r namespace=$5;
while [ ${tries} -gt 0 ]; do
echo "${config_string}" | ${KUBECTL} ${KUBECTL_OPTS} --namespace="${namespace}" apply -f - && \
log INFO "== Successfully started ${config_name} in namespace ${namespace} at $(date -Is)" && \
return 0;
let tries=tries-1;
log WRN "== Failed to start ${config_name} in namespace ${namespace} at $(date -Is). ${tries} tries remaining. =="
sleep ${delay};
done
return 1;
}
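# Usage sketch (illustrative, mirroring what start_addon above does):
#   create_resource_from_string "$(cat /opt/namespace.yaml)" 100 10 "/opt/namespace.yaml" ""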
# $1 resource type.
function annotate_addons() {
local -r obj_type=$1;
# Annotating objects that already have this annotation is expected to fail.
# Only try once for now.
${KUBECTL} ${KUBECTL_OPTS} annotate ${obj_type} --namespace=${SYSTEM_NAMESPACE} -l kubernetes.io/cluster-service=true \
kubectl.kubernetes.io/last-applied-configuration='' --overwrite=false
if [[ $? -eq 0 ]]; then
log INFO "== Annotate resources completed successfully at $(date -Is) =="
else
log WRN "== Annotate resources completed with errors at $(date -Is) =="
fi
}
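# Usage example (as invoked below): annotate_addons Deployment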
# $1 enable --prune or not.
# $2 additional option for command.
function update_addons() {
local -r enable_prune=$1;
local -r additional_opt=$2;
run_until_success "${KUBECTL} ${KUBECTL_OPTS} apply --namespace=${SYSTEM_NAMESPACE} -f ${ADDON_PATH} \
--prune=${enable_prune} -l kubernetes.io/cluster-service=true --recursive ${additional_opt}" 3 5
if [[ $? -eq 0 ]]; then
log INFO "== Kubernetes addon update completed successfully at $(date -Is) =="
else
log WRN "== Kubernetes addon update completed with errors at $(date -Is) =="
fi
}
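# Usage examples (as invoked below):
#   update_addons false              # initial apply without pruning
#   update_addons true ">/dev/null"  # periodic apply with pruning, stdout discarded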
# The business logic for whether a given object should be created
# was already enforced by salt, and /etc/kubernetes/addons is the
# managed result of that. Start everything below that directory.
log INFO "== Kubernetes addon manager started at $(date -Is) with ADDON_CHECK_INTERVAL_SEC=${ADDON_CHECK_INTERVAL_SEC} =="
# Create the namespace that will be used to host the cluster-level add-ons.
start_addon /opt/namespace.yaml 100 10 "" &
# Wait for the default service account to be created in the kube-system namespace.
token_found=""
while [ -z "${token_found}" ]; do
sleep .5
token_found=$(${KUBECTL} ${KUBECTL_OPTS} get --namespace="${SYSTEM_NAMESPACE}" serviceaccount default -o go-template="{{with index .secrets 0}}{{.name}}{{end}}")
if [[ $? -ne 0 ]]; then
token_found="";
log WRN "== Error getting default service account, retry in 0.5 second =="
fi
done
log INFO "== Default service account in the ${SYSTEM_NAMESPACE} namespace has token ${token_found} =="
# Create admission-control objects, if defined, before any other addon services. If the limits
# are defined in a namespace other than default, we should still create the limits for the
# default namespace.
for obj in $(find /etc/kubernetes/admission-controls \( -name \*.yaml -o -name \*.json \)); do
start_addon "${obj}" 100 10 default &
log INFO "++ obj ${obj} is created ++"
done
# Fake the "kubectl.kubernetes.io/last-applied-configuration" annotation on old resources
# so that they can be cleaned up by `kubectl apply --prune`.
# RCs have to be annotated for the 1.4->1.5 upgrade, because we are migrating from RCs to Deployments for all default addons.
# Other types of resources will also need this fake annotation if their names are changed,
# otherwise they would be leaked during the upgrade.
log INFO "== Annotating the old addon resources at $(date -Is) =="
annotate_addons ReplicationController
annotate_addons Deployment
# Create new addon resources via apply (with --prune=false).
# The old RCs will not fight over pods created by new Deployments with the same label, because of the `controllerRef` feature.
# The new Deployments will not fight over pods created by old RCs with the same label, because of the additional `pod-template-hash` label.
# Apply will fail if fields are modified in ways that are not allowed; in that case the addon version and name should be bumped (i.e. handled externally).
log INFO "== Executing apply to spin up new addon resources at $(date -Is) =="
update_addons false
# Wait for new addons to be spun up before deleting old resources
log INFO "== Wait for addons to be spun up at $(date -Is) =="
sleep ${ADDON_CHECK_INTERVAL_SEC}
# Start the apply loop.
# Check if the configuration has changed recently - in case the user
# created/updated/deleted the files on the master.
log INFO "== Entering periodical apply loop at $(date -Is) =="
while true; do
start_sec=$(date +"%s")
# Only print stderr for the readability of logging
update_addons true ">/dev/null"
end_sec=$(date +"%s")
len_sec=$((${end_sec}-${start_sec}))
# subtract the time passed from the sleep time
if [[ ${len_sec} -lt ${ADDON_CHECK_INTERVAL_SEC} ]]; then
sleep_time=$((${ADDON_CHECK_INTERVAL_SEC}-${len_sec}))
sleep ${sleep_time}
fi
done

View file

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: kube-system

View file

@ -0,0 +1,6 @@
# Maintainers
Matt Dupre <matt@projectcalico.org>, Casey Davenport <casey@tigera.io> and committers to the https://github.com/projectcalico/k8s-policy repository.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/calico-policy-controller/MAINTAINERS.md?pixel)]()

View file

@ -0,0 +1,11 @@
# Calico Policy Controller
==============
Calico Policy Controller is an implementation of the Kubernetes network policy API.
Learn more at:
- https://github.com/projectcalico/k8s-policy
- http://kubernetes.io/docs/user-guide/networkpolicies/
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/calico-policy-controller/README.md?pixel)]()

View file

@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: calico-etcd
kubernetes.io/cluster-service: "true"
name: calico-etcd
namespace: kube-system
spec:
clusterIP: 10.0.0.17
ports:
- port: 6666
selector:
k8s-app: calico-etcd

View file

@ -0,0 +1,41 @@
apiVersion: "apps/v1beta1"
kind: StatefulSet
metadata:
name: calico-etcd
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
k8s-app: calico-etcd
spec:
serviceName: calico-etcd
replicas: 1
template:
metadata:
labels:
kubernetes.io/cluster-service: "true"
k8s-app: calico-etcd
spec:
hostNetwork: true
containers:
- name: calico-etcd
image: gcr.io/google_containers/etcd:2.2.1
env:
- name: CALICO_ETCD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command: ["/bin/sh","-c"]
args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
volumeMounts:
- name: var-etcd
mountPath: /var/etcd
volumeClaimTemplates:
- metadata:
name: var-etcd
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi

Some files were not shown because too many files have changed in this diff.