Switch to github.com/golang/dep for vendoring

Signed-off-by: Mrunal Patel <mrunalp@gmail.com>
Mrunal Patel 2017-01-31 16:45:59 -08:00
parent d6ab91be27
commit 8e5b17cf13
15431 changed files with 3971413 additions and 8881 deletions

vendor/k8s.io/kubernetes/build/BUILD generated vendored Normal file

@@ -0,0 +1,76 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/docker:docker.bzl", "docker_build")
docker_build(
name = "busybox",
debs = [
"@busybox_deb//file",
],
symlinks = {
"/bin/sh": "/bin/busybox",
"/usr/bin/busybox": "/bin/busybox",
"/usr/sbin/busybox": "/bin/busybox",
"/sbin/busybox": "/bin/busybox",
},
)
docker_build(
name = "busybox-libc",
base = ":busybox",
debs = [
"@libc_deb//file",
],
)
docker_build(
name = "busybox-net",
base = ":busybox-libc",
debs = [
"@iptables_deb//file",
"@iproute2_deb//file",
"@libnetlink_deb//file",
"@libxtables_deb//file",
],
)
[docker_build(
name = binary,
base = ":busybox-libc",
cmd = ["/usr/bin/" + binary],
debs = [
"//build/debs:%s.deb" % binary,
],
repository = "gcr.io/google-containers",
) for binary in [
"kube-apiserver",
"kube-controller-manager",
"kube-scheduler",
"kube-aggregator",
]]
docker_build(
name = "kube-proxy",
base = ":busybox-net",
cmd = ["/usr/bin/kube-proxy"],
debs = [
"//build/debs:kube-proxy.deb",
],
repository = "gcr.io/google-containers",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//build/debs:all-srcs",
],
tags = ["automanaged"],
)
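
The list comprehension above stamps out one `docker_build` rule per server binary; conceptually it expands as in this shell sketch (illustrative only):

```
for binary in kube-apiserver kube-controller-manager kube-scheduler kube-aggregator; do
  echo "docker_build(name = ${binary}, base = :busybox-libc, cmd = /usr/bin/${binary}, deb = //build/debs:${binary}.deb)"
done
```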

vendor/k8s.io/kubernetes/build/OWNERS generated vendored Normal file

@@ -0,0 +1,6 @@
assignees:
- ihmccreery
- ixdy
- jbeda
- lavalamp
- zmerlynn

vendor/k8s.io/kubernetes/build/README.md generated vendored Normal file

@@ -0,0 +1,112 @@
# Building Kubernetes
Building Kubernetes is easy if you take advantage of the containerized build environment. This document will guide you through the build process.
## Requirements
1. Docker, using one of the following configurations:
   1. **Mac OS X** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/docker-for-mac/).
      **Note**: You will want to set the Docker VM to have at least 3GB of initial memory or building will likely fail. (See: [#11852](http://issue.k8s.io/11852).)
   2. **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
   3. **Remote Docker engine** Use a big machine in the cloud to build faster. This is a little trickier, so see the "Really Remote Docker Engine" section below.
2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
You must install and configure Google Cloud SDK if you want to upload your release to Google Cloud Storage; you may safely omit it otherwise.
## Overview
While it is possible to build Kubernetes using a local golang installation, we have a build process that runs in a Docker container. This simplifies initial set up and provides for a very consistent build and test environment.
## Key scripts
The following scripts are found in the `build/` directory. Note that all scripts must be run from the Kubernetes root directory.
* `build/run.sh`: Run a command in a build docker container. Common invocations:
  * `build/run.sh make`: Build just Linux binaries in the container. Pass options and packages as necessary.
  * `build/run.sh make cross`: Build all binaries for all platforms.
  * `build/run.sh make test`: Run all unit tests.
  * `build/run.sh make test-integration`: Run integration tests.
  * `build/run.sh make test-cmd`: Run CLI tests.
* `build/copy-output.sh`: This will copy the contents of `_output/dockerized/bin` from the Docker container to the local `_output/dockerized/bin`. It will also copy out specific file patterns that are generated as part of the build process. This is run automatically as part of `build/run.sh`.
* `build/make-clean.sh`: Clean out the contents of `_output`, remove any locally built container images and remove the data container.
* `build/shell.sh`: Drop into a `bash` shell in a build container with a snapshot of the current repo code.
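
Putting these together, a typical inner loop looks like this (a sketch using only the scripts above):

```
build/run.sh make                # build Linux binaries
build/run.sh make test           # run unit tests in the same environment
build/shell.sh                   # poke around inside the build container
build/make-clean.sh              # wipe _output and the build/data containers
```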
## Basic Flow
The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. These scripts will both ensure that the right data is cached from run to run for incremental builds and will copy the results back out of the container.
The `kube-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the Kubernetes repo to minimize the amount of data we need to package up when building the image.
There are three different container instances that are run from this image. The first is a "data" container that stores all data needing to persist to support incremental builds. Next there is an "rsync" container that is used to transfer data in and out of the data container. Lastly there is a "build" container that is used for actually doing build actions. The data container persists across runs while the rsync and build containers are deleted after each use.
`rsync` is used transparently behind the scenes to efficiently move data in and out of the container. This will use an ephemeral port picked by Docker. You can modify this by setting the `KUBE_RSYNC_PORT` env variable.
All Docker names are suffixed with a hash derived from the file path (to allow concurrent usage on things like CI machines) and a version number. When the version number changes, all state is cleared and a clean build is started. This allows the build infrastructure to change while signaling to CI systems that old artifacts need to be deleted.
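The suffix computation is sketched below; it mirrors `kube::build::short_hash` in `build/common.sh` (shown later in this commit), which hashes the hostname and repo path:

```
# Sketch: how a per-checkout container/image suffix is derived.
short_hash=$(echo -n "${HOSTNAME}:${KUBE_ROOT}" | md5sum)
echo "kube-build-${short_hash:0:10}-${KUBE_BUILD_IMAGE_VERSION}"
```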
## Proxy Settings
If you are behind a proxy and are letting these scripts use `docker-machine` to set up your local VM on macOS, you need to export proxy settings for the Kubernetes build; define the following environment variables:
```
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses that should bypass the proxy for the Kubernetes build, for example:
```
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are running the build with sudo (for example, `make quick-release`), run `sudo -E make quick-release` so that the environment variables are passed through.
## Really Remote Docker Engine
It is possible to use a Docker Engine that is running remotely (under your desk or in the cloud). Docker must be configured to connect to that machine and the local rsync port must be forwarded (via SSH or nc) from localhost to the remote machine.
To do this easily with GCE and `docker-machine`, do something like this:
```
# Create the remote docker machine on GCE. This is a pretty beefy machine with SSD disk.
KUBE_BUILD_VM=k8s-build
KUBE_BUILD_GCE_PROJECT=<project>
docker-machine create \
--driver=google \
--google-project=${KUBE_BUILD_GCE_PROJECT} \
--google-zone=us-west1-a \
--google-machine-type=n1-standard-8 \
--google-disk-size=50 \
--google-disk-type=pd-ssd \
${KUBE_BUILD_VM}
# Set up local docker to talk to that machine
eval $(docker-machine env ${KUBE_BUILD_VM})
# Pin down the port on which rsync will be exposed on the remote machine
export KUBE_RSYNC_PORT=8730
# Forward local 8730 to that machine so that rsync works
docker-machine ssh ${KUBE_BUILD_VM} -L ${KUBE_RSYNC_PORT}:localhost:${KUBE_RSYNC_PORT} -N &
```
Look at `docker-machine stop`, `docker-machine start` and `docker-machine rm` to manage this VM.
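For example:

```
docker-machine stop ${KUBE_BUILD_VM}     # pause the remote builder
docker-machine start ${KUBE_BUILD_VM}    # resume it later
eval $(docker-machine env ${KUBE_BUILD_VM})
docker-machine rm ${KUBE_BUILD_VM}       # delete the VM for good
```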
## Releasing
The `build/release.sh` script will build a release. It will build binaries, run tests, and (optionally) build runtime Docker images.
The main output is a tar file: `kubernetes.tar.gz`. This includes:
* Cross-compiled client utilities.
* Script (`kubectl`) for picking and running the right client binary based on platform.
* Examples.
* Cluster deployment scripts for various clouds.
* Tar file containing all server binaries.
* Tar file containing the salt deployment tree shared across multiple cloud deployments.
In addition, there are some other tar files that are created:
* `kubernetes-client-*.tar.gz` Client binaries for a specific platform.
* `kubernetes-server-*.tar.gz` Server binaries for a specific platform.
* `kubernetes-salt.tar.gz` The salt script/tree shared across multiple deployment scripts.
When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
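A minimal release run, assuming the requirements above are met (a sketch):

```
build/release.sh
ls _output/release-tars/    # kubernetes.tar.gz, kubernetes-client-*.tar.gz, ...
```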
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()

vendor/k8s.io/kubernetes/build/build-image/Dockerfile generated vendored Normal file

@@ -0,0 +1,47 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates a standard build environment for building Kubernetes
FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
# Mark this as a kube-build container
RUN touch /kube-build-image
# To run as non-root we sometimes need to rebuild go stdlib packages.
RUN chmod -R a+rwx /usr/local/go/pkg ${K8S_PATCHED_GOROOT}/pkg
# For running integration tests /var/run/kubernetes is required
# and should be writable by user
RUN mkdir /var/run/kubernetes && chmod a+rwx /var/run/kubernetes
# The kubernetes source is expected to be mounted here. This will be the base
# of operations.
ENV HOME /go/src/k8s.io/kubernetes
WORKDIR ${HOME}
# Make output from the dockerized build go someplace else
ENV KUBE_OUTPUT_SUBPATH _output/dockerized
# Pick up version stuff here as we don't copy our .git over.
ENV KUBE_GIT_VERSION_FILE ${HOME}/.dockerized-kube-version-defs
# Make log messages use the right timezone
ADD localtime /etc/localtime
RUN chmod a+r /etc/localtime
# Set up rsyncd
ADD rsyncd.password /
RUN chmod a+r /rsyncd.password
ADD rsyncd.sh /
RUN chmod a+rx /rsyncd.sh

vendor/k8s.io/kubernetes/build/build-image/VERSION generated vendored Normal file

@@ -0,0 +1 @@
4

vendor/k8s.io/kubernetes/build/build-image/cross/Dockerfile generated vendored Normal file

@@ -0,0 +1,93 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates a standard build environment for building cross-platform
# Go binaries for the architectures Kubernetes cares about.
FROM golang:1.7.4
ENV GOARM 6
ENV KUBE_DYNAMIC_CROSSPLATFORMS \
armel \
arm64 \
s390x \
ppc64el
ENV KUBE_CROSSPLATFORMS \
linux/386 \
linux/arm linux/arm64 \
linux/ppc64le \
linux/s390x \
darwin/amd64 darwin/386 \
windows/amd64 windows/386
# Pre-compile the standard go library when cross-compiling. This is much easier now that we have go1.5+
RUN for platform in ${KUBE_CROSSPLATFORMS}; do GOOS=${platform%/*} GOARCH=${platform##*/} go install std; done
# Install g++, then download and install protoc for generating protobuf output
RUN apt-get update \
&& apt-get install -y g++ rsync apt-utils file patch \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/local/src/protobuf \
&& cd /usr/local/src/protobuf \
&& curl -sSL https://github.com/google/protobuf/releases/download/v3.0.0-beta-2/protobuf-cpp-3.0.0-beta-2.tar.gz | tar -xzv \
&& cd protobuf-3.0.0-beta-2 \
&& ./configure \
&& make install \
&& ldconfig \
&& cd .. \
&& rm -rf protobuf-3.0.0-beta-2 \
&& protoc --version
# Use dynamic cgo linking for architectures other than amd64 for the server platforms
# To install crossbuild essential for other architectures add the following repository.
RUN echo "deb http://archive.ubuntu.com/ubuntu xenial main universe" > /etc/apt/sources.list.d/cgocrosscompiling.list \
&& apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5 3B4FE6ACC0B21F32 \
&& apt-get update \
&& apt-get install -y build-essential \
&& for platform in ${KUBE_DYNAMIC_CROSSPLATFORMS}; do apt-get install -y crossbuild-essential-${platform}; done \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# work around 64MB tmpfs size in Docker 1.6
ENV TMPDIR /tmp.k8s
# Get the code coverage tool, goimports, and godep
RUN mkdir $TMPDIR \
&& chmod a+rwx $TMPDIR \
&& chmod o+t $TMPDIR \
&& go get golang.org/x/tools/cmd/cover \
golang.org/x/tools/cmd/goimports \
github.com/tools/godep
# Download and symlink etcd. We need this for our integration tests.
RUN export ETCD_VERSION=v3.0.14; \
mkdir -p /usr/local/src/etcd \
&& cd /usr/local/src/etcd \
&& curl -fsSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xz \
&& ln -s ../src/etcd/etcd-${ETCD_VERSION}-linux-amd64/etcd /usr/local/bin/
# TODO: Remove the patched GOROOT when we have an official golang that has a working arm and ppc64le linker
ENV K8S_PATCHED_GOLANG_VERSION=1.7.4 \
K8S_PATCHED_GOROOT=/usr/local/go_k8s_patched
RUN mkdir -p ${K8S_PATCHED_GOROOT} \
&& curl -sSL https://github.com/golang/go/archive/go${K8S_PATCHED_GOLANG_VERSION}.tar.gz | tar -xz -C ${K8S_PATCHED_GOROOT} --strip-components=1
# We need a patched go1.7.1 for linux/arm (https://github.com/kubernetes/kubernetes/issues/29904)
COPY golang-patches/CL28857-go1.7.1-luxas.patch ${K8S_PATCHED_GOROOT}/
RUN cd ${K8S_PATCHED_GOROOT} \
&& patch -p1 < CL28857-go1.7.1-luxas.patch \
&& cd src \
&& GOROOT_FINAL=${K8S_PATCHED_GOROOT} GOROOT_BOOTSTRAP=/usr/local/go ./make.bash \
&& for platform in linux/arm; do GOOS=${platform%/*} GOARCH=${platform##*/} GOROOT=${K8S_PATCHED_GOROOT} go install std; done
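
With the standard library pre-compiled for each target platform above, cross-compiling inside this image reduces to setting the usual Go environment variables; a sketch (the package path is illustrative):

```
# Static linux/arm64 build reusing the pre-installed std packages.
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o /tmp/kubectl-arm64 ./cmd/kubectl
```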

vendor/k8s.io/kubernetes/build/build-image/cross/Makefile generated vendored Normal file

@@ -0,0 +1,27 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: build push
IMAGE=kube-cross
TAG=$(shell cat VERSION)
all: push
build:
docker build --pull -t gcr.io/google_containers/$(IMAGE):$(TAG) .
push: build
gcloud docker --server=gcr.io -- push gcr.io/google_containers/$(IMAGE):$(TAG)

vendor/k8s.io/kubernetes/build/build-image/cross/VERSION generated vendored Normal file

@@ -0,0 +1 @@
v1.7.4-1

vendor/k8s.io/kubernetes/build/build-image/golang-patches/CL28857-go1.7.1-luxas.patch generated vendored Normal file

@@ -0,0 +1,390 @@
From cc1015ff9bb020ea92c1854026e9ae395a8504b2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lucas=20K=C3=A4ldstr=C3=B6m?=
<lucas.kaldstrom@hotmail.co.uk>
Date: Mon, 12 Sep 2016 22:55:42 +0300
Subject: [PATCH] Ported cherrymui's CL 28857 to the go1.7.1 branch
---
src/cmd/compile/internal/arm/ssa.go | 16 ++++++
src/cmd/compile/internal/gc/cgen.go | 5 ++
src/cmd/compile/internal/gc/main.go | 11 ++--
src/cmd/compile/internal/gc/plive.go | 5 ++
src/cmd/internal/obj/arm/asm5.go | 32 ++++++++++++
src/cmd/internal/obj/arm/obj5.go | 3 ++
src/cmd/internal/obj/link.go | 98 +++++++++++++++++++-----------------
src/cmd/link/internal/arm/asm.go | 18 ++++++-
src/cmd/link/internal/ld/lib.go | 4 +-
src/runtime/asm_arm.s | 2 +-
10 files changed, 138 insertions(+), 56 deletions(-)
diff --git a/src/cmd/compile/internal/arm/ssa.go b/src/cmd/compile/internal/arm/ssa.go
index 8f466e3..a066e02 100644
--- a/src/cmd/compile/internal/arm/ssa.go
+++ b/src/cmd/compile/internal/arm/ssa.go
@@ -111,6 +111,22 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
gc.AddAux(&p.To, v)
case ssa.OpARMCALLstatic:
// TODO: deferreturn
+ if v.Aux.(*gc.Sym) == gc.Deferreturn.Sym {
+ // Deferred calls will appear to be returning to
+ // the CALL deferreturn(SB) that we are about to emit.
+ // However, the stack trace code will show the line
+ // of the instruction byte before the return PC.
+ // To avoid that being an unrelated instruction,
+ // insert an actual hardware NOP that will have the right line number.
+ // This is different from obj.ANOP, which is a virtual no-op
+ // that doesn't make it into the instruction stream.
+ ginsnop()
+ if !gc.Ctxt.Flag_largemodel {
+ // We always back up two instructions.
+ // For non-large build, insert another NOP.
+ ginsnop()
+ }
+ }
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
diff --git a/src/cmd/compile/internal/gc/cgen.go b/src/cmd/compile/internal/gc/cgen.go
index 74fe463..7507d5f 100644
--- a/src/cmd/compile/internal/gc/cgen.go
+++ b/src/cmd/compile/internal/gc/cgen.go
@@ -2373,6 +2373,11 @@ func Ginscall(f *Node, proc int) {
Thearch.Ginsnop()
}
}
+ if Ctxt.Arch.Family == sys.ARM && !Ctxt.Flag_largemodel {
+ // On ARM we always back up two instructions.
+ // For non-large build, insert another NOP.
+ Thearch.Ginsnop()
+ }
}
p := Thearch.Gins(obj.ACALL, nil, f)
diff --git a/src/cmd/compile/internal/gc/main.go b/src/cmd/compile/internal/gc/main.go
index b4df7ed..c0a8aa1 100644
--- a/src/cmd/compile/internal/gc/main.go
+++ b/src/cmd/compile/internal/gc/main.go
@@ -148,9 +148,9 @@ func Main() {
goos = obj.Getgoos()
Nacl = goos == "nacl"
- if Nacl {
- flag_largemodel = true
- }
+ //if Nacl {
+ // flag_largemodel = true
+ //}
flag.BoolVar(&compiling_runtime, "+", false, "compiling runtime")
obj.Flagcount("%", "debug non-static initializers", &Debug['%'])
@@ -204,9 +204,7 @@ func Main() {
flag.BoolVar(&flag_shared, "shared", false, "generate code that can be linked into a shared library")
flag.BoolVar(&flag_dynlink, "dynlink", false, "support references to Go symbols defined in other shared libraries")
}
- if Thearch.LinkArch.Family == sys.AMD64 {
- flag.BoolVar(&flag_largemodel, "largemodel", false, "generate code that assumes a large memory model")
- }
+ flag.BoolVar(&flag_largemodel, "largemodel", false, "generate code that assumes a large memory model")
flag.StringVar(&cpuprofile, "cpuprofile", "", "write cpu profile to `file`")
flag.StringVar(&memprofile, "memprofile", "", "write memory profile to `file`")
flag.Int64Var(&memprofilerate, "memprofilerate", 0, "set runtime.MemProfileRate to `rate`")
@@ -216,6 +214,7 @@ func Main() {
Ctxt.Flag_shared = flag_dynlink || flag_shared
Ctxt.Flag_dynlink = flag_dynlink
Ctxt.Flag_optimize = Debug['N'] == 0
+ Ctxt.Flag_largemodel = flag_largemodel
Ctxt.Debugasm = int32(Debug['S'])
Ctxt.Debugvlog = int32(Debug['v'])
diff --git a/src/cmd/compile/internal/gc/plive.go b/src/cmd/compile/internal/gc/plive.go
index ca0421d..ad26d24 100644
--- a/src/cmd/compile/internal/gc/plive.go
+++ b/src/cmd/compile/internal/gc/plive.go
@@ -1400,6 +1400,11 @@ func livenessepilogue(lv *Liveness) {
// the call.
prev = prev.Opt.(*obj.Prog)
}
+ if Ctxt.Arch.Family == sys.ARM && !Ctxt.Flag_largemodel {
+ // On ARM we always back up two instructions.
+ // For non-large build, there is another NOP.
+ prev = prev.Opt.(*obj.Prog)
+ }
splicebefore(lv, bb, newpcdataprog(prev, pos), prev)
} else {
splicebefore(lv, bb, newpcdataprog(p, pos), p)
diff --git a/src/cmd/internal/obj/arm/asm5.go b/src/cmd/internal/obj/arm/asm5.go
index d37091f..8136492 100644
--- a/src/cmd/internal/obj/arm/asm5.go
+++ b/src/cmd/internal/obj/arm/asm5.go
@@ -585,6 +585,30 @@ func span5(ctxt *obj.Link, cursym *obj.LSym) {
break
}
+ if ctxt.Flag_largemodel && (p.As == AB || p.As == ABL || p.As == obj.ADUFFZERO || p.As == obj.ADUFFCOPY) && p.To.Name == obj.NAME_EXTERN {
+ // in large mode, emit indirect call
+ // MOVW $target, Rtmp
+ // BL (Rtmp)
+ // use REGLINK as Rtmp, as soft div calls expects REGTMP to pass argument
+ tmp := int16(REGLINK)
+ q := obj.Appendp(ctxt, p)
+ q.As = ABL
+ if p.As == AB {
+ q.As = AB
+ tmp = REGTMP // should not clobber REGLINK in this case
+ }
+ q.To.Type = obj.TYPE_MEM
+ q.To.Reg = tmp
+ q.To.Sym = p.To.Sym // tell asmout to emits R_CALLARMLARGE reloc
+
+ p.As = AMOVW
+ p.From = p.To // jump target
+ p.From.Type = obj.TYPE_ADDR
+ p.To = obj.Addr{}
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = tmp
+ }
+
ctxt.Curp = p
p.Pc = int64(c)
o = oplook(ctxt, p)
@@ -1583,6 +1607,14 @@ func asmout(ctxt *obj.Link, p *obj.Prog, o *Optab, out []uint32) {
}
o1 = oprrr(ctxt, ABL, int(p.Scond))
o1 |= (uint32(p.To.Reg) & 15) << 0
+ if p.To.Sym != nil {
+ rel := obj.Addrel(ctxt.Cursym)
+ rel.Off = int32(ctxt.Pc)
+ rel.Siz = 4
+ rel.Sym = p.To.Sym
+ rel.Type = obj.R_CALLARMLARGE
+ break
+ }
rel := obj.Addrel(ctxt.Cursym)
rel.Off = int32(ctxt.Pc)
rel.Siz = 0
diff --git a/src/cmd/internal/obj/arm/obj5.go b/src/cmd/internal/obj/arm/obj5.go
index 9cf2f29..13bf952 100644
--- a/src/cmd/internal/obj/arm/obj5.go
+++ b/src/cmd/internal/obj/arm/obj5.go
@@ -584,6 +584,7 @@ func preprocess(ctxt *obj.Link, cursym *obj.LSym) {
p.As = ABL
p.Lineno = q1.Lineno
p.To.Type = obj.TYPE_BRANCH
+ p.To.Name = obj.NAME_EXTERN
switch o {
case ADIV:
p.To.Sym = ctxt.Sym_div
@@ -687,6 +688,7 @@ func softfloat(ctxt *obj.Link, cursym *obj.LSym) {
p.Link = next
p.As = ABL
p.To.Type = obj.TYPE_BRANCH
+ p.To.Name = obj.NAME_EXTERN
p.To.Sym = symsfloat
p.Lineno = next.Lineno
@@ -820,6 +822,7 @@ func stacksplit(ctxt *obj.Link, p *obj.Prog, framesize int32) *obj.Prog {
call := obj.Appendp(ctxt, movw)
call.As = obj.ACALL
call.To.Type = obj.TYPE_BRANCH
+ call.To.Name = obj.NAME_EXTERN
morestack := "runtime.morestack"
switch {
case ctxt.Cursym.Cfunc:
diff --git a/src/cmd/internal/obj/link.go b/src/cmd/internal/obj/link.go
index b6861f4..51e9fd3 100644
--- a/src/cmd/internal/obj/link.go
+++ b/src/cmd/internal/obj/link.go
@@ -588,6 +588,11 @@ const (
// R_ADDRMIPSTLS (only used on mips64) resolves to the low 16 bits of a TLS
// address (offset from thread pointer), by encoding it into the instruction.
R_ADDRMIPSTLS
+
+ // R_CALLARMLARGE applies on an indirect CALL with known target used on large mode.
+ // Currently it does nothing but tell the linker the target for stack split check.
+ // In the future linker may optimize this to a NOP and a direct CALL if it is safe.
+ R_CALLARMLARGE
)
type Auto struct {
@@ -617,52 +622,53 @@ const (
// Link holds the context for writing object code from a compiler
// to be linker input or for reading that input into the linker.
type Link struct {
- Goarm int32
- Headtype int
- Arch *LinkArch
- Debugasm int32
- Debugvlog int32
- Debugdivmod int32
- Debugpcln int32
- Flag_shared bool
- Flag_dynlink bool
- Flag_optimize bool
- Bso *bufio.Writer
- Pathname string
- Goroot string
- Goroot_final string
- Hash map[SymVer]*LSym
- LineHist LineHist
- Imports []string
- Plist *Plist
- Plast *Plist
- Sym_div *LSym
- Sym_divu *LSym
- Sym_mod *LSym
- Sym_modu *LSym
- Plan9privates *LSym
- Curp *Prog
- Printp *Prog
- Blitrl *Prog
- Elitrl *Prog
- Rexflag int
- Vexflag int
- Rep int
- Repn int
- Lock int
- Asmode int
- AsmBuf AsmBuf // instruction buffer for x86
- Instoffset int64
- Autosize int32
- Armsize int32
- Pc int64
- DiagFunc func(string, ...interface{})
- Mode int
- Cursym *LSym
- Version int
- Textp *LSym
- Etextp *LSym
- Errors int
+ Goarm int32
+ Headtype int
+ Arch *LinkArch
+ Debugasm int32
+ Debugvlog int32
+ Debugdivmod int32
+ Debugpcln int32
+ Flag_shared bool
+ Flag_dynlink bool
+ Flag_optimize bool
+ Flag_largemodel bool // generate code that assumes a large memory model
+ Bso *bufio.Writer
+ Pathname string
+ Goroot string
+ Goroot_final string
+ Hash map[SymVer]*LSym
+ LineHist LineHist
+ Imports []string
+ Plist *Plist
+ Plast *Plist
+ Sym_div *LSym
+ Sym_divu *LSym
+ Sym_mod *LSym
+ Sym_modu *LSym
+ Plan9privates *LSym
+ Curp *Prog
+ Printp *Prog
+ Blitrl *Prog
+ Elitrl *Prog
+ Rexflag int
+ Vexflag int
+ Rep int
+ Repn int
+ Lock int
+ Asmode int
+ AsmBuf AsmBuf // instruction buffer for x86
+ Instoffset int64
+ Autosize int32
+ Armsize int32
+ Pc int64
+ DiagFunc func(string, ...interface{})
+ Mode int
+ Cursym *LSym
+ Version int
+ Textp *LSym
+ Etextp *LSym
+ Errors int
Framepointer_enabled bool
diff --git a/src/cmd/link/internal/arm/asm.go b/src/cmd/link/internal/arm/asm.go
index 0c3e957..0cf61bb 100644
--- a/src/cmd/link/internal/arm/asm.go
+++ b/src/cmd/link/internal/arm/asm.go
@@ -410,6 +410,11 @@ func machoreloc1(r *ld.Reloc, sectoff int64) int {
return 0
}
+// sign extend a 24-bit integer
+func signext24(x int64) int32 {
+ return (int32(x) << 8) >> 8
+}
+
func archreloc(r *ld.Reloc, s *ld.LSym, val *int64) int {
if ld.Linkmode == ld.LinkExternal {
switch r.Type {
@@ -445,6 +450,9 @@ func archreloc(r *ld.Reloc, s *ld.LSym, val *int64) int {
*val = int64(braddoff(int32(0xff000000&uint32(r.Add)), int32(0xffffff&uint32(r.Xadd/4))))
return 0
+
+ case obj.R_CALLARMLARGE:
+ return 0
}
return -1
@@ -479,8 +487,16 @@ func archreloc(r *ld.Reloc, s *ld.LSym, val *int64) int {
return 0
case obj.R_CALLARM: // bl XXXXXX or b YYYYYY
- *val = int64(braddoff(int32(0xff000000&uint32(r.Add)), int32(0xffffff&uint32((ld.Symaddr(r.Sym)+int64((uint32(r.Add))*4)-(s.Value+int64(r.Off)))/4))))
+ // low 24-bit encodes the target address
+ t := (ld.Symaddr(r.Sym) + int64(signext24(r.Add&0xffffff)*4) - (s.Value + int64(r.Off))) / 4
+ if t > 0x7fffff || t < -0x800000 {
+ ld.Diag("direct call too far %d, should build with -gcflags -largemodel", t)
+ }
+ *val = int64(braddoff(int32(0xff000000&uint32(r.Add)), int32(0xffffff&t)))
+
+ return 0
+ case obj.R_CALLARMLARGE:
return 0
}
diff --git a/src/cmd/link/internal/ld/lib.go b/src/cmd/link/internal/ld/lib.go
index 14f4fa9..cc3b50e 100644
--- a/src/cmd/link/internal/ld/lib.go
+++ b/src/cmd/link/internal/ld/lib.go
@@ -1826,7 +1826,7 @@ func stkcheck(up *Chain, depth int) int {
r = &s.R[ri]
switch r.Type {
// Direct call.
- case obj.R_CALL, obj.R_CALLARM, obj.R_CALLARM64, obj.R_CALLPOWER, obj.R_CALLMIPS:
+ case obj.R_CALL, obj.R_CALLARM, obj.R_CALLARM64, obj.R_CALLPOWER, obj.R_CALLMIPS, obj.R_CALLARMLARGE:
ch.limit = int(int32(limit) - pcsp.value - int32(callsize()))
ch.sym = r.Sym
if stkcheck(&ch, depth+1) < 0 {
@@ -2164,7 +2164,7 @@ func callgraph() {
if r.Sym == nil {
continue
}
- if (r.Type == obj.R_CALL || r.Type == obj.R_CALLARM || r.Type == obj.R_CALLPOWER || r.Type == obj.R_CALLMIPS) && r.Sym.Type == obj.STEXT {
+ if (r.Type == obj.R_CALL || r.Type == obj.R_CALLARM || r.Type == obj.R_CALLPOWER || r.Type == obj.R_CALLMIPS || r.Type == obj.R_CALLARMLARGE) && r.Sym.Type == obj.STEXT {
fmt.Fprintf(Bso, "%s calls %s\n", s.Name, r.Sym.Name)
}
}
diff --git a/src/runtime/asm_arm.s b/src/runtime/asm_arm.s
index f02297e..0930585 100644
--- a/src/runtime/asm_arm.s
+++ b/src/runtime/asm_arm.s
@@ -465,7 +465,7 @@ CALLFN(·call1073741824, 1073741824)
// (And double-check that pop is atomic in that way.)
TEXT runtime·jmpdefer(SB),NOSPLIT,$0-8
MOVW 0(R13), LR
- MOVW $-4(LR), LR // BL deferreturn
+ MOVW $-8(LR), LR // BL deferreturn
MOVW fv+0(FP), R7
MOVW argp+4(FP), R13
MOVW $-4(R13), R13 // SP is 4 below argp, due to saved LR
--
2.5.0

vendor/k8s.io/kubernetes/build/build-image/rsyncd.sh generated vendored Executable file

@@ -0,0 +1,83 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script will set up and run rsyncd to allow data to move into and out of
# our dockerized build system. This is used for syncing sources and changes of
# sources into the docker-build-container. It is also used to transfer built binaries
# and generated files back out.
#
# When run as root (rare) it will preserve the file IDs as sent from the client.
# Usually it will run as the non-dockerized UID/GID and end up translating all
# file ownership to that user.
set -o errexit
set -o nounset
set -o pipefail
# The directory that gets sync'd
VOLUME=${HOME}
# Assume that this is running in Docker on a bridge. Allow connections from
# anything on the local subnet.
ALLOW=$(ip route | awk '/^default via/ { reg = "^[0-9./]+ dev "$5 } ; $0 ~ reg { print $1 }')
CONFDIR="/tmp/rsync.k8s"
PIDFILE="${CONFDIR}/rsyncd.pid"
CONFFILE="${CONFDIR}/rsyncd.conf"
SECRETS="${CONFDIR}/rsyncd.secrets"
mkdir -p "${CONFDIR}"
if [[ -f "${PIDFILE}" ]]; then
PID=$(cat "${PIDFILE}")
echo "Cleaning up old PID file: ${PIDFILE}"
kill $PID &> /dev/null || true
rm "${PIDFILE}"
fi
PASSWORD=$(</rsyncd.password)
cat <<EOF >"${SECRETS}"
k8s:${PASSWORD}
EOF
chmod go= "${SECRETS}"
USER_CONFIG=
if [[ "$(id -u)" == "0" ]]; then
USER_CONFIG=" uid = 0"$'\n'" gid = 0"
fi
cat <<EOF >"${CONFFILE}"
pid file = ${PIDFILE}
use chroot = no
log file = /dev/stdout
reverse lookup = no
munge symlinks = no
port = 8730
[k8s]
numeric ids = true
$USER_CONFIG
hosts deny = *
hosts allow = ${ALLOW}
auth users = k8s
secrets file = ${SECRETS}
read only = false
path = ${VOLUME}
filter = - /.make/ - /.git/ - /_tmp/
EOF
exec /usr/bin/rsync --no-detach --daemon --config="${CONFFILE}" "$@"
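
A client talks to this daemon as the `k8s` user against the `[k8s]` module defined above; for example (this mirrors the probe in `build/common.sh` later in this commit):

```
# List the module to verify the daemon is up; the password file is generated at build time.
rsync "rsync://k8s@127.0.0.1:8730/k8s/" \
  --password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
```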

vendor/k8s.io/kubernetes/build/cni/Makefile generated vendored Normal file

@@ -0,0 +1,54 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the CNI binaries.
#
# Usage:
# [ARCH=amd64] [CNI_RELEASE=v0.3.0] make (build|push)
ARCH?=amd64
CNI_RELEASE?=07a8a28637e97b22eb8dfe710eeae1344f69d16e
CNI_TARBALL=cni-$(ARCH)-$(CNI_RELEASE).tar.gz
CUR_DIR=$(shell pwd)
OUTPUT_DIR=$(CUR_DIR)/output
all: build
build:
mkdir -p $(OUTPUT_DIR)
docker run -it -v $(OUTPUT_DIR):/output golang:1.6 /bin/bash -c "\
git clone https://github.com/containernetworking/cni\
&& cd cni \
&& git checkout $(CNI_RELEASE) \
&& CGO_ENABLED=0 GOOS=linux GOARCH=$(ARCH) ./build \
&& tar -zcvf $(CNI_TARBALL) bin/ \
&& mv $(CNI_TARBALL) /output/"
# Backward Compatibility
ifeq ($(ARCH),amd64)
cp $(OUTPUT_DIR)/$(CNI_TARBALL) $(OUTPUT_DIR)/cni-$(CNI_RELEASE).tar.gz
endif
push: build
gsutil cp $(OUTPUT_DIR)/$(CNI_TARBALL) gs://kubernetes-release/network-plugins
ifeq ($(ARCH),amd64)
gsutil cp $(OUTPUT_DIR)/cni-$(CNI_RELEASE).tar.gz gs://kubernetes-release/network-plugins
endif
clean:
rm -rf output/
.PHONY: all
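
Example invocations, following the usage comment at the top of this Makefile (pushing assumes write access to the GCS bucket):

```
make build                  # amd64 tarball for the pinned CNI_RELEASE
ARCH=arm64 make build       # cross-build for arm64
make push                   # upload to gs://kubernetes-release/network-plugins
```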

vendor/k8s.io/kubernetes/build/common.sh generated vendored Executable file

@@ -0,0 +1,716 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Common utilities, variables and checks for all build scripts.
set -o errexit
set -o nounset
set -o pipefail
DOCKER_OPTS=${DOCKER_OPTS:-""}
DOCKER=(docker ${DOCKER_OPTS})
DOCKER_HOST=${DOCKER_HOST:-""}
DOCKER_MACHINE_NAME=${DOCKER_MACHINE_NAME:-"kube-dev"}
readonly DOCKER_MACHINE_DRIVER=${DOCKER_MACHINE_DRIVER:-"virtualbox --virtualbox-memory 4096 --virtualbox-cpu-count -1"}
# This will canonicalize the path
KUBE_ROOT=$(cd $(dirname "${BASH_SOURCE}")/.. && pwd -P)
source "${KUBE_ROOT}/hack/lib/init.sh"
# Set KUBE_BUILD_PPC64LE to y to build for ppc64le in addition to other
# platforms.
# TODO(IBM): remove KUBE_BUILD_PPC64LE and reenable ppc64le compilation by
# default when
# https://github.com/kubernetes/kubernetes/issues/30384 and
# https://github.com/kubernetes/kubernetes/issues/25886 are fixed.
# The majority of the logic is in hack/lib/golang.sh.
readonly KUBE_BUILD_PPC64LE="${KUBE_BUILD_PPC64LE:-n}"
# Constants
readonly KUBE_BUILD_IMAGE_REPO=kube-build
readonly KUBE_BUILD_IMAGE_CROSS_TAG="$(cat ${KUBE_ROOT}/build/build-image/cross/VERSION)"
# This version number is used to cause everyone to rebuild their data containers
# and build image. This is especially useful for automated build systems like
# Jenkins.
#
# Increment/change this number if you change the build image (anything under
# build/build-image) or change the set of volumes in the data container.
readonly KUBE_BUILD_IMAGE_VERSION_BASE="$(cat ${KUBE_ROOT}/build/build-image/VERSION)"
readonly KUBE_BUILD_IMAGE_VERSION="${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
# Here we map the output directories across both the local and remote _output
# directories:
#
# *_OUTPUT_ROOT - the base of all output in that environment.
# *_OUTPUT_SUBPATH - location where golang stuff is built/cached. Also
# persisted across docker runs with a volume mount.
# *_OUTPUT_BINPATH - location where final binaries are placed. If the remote
# is really remote, this is the stuff that has to be copied
# back.
# OUT_DIR can come in from the Makefile, so honor it.
readonly LOCAL_OUTPUT_ROOT="${KUBE_ROOT}/${OUT_DIR:-_output}"
readonly LOCAL_OUTPUT_SUBPATH="${LOCAL_OUTPUT_ROOT}/dockerized"
readonly LOCAL_OUTPUT_BINPATH="${LOCAL_OUTPUT_SUBPATH}/bin"
readonly LOCAL_OUTPUT_GOPATH="${LOCAL_OUTPUT_SUBPATH}/go"
readonly LOCAL_OUTPUT_IMAGE_STAGING="${LOCAL_OUTPUT_ROOT}/images"
# This is a symlink to binaries for "this platform" (e.g. build tools).
readonly THIS_PLATFORM_BIN="${LOCAL_OUTPUT_ROOT}/bin"
readonly REMOTE_ROOT="/go/src/${KUBE_GO_PACKAGE}"
readonly REMOTE_OUTPUT_ROOT="${REMOTE_ROOT}/_output"
readonly REMOTE_OUTPUT_SUBPATH="${REMOTE_OUTPUT_ROOT}/dockerized"
readonly REMOTE_OUTPUT_BINPATH="${REMOTE_OUTPUT_SUBPATH}/bin"
readonly REMOTE_OUTPUT_GOPATH="${REMOTE_OUTPUT_SUBPATH}/go"
# This is the port on the workstation host to expose RSYNC on. Set this if you
# are doing something fancy with ssh tunneling.
readonly KUBE_RSYNC_PORT="${KUBE_RSYNC_PORT:-}"
# This is the port that rsync is running on *inside* the container. This may be
# mapped to KUBE_RSYNC_PORT via docker networking.
readonly KUBE_CONTAINER_RSYNC_PORT=8730
# Get the set of master binaries that run in Docker (on Linux)
# Entry format is "<name-of-binary>,<base-image>".
# Binaries are placed in /usr/local/bin inside the image.
#
# $1 - server architecture
kube::build::get_docker_wrapped_binaries() {
case $1 in
"amd64")
local targets=(
kube-apiserver,busybox
kube-controller-manager,busybox
kube-scheduler,busybox
kube-aggregator,busybox
kube-proxy,gcr.io/google_containers/debian-iptables-amd64:v5
);;
"arm")
local targets=(
kube-apiserver,armel/busybox
kube-controller-manager,armel/busybox
kube-scheduler,armel/busybox
kube-aggregator,armel/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-arm:v5
);;
"arm64")
local targets=(
kube-apiserver,aarch64/busybox
kube-controller-manager,aarch64/busybox
kube-scheduler,aarch64/busybox
kube-aggregator,aarch64/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-arm64:v5
);;
"ppc64le")
local targets=(
kube-apiserver,ppc64le/busybox
kube-controller-manager,ppc64le/busybox
kube-scheduler,ppc64le/busybox
kube-aggregator,ppc64le/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-ppc64le:v5
);;
"s390x")
local targets=(
kube-apiserver,s390x/busybox
kube-controller-manager,s390x/busybox
kube-scheduler,s390x/busybox
kube-aggregator,s390x/busybox
kube-proxy,gcr.io/google_containers/debian-iptables-s390x:v5
);;
esac
echo "${targets[@]}"
}
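# Example usage (a sketch): consuming the "<name-of-binary>,<base-image>"
# entries returned above.
#   for item in $(kube::build::get_docker_wrapped_binaries amd64); do
#     echo "binary=${item%%,*} base-image=${item#*,}"
#   done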
# ---------------------------------------------------------------------------
# Basic setup functions
# Verify that the right utilities and such are installed for building Kube. Set
# up some dynamic constants.
# Args:
# $1 - boolean of whether to require functioning docker (default true)
#
# Vars set:
# KUBE_ROOT_HASH
# KUBE_BUILD_IMAGE_TAG_BASE
# KUBE_BUILD_IMAGE_TAG
# KUBE_BUILD_IMAGE
# KUBE_BUILD_CONTAINER_NAME_BASE
# KUBE_BUILD_CONTAINER_NAME
# KUBE_DATA_CONTAINER_NAME_BASE
# KUBE_DATA_CONTAINER_NAME
# KUBE_RSYNC_CONTAINER_NAME_BASE
# KUBE_RSYNC_CONTAINER_NAME
# DOCKER_MOUNT_ARGS
# LOCAL_OUTPUT_BUILD_CONTEXT
function kube::build::verify_prereqs() {
local -r require_docker=${1:-true}
kube::log::status "Verifying Prerequisites...."
kube::build::ensure_tar || return 1
kube::build::ensure_rsync || return 1
if ${require_docker}; then
kube::build::ensure_docker_in_path || return 1
if kube::build::is_osx; then
kube::build::docker_available_on_osx || return 1
fi
kube::util::ensure_docker_daemon_connectivity || return 1
if (( ${KUBE_VERBOSE} > 6 )); then
kube::log::status "Docker Version:"
"${DOCKER[@]}" version | kube::log::info_from_stdin
fi
fi
KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${KUBE_ROOT}")
KUBE_BUILD_IMAGE_TAG_BASE="build-${KUBE_ROOT_HASH}"
KUBE_BUILD_IMAGE_TAG="${KUBE_BUILD_IMAGE_TAG_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
KUBE_BUILD_CONTAINER_NAME_BASE="kube-build-${KUBE_ROOT_HASH}"
KUBE_BUILD_CONTAINER_NAME="${KUBE_BUILD_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_RSYNC_CONTAINER_NAME_BASE="kube-rsync-${KUBE_ROOT_HASH}"
KUBE_RSYNC_CONTAINER_NAME="${KUBE_RSYNC_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_DATA_CONTAINER_NAME_BASE="kube-build-data-${KUBE_ROOT_HASH}"
KUBE_DATA_CONTAINER_NAME="${KUBE_DATA_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
DOCKER_MOUNT_ARGS=(--volumes-from "${KUBE_DATA_CONTAINER_NAME}")
LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
kube::version::get_version_vars
kube::version::save_version_vars "${KUBE_ROOT}/.dockerized-kube-version-defs"
}
# ---------------------------------------------------------------------------
# Utility functions
function kube::build::docker_available_on_osx() {
if [[ -z "${DOCKER_HOST}" ]]; then
if [[ -S "/var/run/docker.sock" ]]; then
kube::log::status "Using Docker for MacOS"
return 0
fi
kube::log::status "No docker host is set. Checking options for setting one..."
if [[ -z "$(which docker-machine)" ]]; then
kube::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found."
kube::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions."
return 1
elif [[ -n "$(which docker-machine)" ]]; then
kube::build::prepare_docker_machine
fi
fi
}
function kube::build::prepare_docker_machine() {
kube::log::status "docker-machine was found."
docker-machine inspect "${DOCKER_MACHINE_NAME}" &> /dev/null || {
kube::log::status "Creating a machine to build Kubernetes"
docker-machine create --driver ${DOCKER_MACHINE_DRIVER} \
--engine-env HTTP_PROXY="${KUBERNETES_HTTP_PROXY:-}" \
--engine-env HTTPS_PROXY="${KUBERNETES_HTTPS_PROXY:-}" \
--engine-env NO_PROXY="${KUBERNETES_NO_PROXY:-127.0.0.1}" \
"${DOCKER_MACHINE_NAME}" > /dev/null || {
kube::log::error "Something went wrong creating a machine."
kube::log::error "Try the following: "
kube::log::error "docker-machine create -d ${DOCKER_MACHINE_DRIVER} ${DOCKER_MACHINE_NAME}"
return 1
}
}
docker-machine start "${DOCKER_MACHINE_NAME}" &> /dev/null
# it takes `docker-machine env` a few seconds to work if the machine was just started
local docker_machine_out
while ! docker_machine_out=$(docker-machine env "${DOCKER_MACHINE_NAME}" 2>&1); do
if [[ ${docker_machine_out} =~ "Error checking TLS connection" ]]; then
echo ${docker_machine_out}
docker-machine regenerate-certs ${DOCKER_MACHINE_NAME}
else
sleep 1
fi
done
eval $(docker-machine env "${DOCKER_MACHINE_NAME}")
kube::log::status "A Docker host using docker-machine named '${DOCKER_MACHINE_NAME}' is ready to go!"
return 0
}
function kube::build::is_osx() {
[[ "$(uname)" == "Darwin" ]]
}
function kube::build::is_gnu_sed() {
[[ $(sed --version 2>&1) == *GNU* ]]
}
function kube::build::ensure_rsync() {
if [[ -z "$(which rsync)" ]]; then
kube::log::error "Can't find 'rsync' in PATH, please fix and retry."
return 1
fi
}
function kube::build::update_dockerfile() {
if kube::build::is_gnu_sed; then
sed_opts=(-i)
else
sed_opts=(-i '')
fi
sed "${sed_opts[@]}" "s/KUBE_BUILD_IMAGE_CROSS_TAG/${KUBE_BUILD_IMAGE_CROSS_TAG}/" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
}
function kube::build::ensure_docker_in_path() {
if [[ -z "$(which docker)" ]]; then
kube::log::error "Can't find 'docker' in PATH, please fix and retry."
kube::log::error "See https://docs.docker.com/installation/#installation for installation instructions."
return 1
fi
}
function kube::build::ensure_tar() {
if [[ -n "${TAR:-}" ]]; then
return
fi
# Find gnu tar if it is available, bomb out if not.
TAR=tar
if which gtar &>/dev/null; then
TAR=gtar
else
if which gnutar &>/dev/null; then
TAR=gnutar
fi
fi
if ! "${TAR}" --version | grep -q GNU; then
echo " !!! Cannot find GNU tar. Build on Linux or install GNU tar"
echo " on Mac OS X (brew install gnu-tar)."
return 1
fi
}
function kube::build::has_docker() {
which docker &> /dev/null
}
# Detect if a specific image exists
#
# $1 - image repo name
# #2 - image tag
function kube::build::docker_image_exists() {
[[ -n $1 && -n $2 ]] || {
kube::log::error "Internal error. Image not specified in docker_image_exists."
exit 2
}
[[ $("${DOCKER[@]}" images -q "${1}:${2}") ]]
}
# Delete all images that match a tag prefix except for the "current" version
#
# $1: The image repo/name
# $2: The tag base. We consider any image that matches $2*
# $3: The current image not to delete if provided
function kube::build::docker_delete_old_images() {
# In Docker 1.12, we can replace this with
# docker images "$1" --format "{{.Tag}}"
for tag in $("${DOCKER[@]}" images ${1} | tail -n +2 | awk '{print $2}') ; do
if [[ "${tag}" != "${2}"* ]] ; then
V=3 kube::log::status "Keeping image ${1}:${tag}"
continue
fi
if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then
V=2 kube::log::status "Deleting image ${1}:${tag}"
"${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null
else
V=3 kube::log::status "Keeping image ${1}:${tag}"
fi
done
}
# Stop and delete all containers that match a pattern
#
# $1: The base container prefix
# $2: The current container to keep, if provided
function kube::build::docker_delete_old_containers() {
# In Docker 1.12 we can replace this line with
# docker ps -a --format="{{.Names}}"
for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do
if [[ "${container}" != "${1}"* ]] ; then
V=3 kube::log::status "Keeping container ${container}"
continue
fi
if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then
V=2 kube::log::status "Deleting container ${container}"
kube::build::destroy_container "${container}"
else
V=3 kube::log::status "Keeping container ${container}"
fi
done
}
# Takes $1 and computes a short hash for it. Useful for unique tag generation
function kube::build::short_hash() {
[[ $# -eq 1 ]] || {
kube::log::error "Internal error. No data based to short_hash."
exit 2
}
local short_hash
if which md5 >/dev/null 2>&1; then
short_hash=$(md5 -q -s "$1")
else
short_hash=$(echo -n "$1" | md5sum)
fi
echo ${short_hash:0:10}
}
# Pedantically kill, wait-on and remove a container. The -f -v options
# to rm don't actually seem to get the job done, so force kill the
# container, wait to ensure it's stopped, then try the remove. This is
# a workaround for bug https://github.com/docker/docker/issues/3968.
function kube::build::destroy_container() {
"${DOCKER[@]}" kill "$1" >/dev/null 2>&1 || true
"${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
"${DOCKER[@]}" rm -f -v "$1" >/dev/null 2>&1 || true
}
# ---------------------------------------------------------------------------
# Building
function kube::build::clean() {
if kube::build::has_docker ; then
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}"
V=2 kube::log::status "Cleaning all untagged docker images"
"${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true
fi
kube::log::status "Removing _output directory"
rm -rf "${LOCAL_OUTPUT_ROOT}"
}
# Set up the context directory for the kube-build image and build it.
function kube::build::build_image() {
mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
cp build/build-image/rsyncd.sh "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
kube::build::update_dockerfile
kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
# Clean up old versions of everything
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}" "${KUBE_BUILD_CONTAINER_NAME}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}" "${KUBE_RSYNC_CONTAINER_NAME}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}" "${KUBE_DATA_CONTAINER_NAME}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}" "${KUBE_BUILD_IMAGE_TAG}"
kube::build::ensure_data_container
kube::build::sync_to_container
}
# Build a docker image from a Dockerfile.
# $1 is the name of the image to build
# $2 is the location of the "context" directory, with the Dockerfile at the root.
# $3 is the value to set the --pull flag for docker build; true by default
function kube::build::docker_build() {
local -r image=$1
local -r context_dir=$2
local -r pull="${3:-true}"
local -ra build_cmd=("${DOCKER[@]}" build -t "${image}" "--pull=${pull}" "${context_dir}")
kube::log::status "Building Docker image ${image}"
local docker_output
docker_output=$("${build_cmd[@]}" 2>&1) || {
cat <<EOF >&2
+++ Docker build command failed for ${image}
${docker_output}
To retry manually, run:
${build_cmd[*]}
EOF
return 1
}
}
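# Example usage (a sketch): kube::build::build_image above calls this as
#   kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
# i.e. build from the staged context directory without forcing --pull.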
function kube::build::ensure_data_container() {
# If the data container exists AND exited successfully, we can use it.
# Otherwise nuke it and start over.
local ret=0
local code=$(docker inspect \
-f '{{.State.ExitCode}}' \
"${KUBE_DATA_CONTAINER_NAME}" 2>/dev/null || ret=$?)
if [[ "${ret}" == 0 && "${code}" != 0 ]]; then
kube::build::destroy_container "${KUBE_DATA_CONTAINER_NAME}"
ret=1
fi
if [[ "${ret}" != 0 ]]; then
kube::log::status "Creating data container ${KUBE_DATA_CONTAINER_NAME}"
# We have to ensure the directory exists, or else the docker run will
# create it as root.
mkdir -p "${LOCAL_OUTPUT_GOPATH}"
# We want this to run as root to be able to chown, so non-root users can
# later use the result as a data container. This run both creates the data
# container and chowns the GOPATH.
#
# The data container creates volumes for all of the directories that store
# intermediates for the Go build. This enables incremental builds across
# Docker sessions. The *_cgo paths are re-compiled versions of the go std
# libraries for true static building.
local -ra docker_cmd=(
"${DOCKER[@]}" run
--volume "${REMOTE_ROOT}" # white-out the whole output dir
--volume /usr/local/go/pkg/linux_386_cgo
--volume /usr/local/go/pkg/linux_amd64_cgo
--volume /usr/local/go/pkg/linux_arm_cgo
--volume /usr/local/go/pkg/linux_arm64_cgo
--volume /usr/local/go/pkg/linux_ppc64le_cgo
--volume /usr/local/go/pkg/darwin_amd64_cgo
--volume /usr/local/go/pkg/darwin_386_cgo
--volume /usr/local/go/pkg/windows_amd64_cgo
--volume /usr/local/go/pkg/windows_386_cgo
--name "${KUBE_DATA_CONTAINER_NAME}"
--hostname "${HOSTNAME}"
"${KUBE_BUILD_IMAGE}"
chown -R $(id -u).$(id -g)
"${REMOTE_ROOT}"
/usr/local/go/pkg/
)
"${docker_cmd[@]}"
fi
}
# Run a command in the kube-build image. This assumes that the image has
# already been built.
function kube::build::run_build_command() {
kube::log::status "Running build command..."
kube::build::run_build_command_ex "${KUBE_BUILD_CONTAINER_NAME}" -- "$@"
}
# Run a command in the kube-build image. This assumes that the image has
# already been built.
#
# Arguments are in the form of
# <container name> <extra docker args> -- <command>
function kube::build::run_build_command_ex() {
[[ $# != 0 ]] || { echo "Invalid input - please specify a container name." >&2; return 4; }
local container_name="${1}"
shift
local -a docker_run_opts=(
"--name=${container_name}"
"--user=$(id -u):$(id -g)"
"--hostname=${HOSTNAME}"
"${DOCKER_MOUNT_ARGS[@]}"
)
local detach=false
[[ $# != 0 ]] || { echo "Invalid input - please specify docker arguments followed by --." >&2; return 4; }
# Everything before "--" is an arg to docker
until [ -z "${1-}" ] ; do
if [[ "$1" == "--" ]]; then
shift
break
fi
docker_run_opts+=("$1")
if [[ "$1" == "-d" || "$1" == "--detach" ]] ; then
detach=true
fi
shift
done
# Everything after "--" is the command to run
[[ $# != 0 ]] || { echo "Invalid input - please specify a command to run." >&2; return 4; }
local -a cmd=()
until [ -z "${1-}" ] ; do
cmd+=("$1")
shift
done
docker_run_opts+=(
--env "KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}"
--env "KUBE_BUILDER_OS=${OSTYPE:-notdetected}"
--env "KUBE_BUILD_PPC64LE=${KUBE_BUILD_PPC64LE}" # TODO(IBM): remove
--env "KUBE_VERBOSE=${KUBE_VERBOSE}"
)
# If we have stdin we can run interactive. This allows things like 'shell.sh'
# to work. However, if we run this way and don't have stdin, then it ends up
# running in a daemon-ish mode. So if we don't have a stdin, we explicitly
# attach stderr/stdout but don't bother asking for a tty.
if [[ -t 0 ]]; then
docker_run_opts+=(--interactive --tty)
elif [[ "${detach}" == false ]]; then
docker_run_opts+=(--attach=stdout --attach=stderr)
fi
local -ra docker_cmd=(
"${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}")
# Clean up container from any previous run
kube::build::destroy_container "${container_name}"
"${docker_cmd[@]}" "${cmd[@]}"
if [[ "${detach}" == false ]]; then
kube::build::destroy_container "${container_name}"
fi
}
function kube::build::rsync_probe {
# Wait until rsync is up and running.
local tries=20
while (( ${tries} > 0 )) ; do
if rsync "rsync://k8s@${1}:${2}/" \
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
&> /dev/null ; then
return 0
fi
tries=$(( ${tries} - 1))
sleep 0.1
done
return 1
}
# Start up the rsync container in the background. This should be explicitly
# stopped with kube::build::stop_rsyncd_container.
#
# This will set the global var KUBE_RSYNC_ADDR to the effective port that the
# rsync daemon can be reached at.
function kube::build::start_rsyncd_container() {
kube::build::stop_rsyncd_container
V=3 kube::log::status "Starting rsyncd container"
kube::build::run_build_command_ex \
"${KUBE_RSYNC_CONTAINER_NAME}" -p 127.0.0.1:${KUBE_RSYNC_PORT}:${KUBE_CONTAINER_RSYNC_PORT} -d \
-- /rsyncd.sh >/dev/null
local mapped_port
if ! mapped_port=$("${DOCKER[@]}" port "${KUBE_RSYNC_CONTAINER_NAME}" ${KUBE_CONTAINER_RSYNC_PORT} 2> /dev/null | cut -d: -f 2) ; then
kube::log::error "Could not get effective rsync port"
return 1
fi
local container_ip
container_ip=$("${DOCKER[@]}" inspect --format '{{ .NetworkSettings.IPAddress }}' "${KUBE_RSYNC_CONTAINER_NAME}")
# Sometimes we can reach rsync through localhost and a NAT'd port. Other
# times (when we are running in another docker container on the Jenkins
# machines) we have to talk directly to the container IP. There is no one
# strategy that works in all cases so we test to figure out which situation we
# are in.
if kube::build::rsync_probe 127.0.0.1 ${mapped_port}; then
KUBE_RSYNC_ADDR="127.0.0.1:${mapped_port}"
return 0
elif kube::build::rsync_probe "${container_ip}" ${KUBE_CONTAINER_RSYNC_PORT}; then
KUBE_RSYNC_ADDR="${container_ip}:${KUBE_CONTAINER_RSYNC_PORT}"
return 0
fi
kube::log::error "Could not connect to rsync container. See build/README.md for setting up remote Docker engine."
return 1
}
function kube::build::stop_rsyncd_container() {
V=3 kube::log::status "Stopping any currently running rsyncd container"
unset KUBE_RSYNC_ADDR
kube::build::destroy_container "${KUBE_RSYNC_CONTAINER_NAME}"
}
function kube::build::rsync {
local -a rsync_opts=(
--archive
--prune-empty-dirs
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
)
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_opts+=("-iv")
fi
if (( ${KUBE_RSYNC_COMPRESS} > 0 )); then
rsync_opts+=("--compress-level=${KUBE_RSYNC_COMPRESS}")
fi
V=3 kube::log::status "Running rsync"
rsync "${rsync_opts[@]}" "$@"
}
# This will launch rsyncd in a container and then sync the source tree to the
# container over the local network.
function kube::build::sync_to_container() {
kube::log::status "Syncing sources to container"
kube::build::start_rsyncd_container
# rsync filters are a bit confusing. Here we are syncing everything except
# output only directories and things that are not necessary like the git
# directory and generated files. The '- /' filter prevents rsync
# from trying to set the uid/gid/perms on the root of the sync tree.
# As an exception, we need to sync generated files in staging/, because
# they will not be re-generated by 'make'.
kube::build::rsync \
--delete \
--filter='+ /staging/**' \
--filter='- /.git/' \
--filter='- /.make/' \
--filter='- /_tmp/' \
--filter='- /_output/' \
--filter='- /' \
--filter='- zz_generated.*' \
--filter='- generated.proto' \
"${KUBE_ROOT}/" "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/"
kube::build::stop_rsyncd_container
}
# Copy all build results back out.
function kube::build::copy_output() {
kube::log::status "Syncing out of container"
kube::build::start_rsyncd_container
local rsync_extra=""
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_extra="-iv"
fi
# The filter syntax for rsync is a little obscure. It filters on files and
# directories. If you don't go into a directory you won't find any files
# there. Rules are evaluated in order. The last two rules are a little
# magic. '+ */' says to go into every directory and '- /**' says to ignore
# any file or directory that isn't already specifically allowed.
#
# We are looking to copy out all of the built binaries along with various
# generated files.
kube::build::rsync \
--filter='- /vendor/' \
--filter='- /_temp/' \
--filter='+ /_output/dockerized/bin/**' \
--filter='+ zz_generated.*' \
--filter='+ generated.proto' \
--filter='+ *.pb.go' \
--filter='+ */' \
--filter='- /**' \
"rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/" "${KUBE_ROOT}"
kube::build::stop_rsyncd_container
}
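
The filter ordering described in `copy_output` can be tried in isolation; a toy sketch:

```
mkdir -p src/a/bin src/b && touch src/a/bin/keep src/b/skip
rsync -a --filter='+ /a/bin/**' --filter='+ */' --filter='- /**' src/ dst/
find dst    # dst/a/bin/keep is copied; src/b/skip is filtered out (dirs are still traversed)
```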

vendor/k8s.io/kubernetes/build/copy-output.sh generated vendored Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copies any built binaries (and other generated files) out of the Docker build container.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs
kube::build::copy_output

27
vendor/k8s.io/kubernetes/build/debian-iptables/Dockerfile generated vendored Normal file
View file

@ -0,0 +1,27 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
# If we're building for an architecture other than amd64, the CROSS_BUILD_ placeholder is removed, so e.g. CROSS_BUILD_COPY turns into COPY
# If we're building normally, for amd64, the CROSS_BUILD_ lines are removed entirely
CROSS_BUILD_COPY qemu-ARCH-static /usr/bin/
# All apt-get calls must happen in a single RUN command; otherwise the
# final cleanup has no effect on image size.
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y iptables \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y ebtables \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y conntrack \
&& rm -rf /var/lib/apt/lists/*

64
vendor/k8s.io/kubernetes/build/debian-iptables/Makefile generated vendored Normal file
View file

@ -0,0 +1,64 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: build push
REGISTRY?="gcr.io/google_containers"
IMAGE=debian-iptables
TAG=v5
ARCH?=amd64
TEMP_DIR:=$(shell mktemp -d)
ifeq ($(ARCH),amd64)
BASEIMAGE?=debian:jessie
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armel/debian:jessie
QEMUARCH=arm
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/debian:jessie
QEMUARCH=aarch64
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/debian:jessie
QEMUARCH=ppc64le
endif
ifeq ($(ARCH),s390x)
BASEIMAGE?=s390x/debian:jessie
QEMUARCH=s390x
endif
build:
cp ./* $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
cd $(TEMP_DIR) && sed -i "s|ARCH|$(QEMUARCH)|g" Dockerfile
ifeq ($(ARCH),amd64)
# When building "normally" for amd64, remove the whole line; it has no place in the amd64 image
cd $(TEMP_DIR) && sed -i "/CROSS_BUILD_/d" Dockerfile
else
# When cross-building, only the placeholder "CROSS_BUILD_" should be removed
# Register /usr/bin/qemu-ARCH-static as the handler for ARM binaries in the kernel
docker run --rm --privileged multiarch/qemu-user-static:register --reset
curl -sSL https://github.com/multiarch/qemu-user-static/releases/download/v2.6.0/x86_64_qemu-$(QEMUARCH)-static.tar.gz | tar -xz -C $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s/CROSS_BUILD_//g" Dockerfile
endif
docker build --pull -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) $(TEMP_DIR)
push: build
gcloud docker -- push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)
all: push
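The `--privileged` register step installs binfmt_misc handlers in the host kernel so non-native binaries run through the matching qemu-*-static interpreter. A quick, hedged way to check that the registration took effect on the build host:

```sh
docker run --rm --privileged multiarch/qemu-user-static:register --reset
# the kernel should now list qemu handlers for foreign architectures
ls /proc/sys/fs/binfmt_misc/ | grep qemu
```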

32
vendor/k8s.io/kubernetes/build/debian-iptables/README.md generated vendored Normal file
View file

@ -0,0 +1,32 @@
### debian-iptables
Serves as the base image for `gcr.io/google_containers/kube-proxy-${ARCH}` and the multiarch (non-`amd64`) `gcr.io/google_containers/flannel-${ARCH}` images.
This image is compiled for multiple architectures.
#### How to release
If you change the Dockerfile or anything else that goes into this image, please bump the `TAG` in the Makefile.
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google_containers/debian-iptables-amd64:TAG
$ make push ARCH=arm
# ---> gcr.io/google_containers/debian-iptables-arm:TAG
$ make push ARCH=arm64
# ---> gcr.io/google_containers/debian-iptables-arm64:TAG
$ make push ARCH=ppc64le
# ---> gcr.io/google_containers/debian-iptables-ppc64le:TAG
$ make push ARCH=s390x
# ---> gcr.io/google_containers/debian-iptables-s390x:TAG
```
If you don't want to push the images, run `make` or `make build` instead.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/debian-iptables/README.md?pixel)]()

169
vendor/k8s.io/kubernetes/build/debs/BUILD generated vendored Normal file
View file

@ -0,0 +1,169 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar", "pkg_deb")
load("@io_kubernetes_build//defs:deb.bzl", "k8s_deb", "deb_data")
# We do not include kube-scheduler, kube-controller-manager,
# kube-apiserver, and kube-proxy in this list even though we
# produce debs for them. We recommend that they be run in docker
# images. We use the debs that we produce here to build those
# images.
filegroup(
name = "debs",
srcs = [
":kubeadm.deb",
":kubectl.deb",
":kubelet.deb",
":kubernetes-cni.deb",
],
)
[deb_data(
name = binary,
data = [
{
"files": ["//cmd/" + binary],
"mode": "0755",
"dir": "/usr/bin",
},
],
) for binary in [
"kubectl",
"kube-apiserver",
"kube-controller-manager",
"kube-proxy",
"kube-aggregator",
]]
deb_data(
name = "kube-scheduler",
data = [
{
"files": ["//plugin/cmd/kube-scheduler"],
"mode": "0755",
"dir": "/usr/bin",
},
],
)
deb_data(
name = "kubelet",
data = [
{
"files": ["//cmd/kubelet"],
"mode": "0755",
"dir": "/usr/bin",
},
{
"files": ["kubelet.service"],
"mode": "644",
"dir": "/lib/systemd/system",
},
],
)
deb_data(
name = "kubeadm",
data = [
{
"files": ["//cmd/kubeadm"],
"mode": "0755",
"dir": "/usr/bin",
},
{
"files": ["kubeadm-10.conf"],
"mode": "644",
"dir": "/etc/systemd/system/kubelet.service.d",
},
],
)
pkg_tar(
name = "kubernetes-cni-data",
package_dir = "/opt/cni",
deps = ["@kubernetes_cni//file"],
)
k8s_deb(
name = "kubectl",
description = """Kubernetes Command Line Tool
The Kubernetes command line tool for interacting with the Kubernetes API.
""",
)
k8s_deb(
name = "kube-apiserver",
description = "Kubernetes API Server",
)
k8s_deb(
name = "kube-controller-manager",
description = "Kubernetes Controller Manager",
)
k8s_deb(
name = "kube-scheduler",
description = "Kubernetes Scheduler",
)
k8s_deb(
name = "kube-proxy",
depends = [
"iptables (>= 1.4.21)",
"iproute2",
],
description = "Kubernetes Service Proxy",
)
k8s_deb(
name = "kube-aggregator",
description = "Kubernetes Federated API Server",
)
k8s_deb(
name = "kubelet",
depends = [
"iptables (>= 1.4.21)",
"kubernetes-cni (>= 0.3.0.1)",
"iproute2",
"socat",
"util-linux",
"mount",
"ebtables",
"ethtool",
],
description = """Kubernetes Node Agent
The node agent of Kubernetes, the container cluster manager
""",
)
k8s_deb(
name = "kubeadm",
depends = [
"kubelet (>= 1.4.0)",
"kubectl (>= 1.4.0)",
],
description = """Kubernetes Cluster Bootstrapping Tool
The Kubernetes command line tool for bootstrapping a Kubernetes cluster.
""",
)
k8s_deb(
name = "kubernetes-cni",
description = """Kubernetes Packaging of CNI
The Container Networking Interface tools for provisioning container networks.
""",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)
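With these rules, the packages are ordinary Bazel targets. A sketch of building them from a Kubernetes checkout, assuming a working Bazel setup (target names come from this file):

```sh
bazel build //build/debs:debs         # kubeadm, kubectl, kubelet, kubernetes-cni
bazel build //build/debs:kubelet.deb  # or one package at a time
```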

7
vendor/k8s.io/kubernetes/build/debs/kubeadm-10.conf generated vendored Normal file
View file

@ -0,0 +1,7 @@
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
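The empty `ExecStart=` line clears the command inherited from kubelet.service so the second `ExecStart=` can redefine it with the extra arguments. A sketch of applying and inspecting the drop-in with standard systemd commands:

```sh
sudo systemctl daemon-reload
systemctl cat kubelet          # shows kubelet.service merged with this drop-in
sudo systemctl restart kubelet
```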

12
vendor/k8s.io/kubernetes/build/debs/kubelet.service generated vendored Normal file
View file

@ -0,0 +1,12 @@
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

501
vendor/k8s.io/kubernetes/build/lib/release.sh generated vendored Normal file
View file

@ -0,0 +1,501 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates release artifacts (tar files, container images) that are
# ready to distribute and install for end users.
###############################################################################
# Most of the ::release:: namespace functions have been moved to
# github.com/kubernetes/release. Have a look in that repo and specifically in
# lib/releaselib.sh for ::release::-related functionality.
###############################################################################
# This is where the final release artifacts are created locally
readonly RELEASE_STAGE="${LOCAL_OUTPUT_ROOT}/release-stage"
readonly RELEASE_DIR="${LOCAL_OUTPUT_ROOT}/release-tars"
# Validate a ci version
#
# Globals:
# None
# Arguments:
# version
# Returns:
# If version is a valid ci version
# Sets: (e.g. for '1.2.3-alpha.4.56+abcdef12345678')
# VERSION_MAJOR (e.g. '1')
# VERSION_MINOR (e.g. '2')
# VERSION_PATCH (e.g. '3')
# VERSION_PRERELEASE (e.g. 'alpha')
# VERSION_PRERELEASE_REV (e.g. '4')
# VERSION_BUILD_INFO (e.g. '.56+abcdef12345678')
# VERSION_COMMITS (e.g. '56')
function kube::release::parse_and_validate_ci_version() {
# Accept things like "v1.2.3-alpha.4.56+abcdef12345678" or "v1.2.3-beta.4"
local -r version_regex="^v(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)-(beta|alpha)\\.(0|[1-9][0-9]*)(\\.(0|[1-9][0-9]*)\\+[0-9a-f]{7,40})?$"
local -r version="${1-}"
[[ "${version}" =~ ${version_regex} ]] || {
kube::log::error "Invalid ci version: '${version}', must match regex ${version_regex}"
return 1
}
VERSION_MAJOR="${BASH_REMATCH[1]}"
VERSION_MINOR="${BASH_REMATCH[2]}"
VERSION_PATCH="${BASH_REMATCH[3]}"
VERSION_PRERELEASE="${BASH_REMATCH[4]}"
VERSION_PRERELEASE_REV="${BASH_REMATCH[5]}"
VERSION_BUILD_INFO="${BASH_REMATCH[6]}"
VERSION_COMMITS="${BASH_REMATCH[7]}"
}
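A usage sketch, reusing the example version from the comment above:

```sh
kube::release::parse_and_validate_ci_version "v1.2.3-alpha.4.56+abcdef12345678"
echo "${VERSION_MAJOR}.${VERSION_MINOR}.${VERSION_PATCH}"   # 1.2.3
echo "${VERSION_PRERELEASE}.${VERSION_PRERELEASE_REV}"      # alpha.4
echo "${VERSION_COMMITS}"                                   # 56
```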
# ---------------------------------------------------------------------------
# Build final release artifacts
function kube::release::clean_cruft() {
# Clean out cruft
find "${RELEASE_STAGE}" -name '*~' -exec rm {} \;
find "${RELEASE_STAGE}" -name '#*#' -exec rm {} \;
find "${RELEASE_STAGE}" -name '.DS*' -exec rm {} \;
}
function kube::release::package_hyperkube() {
# If we have these variables set then we want to build all docker images.
if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
for arch in "${KUBE_SERVER_PLATFORMS[@]##*/}"; do
kube::log::status "Building hyperkube image for arch: ${arch}"
REGISTRY="${KUBE_DOCKER_REGISTRY}" VERSION="${KUBE_DOCKER_IMAGE_TAG}" ARCH="${arch}" make -C cluster/images/hyperkube/ build
done
fi
}
function kube::release::package_tarballs() {
# Clean out any old releases
rm -rf "${RELEASE_DIR}"
mkdir -p "${RELEASE_DIR}"
kube::release::package_src_tarball &
kube::release::package_client_tarballs &
kube::release::package_salt_tarball &
kube::release::package_kube_manifests_tarball &
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
# _node and _server tarballs depend on _src tarball
kube::release::package_node_tarballs &
kube::release::package_server_tarballs &
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
kube::release::package_final_tarball & # _final depends on some of the previous phases
kube::release::package_test_tarball & # _test doesn't depend on anything
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
}
# Package the source code we built, for compliance, licensing, and audit purposes.
function kube::release::package_src_tarball() {
kube::log::status "Building tarball: src"
local source_files=(
$(cd "${KUBE_ROOT}" && find . -mindepth 1 -maxdepth 1 \
-not \( \
\( -path ./_\* -o \
-path ./.git\* -o \
-path ./.config\* -o \
-path ./.gsutil\* \
\) -prune \
\))
)
"${TAR}" czf "${RELEASE_DIR}/kubernetes-src.tar.gz" -C "${KUBE_ROOT}" "${source_files[@]}"
}
# Package up all of the cross compiled clients. Over time this should grow into
# a full SDK
function kube::release::package_client_tarballs() {
# Find all of the built client binaries
local platform platforms
platforms=($(cd "${LOCAL_OUTPUT_BINPATH}" ; echo */*))
for platform in "${platforms[@]}"; do
local platform_tag=${platform/\//-} # Replace "/" with "-"
kube::log::status "Starting tarball: client $platform_tag"
(
local release_stage="${RELEASE_STAGE}/client/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/client/bin"
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_CLIENT_BINARIES array.
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/client/bin/"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-client-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
) &
done
kube::log::status "Waiting on tarballs"
kube::util::wait-for-jobs || { kube::log::error "client tarball creation failed"; exit 1; }
}
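The `${array[@]/#/prefix}` expansion used above is plain bash; here is a tiny sketch with hypothetical names:

```sh
client_bins=(kubectl kubefed)
printf '%s\n' "${client_bins[@]/#/_output/bin/linux/amd64/}"
# _output/bin/linux/amd64/kubectl
# _output/bin/linux/amd64/kubefed
```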
# Package up all of the node binaries
function kube::release::package_node_tarballs() {
local platform
for platform in "${KUBE_NODE_PLATFORMS[@]}"; do
local platform_tag=${platform/\//-} # Replace "/" with "-"
local arch=$(basename ${platform})
kube::log::status "Building tarball: node $platform_tag"
local release_stage="${RELEASE_STAGE}/node/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/node/bin"
local node_bins=("${KUBE_NODE_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
node_bins=("${KUBE_NODE_BINARIES_WIN[@]}")
fi
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_NODE_BINARIES array.
cp "${node_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/node/bin/"
# TODO: Docker images here
# kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"
# Include the client binaries here too as they are useful debugging tools.
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/node/bin/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${RELEASE_DIR}/kubernetes-src.tar.gz" "${release_stage}/"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-node-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
done
}
# Package up all of the server binaries
function kube::release::package_server_tarballs() {
local platform
for platform in "${KUBE_SERVER_PLATFORMS[@]}"; do
local platform_tag=${platform/\//-} # Replace "/" with "-"
local arch=$(basename ${platform})
kube::log::status "Building tarball: server $platform_tag"
local release_stage="${RELEASE_STAGE}/server/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/server/bin"
mkdir -p "${release_stage}/addons"
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_SERVER_BINARIES array.
cp "${KUBE_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/server/bin/"
kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"
# Include the client binaries here too as they are useful debugging tools.
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/server/bin/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${RELEASE_DIR}/kubernetes-src.tar.gz" "${release_stage}/"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-server-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
done
}
function kube::release::md5() {
if which md5 >/dev/null 2>&1; then
md5 -q "$1"
else
md5sum "$1" | awk '{ print $1 }'
fi
}
function kube::release::sha1() {
if which sha1sum >/dev/null 2>&1; then
sha1sum "$1" | awk '{ print $1 }'
else
shasum -a1 "$1" | awk '{ print $1 }'
fi
}
# This takes the binaries that run on the master and creates Docker images
# that wrap each binary (one Docker image per binary).
# Args:
#  $1 - binary_dir, the directory to save the tarred images to.
# $2 - arch, architecture for which we are building docker images.
function kube::release::create_docker_images_for_server() {
# Create a sub-shell so that we don't pollute the outer environment
(
local binary_dir="$1"
local arch="$2"
local binary_name
local binaries=($(kube::build::get_docker_wrapped_binaries ${arch}))
for wrappable in "${binaries[@]}"; do
local oldifs=$IFS
IFS=","
set $wrappable
IFS=$oldifs
local binary_name="$1"
local base_image="$2"
kube::log::status "Starting Docker build for image: ${binary_name}"
(
local md5_sum
md5_sum=$(kube::release::md5 "${binary_dir}/${binary_name}")
local docker_build_path="${binary_dir}/${binary_name}.dockerbuild"
local docker_file_path="${docker_build_path}/Dockerfile"
local binary_file_path="${binary_dir}/${binary_name}"
rm -rf ${docker_build_path}
mkdir -p ${docker_build_path}
ln ${binary_dir}/${binary_name} ${docker_build_path}/${binary_name}
printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n" > ${docker_file_path}
if [[ ${arch} == "amd64" ]]; then
# If we are building a amd64 docker image, preserve the original image name
local docker_image_tag=gcr.io/google_containers/${binary_name}:${md5_sum}
else
# If we are building a docker image for another architecture, append the arch in the image tag
local docker_image_tag=gcr.io/google_containers/${binary_name}-${arch}:${md5_sum}
fi
"${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null
"${DOCKER[@]}" save ${docker_image_tag} > ${binary_dir}/${binary_name}.tar
echo $md5_sum > ${binary_dir}/${binary_name}.docker_tag
rm -rf ${docker_build_path}
# If we are building an official/alpha/beta release we want to keep docker images
# and tag them appropriately.
if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
local release_docker_image_tag="${KUBE_DOCKER_REGISTRY}/${binary_name}-${arch}:${KUBE_DOCKER_IMAGE_TAG}"
kube::log::status "Tagging docker image ${docker_image_tag} as ${release_docker_image_tag}"
docker rmi "${release_docker_image_tag}" || true
"${DOCKER[@]}" tag "${docker_image_tag}" "${release_docker_image_tag}" 2>/dev/null
fi
kube::log::status "Deleting docker image ${docker_image_tag}"
"${DOCKER[@]}" rmi ${docker_image_tag} 2>/dev/null || true
) &
done
kube::util::wait-for-jobs || { kube::log::error "previous Docker build failed"; return 1; }
kube::log::status "Docker builds done"
)
}
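Each `wrappable` entry is a comma-separated `binary,base-image` pair, split above by temporarily swapping IFS and calling `set`. An equivalent sketch using `read`, with hypothetical values:

```sh
wrappable="kube-apiserver,busybox"
IFS=',' read -r binary_name base_image <<< "${wrappable}"
echo "${binary_name} wraps onto ${base_image}"   # kube-apiserver wraps onto busybox
```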
# Package up the salt configuration tree. This is an optional helper for getting
# a cluster up and running.
function kube::release::package_salt_tarball() {
kube::log::status "Building tarball: salt"
local release_stage="${RELEASE_STAGE}/salt/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
cp -R "${KUBE_ROOT}/cluster/saltbase" "${release_stage}/"
# TODO(#3579): This is a temporary hack. It gathers up the yaml,
# yaml.in, json files in cluster/addons (minus any demos) and overlays
# them into kube-addons, where we expect them. (This pipeline is a
# fancy copy, stripping out everything but the files we want.)
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${release_stage}/saltbase/salt/kube-addons"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-salt.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
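The closing tar-pipe is the idiom for copying a computed file list while preserving directory structure. A minimal sketch with toy paths:

```sh
src=/tmp/addons; dst=/tmp/kube-addons
mkdir -p "${src}/dns" "${dst}"
touch "${src}/dns/kubedns.yaml" "${src}/dns/README.md"
objects=$(cd "${src}" && find . -name '*.yaml')
tar c -C "${src}" ${objects} | tar x -C "${dst}"
find "${dst}" -type f   # only dns/kubedns.yaml was copied across
```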
# This packs the kube-system manifest files for distros that do not use salt,
# such as GCI and Ubuntu Trusty. We copy the manifests directly from
# cluster/addons and cluster/saltbase/salt. The cluster initialization script
# will remove the salt configuration and evaluate the variables in the manifests.
function kube::release::package_kube_manifests_tarball() {
kube::log::status "Building tarball: manifests"
local release_stage="${RELEASE_STAGE}/manifests/kubernetes"
rm -rf "${release_stage}"
local dst_dir="${release_stage}/gci-trusty"
mkdir -p "${dst_dir}"
local salt_dir="${KUBE_ROOT}/cluster/saltbase/salt"
cp "${salt_dir}/cluster-autoscaler/cluster-autoscaler.manifest" "${dst_dir}/"
cp "${salt_dir}/fluentd-gcp/fluentd-gcp.yaml" "${release_stage}/"
cp "${salt_dir}/kube-registry-proxy/kube-registry-proxy.yaml" "${release_stage}/"
cp "${salt_dir}/kube-proxy/kube-proxy.manifest" "${release_stage}/"
cp "${salt_dir}/etcd/etcd.manifest" "${dst_dir}"
cp "${salt_dir}/kube-scheduler/kube-scheduler.manifest" "${dst_dir}"
cp "${salt_dir}/kube-apiserver/kube-apiserver.manifest" "${dst_dir}"
cp "${salt_dir}/kube-apiserver/abac-authz-policy.jsonl" "${dst_dir}"
cp "${salt_dir}/kube-controller-manager/kube-controller-manager.manifest" "${dst_dir}"
cp "${salt_dir}/kube-addons/kube-addon-manager.yaml" "${dst_dir}"
cp "${salt_dir}/l7-gcp/glbc.manifest" "${dst_dir}"
cp "${salt_dir}/rescheduler/rescheduler.manifest" "${dst_dir}/"
cp "${salt_dir}/e2e-image-puller/e2e-image-puller.manifest" "${dst_dir}/"
cp "${KUBE_ROOT}/cluster/gce/trusty/configure-helper.sh" "${dst_dir}/trusty-configure-helper.sh"
cp "${KUBE_ROOT}/cluster/gce/gci/configure-helper.sh" "${dst_dir}/gci-configure-helper.sh"
cp "${KUBE_ROOT}/cluster/gce/gci/mounter/mounter" "${dst_dir}/gci-mounter"
cp "${KUBE_ROOT}/cluster/gce/gci/health-monitor.sh" "${dst_dir}/health-monitor.sh"
cp "${KUBE_ROOT}/cluster/gce/container-linux/configure-helper.sh" "${dst_dir}/container-linux-configure-helper.sh"
cp -r "${salt_dir}/kube-admission-controls/limit-range" "${dst_dir}"
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${dst_dir}"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-manifests.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This is the stuff you need to run tests from the binary distribution.
function kube::release::package_test_tarball() {
kube::log::status "Building tarball: test"
local release_stage="${RELEASE_STAGE}/test/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
local platform
for platform in "${KUBE_TEST_PLATFORMS[@]}"; do
local test_bins=("${KUBE_TEST_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
test_bins=("${KUBE_TEST_BINARIES_WIN[@]}")
fi
mkdir -p "${release_stage}/platforms/${platform}"
cp "${test_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/platforms/${platform}"
done
for platform in "${KUBE_TEST_SERVER_PLATFORMS[@]}"; do
mkdir -p "${release_stage}/platforms/${platform}"
cp "${KUBE_TEST_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/platforms/${platform}"
done
# Add the test image files
mkdir -p "${release_stage}/test/images"
cp -fR "${KUBE_ROOT}/test/images" "${release_stage}/test/"
tar c ${KUBE_TEST_PORTABLE[@]} | tar x -C ${release_stage}
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes-test.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This is all the platform-independent stuff you need to run/install kubernetes.
# Arch-specific binaries will need to be downloaded separately (possibly by
# using the bundled cluster/get-kube-binaries.sh script).
# Included in this tarball:
# - Cluster spin up/down scripts and configs for various cloud providers
# - Tarballs for salt configs that are ready to be uploaded
# to master by whatever means appropriate.
# - Examples (which may or may not still work)
# - The remnants of the docs/ directory
function kube::release::package_final_tarball() {
kube::log::status "Building tarball: final"
# This isn't a "full" tarball anymore, but the release lib still expects
# artifacts under "full/kubernetes/"
local release_stage="${RELEASE_STAGE}/full/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
mkdir -p "${release_stage}/client"
cat <<EOF > "${release_stage}/client/README"
Client binaries are no longer included in the Kubernetes final tarball.
Run cluster/get-kube-binaries.sh to download client and server binaries.
EOF
# We want everything in /cluster except saltbase. That is only needed on the
# server.
cp -R "${KUBE_ROOT}/cluster" "${release_stage}/"
rm -rf "${release_stage}/cluster/saltbase"
mkdir -p "${release_stage}/server"
cp "${RELEASE_DIR}/kubernetes-salt.tar.gz" "${release_stage}/server/"
cp "${RELEASE_DIR}/kubernetes-manifests.tar.gz" "${release_stage}/server/"
cat <<EOF > "${release_stage}/server/README"
Server binary tarballs are no longer included in the Kubernetes final tarball.
Run cluster/get-kube-binaries.sh to download client and server binaries.
EOF
mkdir -p "${release_stage}/third_party"
cp -R "${KUBE_ROOT}/third_party/htpasswd" "${release_stage}/third_party/htpasswd"
# Include only federation/cluster, federation/manifests and federation/deploy
mkdir "${release_stage}/federation"
cp -R "${KUBE_ROOT}/federation/cluster" "${release_stage}/federation/"
cp -R "${KUBE_ROOT}/federation/manifests" "${release_stage}/federation/"
cp -R "${KUBE_ROOT}/federation/deploy" "${release_stage}/federation/"
cp -R "${KUBE_ROOT}/examples" "${release_stage}/"
cp -R "${KUBE_ROOT}/docs" "${release_stage}/"
cp "${KUBE_ROOT}/README.md" "${release_stage}/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${KUBE_ROOT}/Vagrantfile" "${release_stage}/"
echo "${KUBE_GIT_VERSION}" > "${release_stage}/version"
kube::release::clean_cruft
local package_name="${RELEASE_DIR}/kubernetes.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# Build a release tarball. $1 is the output tar name. $2 is the base directory
# of the files to be packaged. This assumes that ${2}/kubernetes is what is
# being packaged.
function kube::release::create_tarball() {
kube::build::ensure_tar
local tarfile=$1
local stagingdir=$2
"${TAR}" czf "${tarfile}" -C "${stagingdir}" kubernetes --owner=0 --group=0
}

31
vendor/k8s.io/kubernetes/build/make-build-image.sh generated vendored Executable file
View file

@ -0,0 +1,31 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the docker image necessary for building Kubernetes
#
# This script will package the parts of the repo that we need to build
# Kubernetes into a tar file and put it in the right place in the output
# directory. It will then copy over the Dockerfile and build the kube-build
# image.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT="$(dirname "${BASH_SOURCE}")/.."
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs
kube::build::build_image

26
vendor/k8s.io/kubernetes/build/make-clean.sh generated vendored Executable file
View file

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Clean out the output directory on the docker host.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs false
kube::build::clean

3
vendor/k8s.io/kubernetes/build/pause/.gitignore generated vendored Normal file
View file

@ -0,0 +1,3 @@
/.container-*
/.push-*
/bin

18
vendor/k8s.io/kubernetes/build/pause/Dockerfile generated vendored Normal file
View file

@ -0,0 +1,18 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM scratch
ARG ARCH
ADD bin/pause-${ARCH} /pause
ENTRYPOINT ["/pause"]

101
vendor/k8s.io/kubernetes/build/pause/Makefile generated vendored Normal file
View file

@ -0,0 +1,101 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: all push push-legacy container clean
REGISTRY ?= gcr.io/google_containers
IMAGE = $(REGISTRY)/pause-$(ARCH)
LEGACY_AMD64_IMAGE = $(REGISTRY)/pause
TAG = 3.0
# Architectures supported: amd64, arm, arm64, ppc64le and s390x
ARCH ?= amd64
ALL_ARCH = amd64 arm arm64 ppc64le s390x
CFLAGS = -Os -Wall -static
KUBE_CROSS_IMAGE ?= gcr.io/google_containers/kube-cross
KUBE_CROSS_VERSION ?= $(shell cat ../build-image/cross/VERSION)
BIN = pause
SRCS = pause.c
ifeq ($(ARCH),amd64)
TRIPLE ?= x86_64-linux-gnu
endif
ifeq ($(ARCH),arm)
TRIPLE ?= arm-linux-gnueabi
endif
ifeq ($(ARCH),arm64)
TRIPLE ?= aarch64-linux-gnu
endif
ifeq ($(ARCH),ppc64le)
TRIPLE ?= powerpc64le-linux-gnu
endif
ifeq ($(ARCH),s390x)
TRIPLE ?= s390x-linux-gnu
endif
# If you want to build AND push all containers, see the 'all-push' rule.
all: all-container
sub-container-%:
$(MAKE) ARCH=$* container
sub-push-%:
$(MAKE) ARCH=$* push
all-container: $(addprefix sub-container-,$(ALL_ARCH))
all-push: $(addprefix sub-push-,$(ALL_ARCH))
build: bin/$(BIN)-$(ARCH)
bin/$(BIN)-$(ARCH): $(SRCS)
mkdir -p bin
docker run --rm -u $$(id -u):$$(id -g) -v $$(pwd):/build \
$(KUBE_CROSS_IMAGE):$(KUBE_CROSS_VERSION) \
/bin/bash -c "\
cd /build && \
$(TRIPLE)-gcc $(CFLAGS) -o $@ $^ && \
$(TRIPLE)-strip $@"
container: .container-$(ARCH)
.container-$(ARCH): bin/$(BIN)-$(ARCH)
docker build --pull -t $(IMAGE):$(TAG) --build-arg ARCH=$(ARCH) .
ifeq ($(ARCH),amd64)
docker rmi $(LEGACY_AMD64_IMAGE):$(TAG) || true
docker tag $(IMAGE):$(TAG) $(LEGACY_AMD64_IMAGE):$(TAG)
endif
touch $@
push: .push-$(ARCH)
.push-$(ARCH): .container-$(ARCH)
gcloud docker -- push $(IMAGE):$(TAG)
touch $@
push-legacy: .push-legacy-$(ARCH)
.push-legacy-$(ARCH): .container-$(ARCH)
ifeq ($(ARCH),amd64)
gcloud docker -- push $(LEGACY_AMD64_IMAGE):$(TAG)
endif
touch $@
clean:
rm -rf .container-* .push-* bin/
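The `sub-container-%` and `sub-push-%` pattern rules let one Makefile fan out over every architecture in `ALL_ARCH` by re-invoking itself with a different `ARCH`. Typical invocations, as a sketch (requires Docker and access to the kube-cross image):

```sh
make container              # build the pause image for the default ARCH (amd64)
make ARCH=arm64 container   # cross-compile pause.c and build the arm64 image
make all-push               # build and push the image for every ALL_ARCH entry
```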

36
vendor/k8s.io/kubernetes/build/pause/pause.c generated vendored Normal file
View file

@ -0,0 +1,36 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
static void sigdown(int signo) {
psignal(signo, "shutting down, got signal");
exit(0);
}
int main() {
if (signal(SIGINT, sigdown) == SIG_ERR)
return 1;
if (signal(SIGTERM, sigdown) == SIG_ERR)
return 2;
signal(SIGKILL, sigdown); /* no effect: SIGKILL cannot be caught or handled */
for (;;) pause();
fprintf(stderr, "error: infinite loop terminated\n");
return 42;
}
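Outside the containerized build, the binary can be exercised directly; a sketch using a host gcc (the Makefile uses the kube-cross toolchain instead):

```sh
gcc -Os -Wall -static -o pause pause.c
./pause &
kill -TERM $!   # pause prints: shutting down, got signal: Terminated
```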

26
vendor/k8s.io/kubernetes/build/push-federation-images.sh generated vendored Executable file
View file

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Pushes federation container images to existing repositories
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
make -C "${KUBE_ROOT}/federation/" build_image
make -C "${KUBE_ROOT}/federation/" push

46
vendor/k8s.io/kubernetes/build/release.sh generated vendored Executable file
View file

@ -0,0 +1,46 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build a Kubernetes release. This will build the binaries, create the Docker
# images and other build artifacts.
#
# For pushing these artifacts publicly to Google Cloud Storage or to a registry
# please refer to the kubernetes/release repo at
# https://github.com/kubernetes/release.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
source "${KUBE_ROOT}/build/lib/release.sh"
KUBE_RELEASE_RUN_TESTS=${KUBE_RELEASE_RUN_TESTS-y}
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command make cross
if [[ $KUBE_RELEASE_RUN_TESTS =~ ^[yY]$ ]]; then
kube::build::run_build_command make test
kube::build::run_build_command make test-integration
fi
kube::build::copy_output
kube::release::package_tarballs
kube::release::package_hyperkube

34
vendor/k8s.io/kubernetes/build/run.sh generated vendored Executable file
View file

@ -0,0 +1,34 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run a command in the docker build container. Typically this will be one of
# the commands in `hack/`. When running in the build container the user is sure
# to have a consistent reproducible build environment.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "$KUBE_ROOT/build/common.sh"
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command "$@"
if [[ ${KUBE_RUN_COPY_OUTPUT:-y} =~ ^[yY]$ ]]; then
kube::build::copy_output
fi

31
vendor/k8s.io/kubernetes/build/shell.sh generated vendored Executable file
View file

@ -0,0 +1,31 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run a bash script in the Docker build image.
#
# This container will have a snapshot of the current sources.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
source "${KUBE_ROOT}/build/lib/release.sh"
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command bash || true

32
vendor/k8s.io/kubernetes/build/util.sh generated vendored Normal file
View file

@ -0,0 +1,32 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Common utility functions for build scripts
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
function kube::release::semantic_version() {
# This takes:
# Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.2328+3c0a05de4a38e3", GitCommit:"3c0a05de4a38e355d147dbfb4d85bad6d2d73bb9", GitTreeState:"clean"}
# and spits back the GitVersion piece in a way that is somewhat
# resilient to the other fields changing (we hope)
${KUBE_ROOT}/cluster/kubectl.sh version --client | sed "s/, */\\
/g" | egrep "^GitVersion:" | cut -f2 -d: | cut -f2 -d\"
}
function kube::release::semantic_image_tag_version() {
printf "$(kube::release::semantic_version)" | tr + _
}
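A sketch of what that pipeline does, fed the sample line from the comment above:

```sh
sample='Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.2328+3c0a05de4a38e3", GitCommit:"3c0a05de4a38e355d147dbfb4d85bad6d2d73bb9", GitTreeState:"clean"}'
echo "${sample}" | sed "s/, */\\
/g" | egrep "^GitVersion:" | cut -f2 -d: | cut -f2 -d\"
# v1.1.0-alpha.0.2328+3c0a05de4a38e3
```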