Switch to github.com/golang/dep for vendoring

Signed-off-by: Mrunal Patel <mrunalp@gmail.com>

parent d6ab91be27, commit 8e5b17cf13
15431 changed files with 3971413 additions and 8881 deletions

37 vendor/k8s.io/kubernetes/examples/volumes/aws_ebs/README.md generated vendored Normal file
@@ -0,0 +1,37 @@

This is a simple web server pod which serves HTML from an AWS EBS volume.

If you did not use the kube-up script, make sure that your minions have the following IAM permissions ([Amazon IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role-console)):

```shell
ec2:AttachVolume
ec2:DetachVolume
ec2:DescribeInstances
ec2:DescribeVolumes
```

Create a volume in the same region as your node.
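
If you need to create one, the AWS CLI can do it; a minimal sketch (the availability zone, size, and volume type below are placeholder assumptions, not values from this example):

```sh
# Hypothetical values; pick the availability zone your node runs in.
aws ec2 create-volume --availability-zone us-west-2a --size 10 --volume-type gp2
```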

Add your volume information in the pod description file aws-ebs-web.yaml, then create the pod:

```shell
$ kubectl create -f examples/volumes/aws_ebs/aws-ebs-web.yaml
```

Add some data to the volume if it is empty:

```sh
$ echo "Hello World" > /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/{Region}/{Volume ID}/index.html
```

You should now be able to query your web server:

```sh
$ curl <Pod IP address>
Hello World
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

21 vendor/k8s.io/kubernetes/examples/volumes/aws_ebs/aws-ebs-web.yaml generated vendored Normal file
@@ -0,0 +1,21 @@

apiVersion: v1
kind: Pod
metadata:
  name: aws-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      volumeMounts:
        - name: html-volume
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: html-volume
      awsElasticBlockStore:
        # Enter the volume ID below
        volumeID: volume_ID
        fsType: ext4

22 vendor/k8s.io/kubernetes/examples/volumes/azure_disk/README.md generated vendored Normal file
@@ -0,0 +1,22 @@

# How to Use it?

On an Azure VM, create a Pod using the volume spec based on [azure](azure.yaml).

In the pod, you need to provide the following information:

- *diskName*: (required) the name of the VHD blob object.
- *diskURI*: (required) the URI of the VHD blob object.
- *cachingMode*: (optional) disk caching mode. Must be one of None, ReadOnly, or ReadWrite. Default is None.
- *fsType*: (optional) the filesystem type to mount. Default is ext4.
- *readOnly*: (optional) whether the filesystem is used as readOnly. Default is false.

Launch the Pod:

```console
# kubectl create -f examples/volumes/azure_disk/azure.yaml
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

16 vendor/k8s.io/kubernetes/examples/volumes/azure_disk/azure.yaml generated vendored Normal file
@@ -0,0 +1,16 @@

apiVersion: v1
kind: Pod
metadata:
  name: azure
spec:
  containers:
    - image: kubernetes/pause
      name: azure
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureDisk:
        diskName: test.vhd
        diskURI: https://someaccount.blob.microsoft.net/vhds/test.vhd

35 vendor/k8s.io/kubernetes/examples/volumes/azure_file/README.md generated vendored Normal file
@@ -0,0 +1,35 @@

# How to Use it?

Install *cifs-utils* on the Kubernetes host. For example, on Fedora-based Linux:

    # yum -y install cifs-utils

Note: as explained in [Azure File Storage for Linux](https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/), the Linux hosts and the file share must be in the same Azure region.

Obtain a Microsoft Azure storage account and create a [secret](secret/azure-secret.yaml) that contains the base64-encoded Azure Storage account name and key. In the secret file, base64-encode the Azure Storage account name and pair it with the name *azurestorageaccountname*, and base64-encode the Azure Storage access key and pair it with the name *azurestorageaccountkey*.
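
For example, a minimal sketch of producing the two base64 values on the command line (the account name below is the sample `k8stest` from the secret file; the key is a placeholder):

```sh
# -n avoids base64-encoding a trailing newline.
echo -n "k8stest" | base64                            # value for azurestorageaccountname
echo -n "<storage-account-access-key>" | base64       # value for azurestorageaccountkey
```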

Then create a Pod using the volume spec based on [azure](azure.yaml).

In the pod, you need to provide the following information:

- *secretName*: the name of the secret that contains both the Azure storage account name and key.
- *shareName*: The share name to be used.
- *readOnly*: Whether the filesystem is used as readOnly.

Create the secret:

```console
# kubectl create -f examples/volumes/azure_file/secret/azure-secret.yaml
```

You should see the account name and key from `kubectl get secret`.

Then create the Pod:

```console
# kubectl create -f examples/volumes/azure_file/azure.yaml
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

17 vendor/k8s.io/kubernetes/examples/volumes/azure_file/azure.yaml generated vendored Normal file
@@ -0,0 +1,17 @@

apiVersion: v1
kind: Pod
metadata:
  name: azure
spec:
  containers:
    - image: kubernetes/pause
      name: azure
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: k8stest
        readOnly: false

8 vendor/k8s.io/kubernetes/examples/volumes/azure_file/secret/azure-secret.yaml generated vendored Normal file
@@ -0,0 +1,8 @@

apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountname: azhzdGVzdA==
  azurestorageaccountkey: eElGMXpKYm5ub2pGTE1Ta0JwNTBteDAyckhzTUsyc2pVN21GdDRMMTNob0I3ZHJBYUo4akQ2K0E0NDNqSm9nVjd5MkZVT2hRQ1dQbU02WWFOSHk3cWc9PQ==

38 vendor/k8s.io/kubernetes/examples/volumes/cephfs/README.md generated vendored Normal file
@@ -0,0 +1,38 @@

# How to Use it?

Install Ceph on the Kubernetes host. For example, on Fedora 21:

    # yum -y install ceph

If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes).

Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.
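
One way to do this, assuming you can run `ceph` commands against the cluster with admin credentials (a sketch, not the only way to obtain the keyring):

```sh
# Export the admin keyring from the cluster to the Kubernetes host.
ceph auth get client.admin -o /etc/ceph/keyring
```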

Once you have installed Ceph and a Kubernetes cluster, you can create a pod based on my examples [cephfs.yaml](cephfs.yaml) and [cephfs-with-secret.yaml](cephfs-with-secret.yaml). In the pod YAML, you need to provide the following information.

- *monitors*: Array of Ceph monitors.
- *path*: Used as the mounted root, rather than the full Ceph tree. If not provided, the default */* is used.
- *user*: The RADOS user name. If not provided, the default *admin* is used.
- *secretFile*: The path to the keyring file. If not provided, the default */etc/ceph/user.secret* is used.
- *secretRef*: Reference to Ceph authentication secrets. If provided, *secret* overrides *secretFile*.
- *readOnly*: Whether the filesystem is used as readOnly.

Here are the commands:

```console
# kubectl create -f examples/volumes/cephfs/cephfs.yaml

# create a secret if you want to use Ceph secret instead of secret file
# kubectl create -f examples/volumes/cephfs/secret/ceph-secret.yaml

# kubectl create -f examples/volumes/cephfs/cephfs-with-secret.yaml
# kubectl get pods
```
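
The *key* field in [ceph-secret.yaml](secret/ceph-secret.yaml) is the base64-encoded Ceph secret. A sketch of producing it, assuming admin credentials on a Ceph node:

```sh
# Print the admin key and base64-encode it for the Kubernetes secret.
ceph auth get-key client.admin | base64
```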

If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

22 vendor/k8s.io/kubernetes/examples/volumes/cephfs/cephfs-with-secret.yaml generated vendored Normal file
@@ -0,0 +1,22 @@

apiVersion: v1
kind: Pod
metadata:
  name: cephfs2
spec:
  containers:
    - name: cephfs-rw
      image: kubernetes/pause
      volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs
  volumes:
    - name: cephfs
      cephfs:
        monitors:
          - 10.16.154.78:6789
          - 10.16.154.82:6789
          - 10.16.154.83:6789
        user: admin
        secretRef:
          name: ceph-secret
        readOnly: true

23 vendor/k8s.io/kubernetes/examples/volumes/cephfs/cephfs.yaml generated vendored Normal file
@@ -0,0 +1,23 @@

apiVersion: v1
kind: Pod
metadata:
  name: cephfs
spec:
  containers:
    - name: cephfs-rw
      image: kubernetes/pause
      volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs
  volumes:
    - name: cephfs
      cephfs:
        monitors:
          - 10.16.154.78:6789
          - 10.16.154.82:6789
          - 10.16.154.83:6789
        # by default the path is /, but you can override and mount a specific path of the filesystem by using the path attribute
        # path: /some/path/inside/cephfs
        user: admin
        secretFile: "/etc/ceph/admin.secret"
        readOnly: true

6 vendor/k8s.io/kubernetes/examples/volumes/cephfs/secret/ceph-secret.yaml generated vendored Normal file
@@ -0,0 +1,6 @@

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==

44 vendor/k8s.io/kubernetes/examples/volumes/fibre_channel/README.md generated vendored Normal file
@@ -0,0 +1,44 @@

## Step 1. Setting up Fibre Channel Target

On your FC SAN Zone manager, allocate and mask LUNs so Kubernetes hosts can access them.

## Step 2. Creating the Pod with Fibre Channel persistent storage

Once you have installed the Fibre Channel initiator and Kubernetes, you can create a pod based on my example [fc.yaml](fc.yaml). In the pod YAML, you need to provide *targetWWNs* (an array of the Fibre Channel target's World Wide Names), the *lun*, the type of the filesystem that has been created on the lun, and the *readOnly* boolean.
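
If you need to look up the World Wide Names visible from a host, one hedged approach is to read them from sysfs (these paths assume a Linux host with an FC HBA driver loaded):

```sh
# Port WWNs of the local FC host adapters; once the fabric is zoned,
# remote target WWNs appear under /sys/class/fc_remote_ports/*/port_name.
cat /sys/class/fc_host/host*/port_name
```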

Once your pod is created, run it on the Kubernetes master:

```console
kubectl create -f ./your_new_pod.yaml
```

Here is my command and output:

```console
# kubectl create -f examples/volumes/fibre_channel/fc.yaml
# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
fcpd      2/2       Running   0          10m
```

On the Kubernetes host, I got these in the mount output:

```console
# mount | grep /var/lib/kubelet/plugins/kubernetes.io
/dev/mapper/360a98000324669436c2b45666c567946 on /var/lib/kubelet/plugins/kubernetes.io/fc/500a0982991b8dc5-lun-2 type ext4 (ro,relatime,seclabel,stripe=16,data=ordered)
/dev/mapper/360a98000324669436c2b45666c567944 on /var/lib/kubelet/plugins/kubernetes.io/fc/500a0982991b8dc5-lun-1 type ext4 (rw,relatime,seclabel,stripe=16,data=ordered)
```

If you ssh to that machine, you can run `docker ps` to see the actual pod.

```console
# docker ps
CONTAINER ID        IMAGE                                  COMMAND     CREATED          STATUS          PORTS   NAMES
090ac457ddc2        kubernetes/pause                       "/pause"    12 minutes ago   Up 12 minutes           k8s_fcpd-rw.aae720ec_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_99eb5415
5e2629cf3e7b        kubernetes/pause                       "/pause"    12 minutes ago   Up 12 minutes           k8s_fcpd-ro.857720dc_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_c0175742
2948683253f7        gcr.io/google_containers/pause:0.8.0   "/pause"    12 minutes ago   Up 12 minutes           k8s_POD.7be6d81d_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_8d9dd7bf
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

18 vendor/k8s.io/kubernetes/examples/volumes/fibre_channel/fc.yaml generated vendored Normal file
@@ -0,0 +1,18 @@

apiVersion: v1
kind: Pod
metadata:
  name: fc
spec:
  containers:
    - image: kubernetes/pause
      name: fc
      volumeMounts:
        - name: fc-vol
          mountPath: /mnt/fc
  volumes:
    - name: fc-vol
      fc:
        targetWWNs: ['500a0982991b8dc5', '500a0982891b8dc5']
        lun: 2
        fsType: ext4
        readOnly: true

84 vendor/k8s.io/kubernetes/examples/volumes/flexvolume/README.md generated vendored Normal file
@@ -0,0 +1,84 @@

# Flexvolume

Flexvolume enables users to mount vendor volumes into Kubernetes. It expects vendor drivers to be installed in the volume plugin path on every kubelet node.

It allows vendors to develop their own drivers to mount volumes on nodes.

*Note: Flexvolume is an alpha feature and is likely to change in the future.*

## Prerequisites

Install the vendor driver on all nodes in the kubelet plugin path. Path for installing the plugin: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/\<vendor~driver\>/\<driver\>

For example, to add a 'cifs' driver by vendor 'foo', install the driver at: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/foo~cifs/cifs
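
A sketch of the install step on one node (the vendor name "foo", driver name "cifs", and local source path are placeholder assumptions):

```sh
# Hypothetical vendor "foo" shipping a driver executable named "cifs".
sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/foo~cifs
sudo install -m 0755 ./cifs /usr/libexec/kubernetes/kubelet-plugins/volume/exec/foo~cifs/cifs
```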

## Plugin details

The driver will be invoked with 'init' to initialize the driver. It will be invoked with 'attach' to attach the volume and with 'detach' to detach the volume from the kubelet node. It also supports custom mounts using 'mount' and 'unmount' callouts to the driver.

### Driver invocation model:

Init:

```
<driver executable> init
```

Attach:

```
<driver executable> attach <json options>
```

Detach:

```
<driver executable> detach <mount device>
```

Mount:

```
<driver executable> mount <target mount dir> <mount device> <json options>
```

Unmount:

```
<driver executable> unmount <mount dir>
```
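
Putting the callouts together, a minimal do-nothing driver skeleton might look like the following. This is only a sketch: it reports success for every operation without attaching or mounting anything, and the "device" it returns is a placeholder.

```sh
#!/bin/bash
# Minimal flexvolume driver skeleton: maps each callout to the JSON
# status object the plugin expects on stdout.
case "$1" in
  init)    echo '{"status": "Success"}' ;;
  attach)  echo '{"status": "Success", "device": "/dev/null"}' ;;  # placeholder device path
  detach)  echo '{"status": "Success"}' ;;
  mount)   echo '{"status": "Success"}' ;;
  unmount) echo '{"status": "Success"}' ;;
  *)       echo '{"status": "Failure", "message": "Unsupported operation"}'; exit 1 ;;
esac
exit 0
```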

See [lvm](lvm) for a quick example of how to write a simple flexvolume driver.

### Driver output:

Flexvolume expects the driver to reply with the status of the operation in the following format.

```
{
	"status": "<Success/Failure>",
	"message": "<Reason for success/failure>",
	"device": "<Path to the device attached. This field is valid only for attach calls>"
}
```

### Default JSON options

In addition to the flags specified by the user in the Options field of the FlexVolumeSource, the following flags are also passed to the executable.

```
"kubernetes.io/fsType":"<FS type>",
"kubernetes.io/readwrite":"<rw>",
"kubernetes.io/secret/key1":"<secret1>"
...
"kubernetes.io/secret/keyN":"<secretN>"
```

### Example of Flexvolume

See [nginx.yaml](nginx.yaml) for a quick example of how to use Flexvolume in a pod.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

152 vendor/k8s.io/kubernetes/examples/volumes/flexvolume/lvm generated vendored Executable file
@@ -0,0 +1,152 @@

#!/bin/bash

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Notes:
#  - Please install the "jq" package before using this driver.
usage() {
	err "Invalid usage. Usage: "
	err "\t$0 init"
	err "\t$0 attach <json params>"
	err "\t$0 detach <mount device>"
	err "\t$0 mount <mount dir> <mount device> <json params>"
	err "\t$0 unmount <mount dir>"
	exit 1
}

err() {
	echo -ne $* 1>&2
}

log() {
	echo -ne $* >&1
}

ismounted() {
	MOUNT=`findmnt -n ${MNTPATH} 2>/dev/null | cut -d' ' -f1`
	if [ "${MOUNT}" == "${MNTPATH}" ]; then
		echo "1"
	else
		echo "0"
	fi
}

attach() {
	VOLUMEID=$(echo $1 | jq -r '.volumeID')
	SIZE=$(echo $1 | jq -r '.size')
	VG=$(echo $1 | jq -r '.volumegroup')

	# LVM substitutes - with --
	VOLUMEID=`echo $VOLUMEID | sed s/-/--/g`
	VG=`echo $VG | sed s/-/--/g`

	DMDEV="/dev/mapper/${VG}-${VOLUMEID}"
	if [ ! -b "${DMDEV}" ]; then
		err "{\"status\": \"Failure\", \"message\": \"Volume ${VOLUMEID} does not exist\"}"
		exit 1
	fi
	log "{\"status\": \"Success\", \"device\":\"${DMDEV}\"}"
	exit 0
}

detach() {
	log "{\"status\": \"Success\"}"
	exit 0
}

domount() {
	MNTPATH=$1
	DMDEV=$2
	FSTYPE=$(echo $3 | jq -r '.["kubernetes.io/fsType"]')

	if [ ! -b "${DMDEV}" ]; then
		err "{\"status\": \"Failure\", \"message\": \"${DMDEV} does not exist\"}"
		exit 1
	fi

	if [ $(ismounted) -eq 1 ] ; then
		log "{\"status\": \"Success\"}"
		exit 0
	fi

	VOLFSTYPE=`blkid -o udev ${DMDEV} 2>/dev/null | grep "ID_FS_TYPE" | cut -d"=" -f2`
	if [ "${VOLFSTYPE}" == "" ]; then
		mkfs -t ${FSTYPE} ${DMDEV} >/dev/null 2>&1
		if [ $? -ne 0 ]; then
			err "{ \"status\": \"Failure\", \"message\": \"Failed to create fs ${FSTYPE} on device ${DMDEV}\"}"
			exit 1
		fi
	fi

	mkdir -p ${MNTPATH} &> /dev/null

	mount ${DMDEV} ${MNTPATH} &> /dev/null
	if [ $? -ne 0 ]; then
		err "{ \"status\": \"Failure\", \"message\": \"Failed to mount device ${DMDEV} at ${MNTPATH}\"}"
		exit 1
	fi
	log "{\"status\": \"Success\"}"
	exit 0
}

unmount() {
	MNTPATH=$1
	if [ $(ismounted) -eq 0 ] ; then
		log "{\"status\": \"Success\"}"
		exit 0
	fi

	umount ${MNTPATH} &> /dev/null
	if [ $? -ne 0 ]; then
		err "{ \"status\": \"Failure\", \"message\": \"Failed to unmount volume at ${MNTPATH}\"}"
		exit 1
	fi
	rmdir ${MNTPATH} &> /dev/null

	log "{\"status\": \"Success\"}"
	exit 0
}

op=$1

if [ "$op" = "init" ]; then
	log "{\"status\": \"Success\"}"
	exit 0
fi

if [ $# -lt 2 ]; then
	usage
fi

shift

case "$op" in
	attach)
		attach $*
		;;
	detach)
		detach $*
		;;
	mount)
		domount $*
		;;
	unmount)
		unmount $*
		;;
	*)
		usage
esac

exit 1

23 vendor/k8s.io/kubernetes/examples/volumes/flexvolume/nginx.yaml generated vendored Normal file
@@ -0,0 +1,23 @@

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: test
          mountPath: /data
      ports:
        - containerPort: 80
  volumes:
    - name: test
      flexVolume:
        driver: "kubernetes.io/lvm"
        fsType: "ext4"
        options:
          volumeID: "vol1"
          size: "1000m"
          volumegroup: "kube_vg"

115 vendor/k8s.io/kubernetes/examples/volumes/flocker/README.md generated vendored Normal file
@@ -0,0 +1,115 @@

## Using Flocker volumes

[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.

This example provides information about how to set up a Flocker installation and configure it in Kubernetes, as well as how to use the plugin to use Flocker datasets as volumes in Kubernetes.

### Prerequisites

A Flocker cluster is required to use Flocker with Kubernetes. A Flocker cluster comprises:

- *Flocker Control Service*: provides a REST over HTTP API to modify the desired configuration of the cluster;
- *Flocker Dataset Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration;
- *Flocker Container Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration (unused in this configuration but still required in the cluster).

The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as the Kubernetes master and the Flocker Dataset/Container Agents on every Kubernetes slave node.

It is recommended to follow [Installing Flocker](https://docs.clusterhq.com/en/latest/install/index.html) and the instructions below to set up the Flocker cluster to be used with Kubernetes.

#### Flocker Control Service

The Flocker Control Service should be installed manually on a host. In the future, this may be deployed in pod(s) and exposed as a Kubernetes service.

#### Flocker Agent(s)

The Flocker Agents should be manually installed on *all* Kubernetes nodes. These agents are responsible for (de)attachment and (un)mounting and are therefore services that should be run with appropriate privileges on these hosts.

In order for the plugin to connect to Flocker (via REST API), several environment variables must be specified on *all* Kubernetes nodes. These may be specified in an init script for the node's Kubelet service; for example, you could store the below environment variables in a file called `/etc/flocker/env` and place `EnvironmentFile=/etc/flocker/env` into `/etc/systemd/system/kubelet.service` or wherever the `kubelet.service` file lives. A sketch of such a file follows this list.

The environment variables that need to be set are:

- `FLOCKER_CONTROL_SERVICE_HOST` should refer to the hostname of the Control Service
- `FLOCKER_CONTROL_SERVICE_PORT` should refer to the port of the Control Service (the API service defaults to 4523 but this must still be specified)

The following environment variables should refer to keys and certificates on the host that are specific to that host.

- `FLOCKER_CONTROL_SERVICE_CA_FILE` should refer to the full path to the cluster certificate file
- `FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE` should refer to the full path to the [api key](https://docs.clusterhq.com/en/latest/config/generate-api-plugin.html) file for the API user
- `FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE` should refer to the full path to the [api certificate](https://docs.clusterhq.com/en/latest/config/generate-api-plugin.html) file for the API user
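
For example, a hypothetical `/etc/flocker/env` (the hostname and file paths are placeholders; the variable names are the ones listed above):

```sh
FLOCKER_CONTROL_SERVICE_HOST=flocker-control.example.com
FLOCKER_CONTROL_SERVICE_PORT=4523
FLOCKER_CONTROL_SERVICE_CA_FILE=/etc/flocker/cluster.crt
FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE=/etc/flocker/api.key
FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE=/etc/flocker/api.crt
```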

More details regarding cluster authentication can be found at the documentation: [Flocker Cluster Security & Authentication](https://docs.clusterhq.com/en/latest/concepts/security.html) and [Configuring Cluster Authentication](https://docs.clusterhq.com/en/latest/config/configuring-authentication.html).

### Create a pod with a Flocker volume

**Note**: A new dataset must first be provisioned using the Flocker tools or the Docker CLI *(to use the Docker CLI, you need the [Flocker plugin for Docker](https://clusterhq.com/docker-plugin/) installed along with Docker 1.9+)*. For example, using the [Volumes CLI](https://docs.clusterhq.com/en/latest/labs/volumes-cli.html), create a new dataset called 'my-flocker-vol' of size 10GB:

```sh
flocker-volumes create -m name=my-flocker-vol -s 10G -n <node-uuid>

# -n or --node= is the initial primary node for the dataset (any unique
#    prefix of the node uuid, see flocker-volumes list-nodes)
```

The following *volume* spec from the [example pod](flocker-pod.yml) illustrates how to use this Flocker dataset as a volume.

> Note: the [example pod](flocker-pod.yml) used here does not include a replication controller, so the pod will not be rescheduled upon failure. If you're looking for an example that does include a replication controller and service spec, see [this example pod including a replication controller](flocker-pod-with-rc.yml).

```yaml
volumes:
  - name: www-root
    flocker:
      datasetName: my-flocker-vol
```

- **datasetName** is the unique name for the Flocker dataset and should match the *name* in the metadata.

Use `kubectl` to create the pod.

```sh
$ kubectl create -f examples/volumes/flocker/flocker-pod.yml
```

You should now verify that the pod is running and determine its IP address:

```sh
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
flocker   1/1       Running   0          3m
$ kubectl get pods flocker -t '{{.status.hostIP}}{{"\n"}}'
172.31.25.62
```

An `ls` of the `/flocker` directory on the host (identified by the IP as above) will show the mount point for the volume.

```sh
$ ls /flocker
0cf8789f-00da-4da0-976a-b6b1dc831159
```

You can also see the mountpoint by inspecting the docker container on that host.

```sh
$ docker inspect -f "{{.Mounts}}" <container-id> | grep flocker
...{ /flocker/0cf8789f-00da-4da0-976a-b6b1dc831159 /usr/share/nginx/html true}
```

Add an index.html inside this directory and use `curl` to see this HTML file served up by nginx.

```sh
$ echo "<h1>Hello, World</h1>" | tee /flocker/0cf8789f-00da-4da0-976a-b6b1dc831159/index.html
$ curl <ip>
```

### More Info

Read more about the [Flocker Cluster Architecture](https://docs.clusterhq.com/en/latest/concepts/architecture.html) and learn more about Flocker by visiting the [Flocker Documentation](https://docs.clusterhq.com/).

#### Video Demo

To see a demo example of using Kubernetes and Flocker, visit [Flocker's blog post on High Availability with Kubernetes and Flocker](https://clusterhq.com/2015/12/22/ha-demo-kubernetes-flocker/).

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

47 vendor/k8s.io/kubernetes/examples/volumes/flocker/flocker-pod-with-rc.yml generated vendored Normal file
@@ -0,0 +1,47 @@

apiVersion: v1
kind: Service
metadata:
  name: flocker-ghost
  labels:
    app: flocker-ghost
spec:
  ports:
    # the port that this service should serve on
    - port: 80
      targetPort: 80
  selector:
    app: flocker-ghost
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: flocker-ghost
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  labels:
    purpose: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flocker-ghost
    spec:
      containers:
        - name: flocker-ghost
          image: ghost:0.7.1
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - containerPort: 2368
              hostPort: 80
              protocol: TCP
          volumeMounts:
            # name must match the volume name below
            - name: ghost-data
              mountPath: "/var/lib/ghost"
      volumes:
        - name: ghost-data
          flocker:
            datasetName: my-flocker-vol

19 vendor/k8s.io/kubernetes/examples/volumes/flocker/flocker-pod.yml generated vendored Normal file
@@ -0,0 +1,19 @@

apiVersion: v1
kind: Pod
metadata:
  name: flocker-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        # name must match the volume name below
        - name: www-root
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: www-root
      flocker:
        datasetName: my-flocker-vol

104 vendor/k8s.io/kubernetes/examples/volumes/glusterfs/README.md generated vendored Normal file
@@ -0,0 +1,104 @@

## Glusterfs

[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers to use Glusterfs volumes.

The example assumes that you have already set up a Glusterfs server cluster and that the Glusterfs client package is installed on all Kubernetes nodes.

### Prerequisites

Set up the Glusterfs server cluster; install the Glusterfs client package on the Kubernetes nodes. ([Guide](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/))

### Create endpoints

Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),

```
"addresses": [
  {
    "ip": "10.240.106.152"
  }
],
"ports": [
  {
    "port": 1
  }
]
```

The "ip" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
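
Before wiring the addresses in, you can confirm the server nodes and the volume name from any Gluster node; a sketch, using the `kube_vol` volume this example mounts later:

```sh
# Lists volume details, including the bricks' host addresses.
gluster volume info kube_vol
```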

Create the endpoints,

```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-endpoints.json
```

You can verify that the endpoints are successfully created by running

```sh
$ kubectl get endpoints
NAME                ENDPOINTS
glusterfs-cluster   10.240.106.152:1,10.240.79.157:1
```

We also need to create a service for these endpoints, so that they will persist. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.

Use this command to create the service:

```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-service.json
```

### Create a POD

The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.

```json
{
  "name": "glusterfsvol",
  "glusterfs": {
    "endpoints": "glusterfs-cluster",
    "path": "kube_vol",
    "readOnly": true
  }
}
```

The parameters are explained as follows.

- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storms; it will randomly pick one host from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.

Create a pod that has a container using a Glusterfs volume,

```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-pod.json
```

You can verify that the pod is running:

```sh
$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
glusterfs   1/1       Running   0          3m

$ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
10.240.169.172
```

You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,

```sh
$ mount | grep kube_vol
10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```

You may also run `docker ps` on the host to see the actual container.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

33 vendor/k8s.io/kubernetes/examples/volumes/glusterfs/glusterfs-endpoints.json generated vendored Normal file
@@ -0,0 +1,33 @@

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.240.106.152"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.240.79.157"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

31 vendor/k8s.io/kubernetes/examples/volumes/glusterfs/glusterfs-pod.json generated vendored Normal file
@@ -0,0 +1,31 @@

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "glusterfs"
  },
  "spec": {
    "containers": [
      {
        "name": "glusterfs",
        "image": "kubernetes/pause",
        "volumeMounts": [
          {
            "mountPath": "/mnt/glusterfs",
            "name": "glusterfsvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glusterfsvol",
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "kube_vol",
          "readOnly": true
        }
      }
    ]
  }
}

12 vendor/k8s.io/kubernetes/examples/volumes/glusterfs/glusterfs-service.json generated vendored Normal file
@@ -0,0 +1,12 @@

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}

90 vendor/k8s.io/kubernetes/examples/volumes/iscsi/README.md generated vendored Normal file
@@ -0,0 +1,90 @@

## Introduction

The Kubernetes iSCSI implementation can connect to iSCSI devices via open-iscsi and multipathd on Linux.
Currently supported features are
  * Connecting to one portal
  * Mounting a device directly or via multipathd
  * Formatting and partitioning any new device connected

## Prerequisites

This example expects there to be a working iSCSI target to connect to.
If there isn't one in place, then it is possible to set up a software version on Linux by following these guides

  * [Setup an iSCSI target on Fedora](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi)
  * [Install the iSCSI initiator on Fedora](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2)
  * [Install multipathd for mpio support if required](http://www.linuxstories.eu/2014/07/how-to-setup-dm-multipath-on-rhel.html)

## Creating the pod with iSCSI persistent storage

Once you have configured the iSCSI initiator, you can create a pod based on the example *iscsi.yaml*. In the pod YAML, you need to provide *targetPortal* (the iSCSI target's **IP** address and *port* if not the default port 3260), the target's *iqn*, *lun*, the type of the filesystem that has been created on the lun, and the *readOnly* boolean. No initiator information is required.
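
To sanity-check the initiator setup, you can ask the target portal for its records with open-iscsi's admin tool; a sketch using this example's portal address (adjust to yours):

```sh
# Discover targets exposed by the portal; the IQN used below should appear.
iscsiadm -m discovery -t sendtargets -p 10.0.2.15:3260
```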

If you want to use an iSCSI offload card or other open-iscsi transports besides tcp, set up an iSCSI interface and provide *iscsiInterface* in the pod YAML. The default name for an iscsi iface (open-iscsi parameter iface.iscsi\_ifacename) is in the format transport\_name.hwaddress when generated by iscsiadm. See [open-iscsi](http://www.open-iscsi.org/docs/README) or [openstack](http://docs.openstack.org/kilo/config-reference/content/iscsi-iface-config.html) for detailed configuration information.

**Note:** If you have followed the instructions in the links above, you may have partitioned the device. The iSCSI volume plugin does not currently support partitions, so format the device as one partition or leave the device raw and Kubernetes will partition and format it on first mount.

Once the pod config is created, run it on the Kubernetes master:

```console
kubectl create -f ./your_new_pod.yaml
```

Here is the example pod created and the expected output:

```console
# kubectl create -f examples/volumes/iscsi/iscsi.yaml
# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
iscsipd   2/2       RUNNING   0          2m
```

On the Kubernetes node, verify the mount output.

For a non-mpio device the output should look like the following

```console
# mount | grep kub
/dev/sdb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (ro,relatime,data=ordered)
/dev/sdb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-1 type ext4 (rw,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
```

And for a node with mpio enabled the expected output would be similar to the following

```console
# mount | grep kub
/dev/mapper/mpatha on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (ro,relatime,data=ordered)
/dev/mapper/mpatha on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-1 type ext4 (rw,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
```

If you ssh to that machine, you can run `docker ps` to see the actual pod.

```console
# docker ps
CONTAINER ID        IMAGE              COMMAND     CREATED         STATUS         PORTS   NAMES
f855336407f4        kubernetes/pause   "/pause"    6 minutes ago   Up 6 minutes           k8s_iscsipd-ro.d130ec3e_iscsipd_default_f527ca5b-6d87-11e5-aa7e-080027ff6387_5409a4cb
3b8a772515d2        kubernetes/pause   "/pause"    6 minutes ago   Up 6 minutes           k8s_iscsipd-rw.ed58ec4e_iscsipd_default_f527ca5b-6d87-11e5-aa7e-080027ff6387_d25592c5
```

Run *docker inspect* and verify the containers mounted the host directory into their */mnt/iscsipd* directory.

```console
# docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt/iscsipd" }}{{ .Source }}{{ end }}{{ end }}' f855336407f4
/var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro

# docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt/iscsipd" }}{{ .Source }}{{ end }}{{ end }}' 3b8a772515d2
/var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

32 vendor/k8s.io/kubernetes/examples/volumes/iscsi/iscsi.yaml generated vendored Normal file
@@ -0,0 +1,32 @@

---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
    - name: iscsipd-ro
      image: kubernetes/pause
      volumeMounts:
        - mountPath: "/mnt/iscsipd"
          name: iscsipd-ro
    - name: iscsipd-rw
      image: kubernetes/pause
      volumeMounts:
        - mountPath: "/mnt/iscsipd"
          name: iscsipd-rw
  volumes:
    - name: iscsipd-ro
      iscsi:
        targetPortal: 10.0.2.15:3260
        iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
        lun: 0
        fsType: ext4
        readOnly: true
    - name: iscsipd-rw
      iscsi:
        targetPortal: 10.0.2.15:3260
        iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
        lun: 1
        fsType: ext4
        readOnly: false

165 vendor/k8s.io/kubernetes/examples/volumes/nfs/README.md generated vendored Normal file
@@ -0,0 +1,165 @@

# Outline

This example describes how to create a web frontend server, an auto-provisioned persistent volume on GCE, and an NFS-backed persistent claim.

Demonstrated Kubernetes Concepts:

* [Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/) to
  define persistent disks (disk lifecycle not tied to the Pods).
* [Services](http://kubernetes.io/docs/user-guide/services/) to enable Pods to
  locate one another.

![alt text][nfs pv example]

As illustrated above, two persistent volumes are used in this example:

- The web frontend Pod uses a persistent volume based on the NFS server, and
- The NFS server uses an auto-provisioned [persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) from GCE PD or AWS EBS.

Note: this example uses an NFS container that doesn't support NFSv4.

[nfs pv example]: nfs-pv.png

## Quickstart

```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
# get the cluster IP of the server using the following command
$ kubectl describe services nfs-server
# use the NFS server IP to update nfs-pv.yaml and execute the following
$ kubectl create -f examples/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-pvc.yaml
# run a fake backend
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
# get pod name from this command
$ kubectl get pod -l name=nfs-busybox
# use the pod name to check the test file
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
```

## Example of NFS based persistent volume

See [NFS Service and Replication Controller](nfs-web-rc.yaml) for a quick example of how to use an NFS volume claim in a replication controller. It relies on the [NFS persistent volume](nfs-pv.yaml) and [NFS persistent volume claim](nfs-pvc.yaml) in this example as well.

## Complete setup

The example below shows how to export an NFS share from a single pod replication controller and import it into two replication controllers.

### NFS server part

Define [the NFS Service and Replication Controller](nfs-server-rc.yaml) and [NFS service](nfs-server-service.yaml):

The NFS server exports an auto-provisioned persistent volume backed by GCE PD:

```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
```

```console
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
```

The directory contains a dummy `index.html`. Wait until the pod is running by checking `kubectl get pods -l role=nfs-server`.

### Create the NFS based persistent volume claim

The [NFS busybox controller](nfs-busybox-rc.yaml) uses a simple script to generate data written to the NFS server we just started. First, you'll need to find the cluster IP of the server:

```console
$ kubectl describe services nfs-server
```

Replace the invalid IP in the [nfs PV](nfs-pv.yaml). (In the future, we'll be able to tie these together using the service names, but for now, you have to hardcode the IP.)
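
A sketch of scripting that substitution (this assumes the `-t` template flag used elsewhere in these examples; 10.244.1.4 is the placeholder IP shipped in nfs-pv.yaml):

```sh
# Look up the NFS service's cluster IP and patch it into the PV definition.
SERVER_IP=$(kubectl get services nfs-server -t '{{.spec.clusterIP}}')
sed -i "s/10.244.1.4/${SERVER_IP}/" examples/volumes/nfs/nfs-pv.yaml
```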

Create the [persistent volume](../../../docs/user-guide/persistent-volumes.md) and the persistent volume claim for your NFS server. The persistent volume and claim give us an indirection that allows multiple pods to refer to the NFS server using a symbolic name rather than the hardcoded server address.

```console
$ kubectl create -f examples/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-pvc.yaml
```

## Setup the fake backend

The [NFS busybox controller](nfs-busybox-rc.yaml) updates `index.html` on the NFS server every 10 seconds. Let's start that now:

```console
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
```

Conveniently, it's also a `busybox` pod, so we can get an early check that our mounts are working now. Find a busybox pod and exec:

```console
$ kubectl get pod -l name=nfs-busybox
NAME                READY     STATUS    RESTARTS   AGE
nfs-busybox-jdhf3   1/1       Running   0          25m
nfs-busybox-w3s4t   1/1       Running   0          25m
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
Thu Oct 22 19:20:18 UTC 2015
nfs-busybox-w3s4t
```

You should see output similar to the above if everything is working well. If it's not, make sure you changed the invalid IP in the [NFS PV](nfs-pv.yaml) file and make sure the `describe services` command above had endpoints listed (indicating the service was associated with a running pod).

### Setup the web server

The [web server controller](nfs-web-rc.yaml) is another simple replication controller that demonstrates reading from the NFS share exported above as an NFS volume and runs a simple web server on it.

Define the pod:

```console
$ kubectl create -f examples/volumes/nfs/nfs-web-rc.yaml
```

This creates two pods, each of which serves the `index.html` from above. We can then use a simple service to front it:

```console
kubectl create -f examples/volumes/nfs/nfs-web-service.yaml
```

We can then use the busybox container we launched before to check that `nginx` is serving the data appropriately:

```console
$ kubectl get pod -l name=nfs-busybox
NAME                READY     STATUS    RESTARTS   AGE
nfs-busybox-jdhf3   1/1       Running   0          1h
nfs-busybox-w3s4t   1/1       Running   0          1h
$ kubectl get services nfs-web
NAME      LABELS    SELECTOR            IP(S)        PORT(S)
nfs-web   <none>    role=web-frontend   10.0.68.37   80/TCP
$ kubectl exec nfs-busybox-jdhf3 -- wget -qO- http://10.0.68.37
Thu Oct 22 19:28:55 UTC 2015
nfs-busybox-w3s4t
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

32 vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-busybox-rc.yaml generated vendored Normal file
@@ -0,0 +1,32 @@

# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

25 vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-data/Dockerfile generated vendored Normal file
@@ -0,0 +1,25 @@

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM centos
RUN yum -y install /usr/bin/ps nfs-utils && yum clean all
RUN mkdir -p /exports
ADD run_nfs.sh /usr/local/bin/
ADD index.html /tmp/index.html
RUN chmod 644 /tmp/index.html

# expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp

ENTRYPOINT ["/usr/local/bin/run_nfs.sh", "/exports"]

13 vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-data/README.md generated vendored Normal file
@@ -0,0 +1,13 @@

# NFS-exporter container with a file

This container exports /exports, with index.html in it, via NFS. Based on ../exports. Since some Linux kernels have issues running NFSv4 daemons in containers, only NFSv3 is opened in this container.
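
To build the image locally from this directory, a standard docker build works (the tag below is an arbitrary placeholder):

```sh
docker build -t my-nfs-server .
```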

Available as `gcr.io/google-samples/nfs-server`

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

1 vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-data/index.html generated vendored Normal file
@@ -0,0 +1 @@

Hello world!

72 vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-data/run_nfs.sh generated vendored Executable file
@@ -0,0 +1,72 @@

#!/bin/bash

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

function start()
{

	# prepare /etc/exports
	for i in "$@"; do
		# fsid=0: needed for NFSv4
		echo "$i *(rw,fsid=0,insecure,no_root_squash)" >> /etc/exports
		# move index.html to here
		/bin/cp /tmp/index.html $i/
		chmod 644 $i/index.html
		echo "Serving $i"
	done

	# start rpcbind if it is not started yet
	/usr/sbin/rpcinfo 127.0.0.1 > /dev/null; s=$?
	if [ $s -ne 0 ]; then
		echo "Starting rpcbind"
		/usr/sbin/rpcbind -w
	fi

	mount -t nfsd nfsd /proc/fs/nfsd

	# -N 4.x: disable NFSv4
	# -V 3: enable NFSv3
	/usr/sbin/rpc.mountd -N 2 -V 3 -N 4 -N 4.1

	/usr/sbin/exportfs -r
	# -G 10 to reduce grace time to 10 seconds (the lowest allowed)
	/usr/sbin/rpc.nfsd -G 10 -N 2 -V 3 -N 4 -N 4.1 2
	/usr/sbin/rpc.statd --no-notify
	echo "NFS started"
}

function stop()
{
	echo "Stopping NFS"

	/usr/sbin/rpc.nfsd 0
	/usr/sbin/exportfs -au
	/usr/sbin/exportfs -f

	kill $( pidof rpc.mountd )
	umount /proc/fs/nfsd
	echo > /etc/exports
	exit 0
}


trap stop TERM

start "$@"

# Ugly hack to do nothing and wait for SIGTERM
while true; do
	sleep 5
done

BIN vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-pv.png generated vendored Normal file
Binary file not shown. (Size: 9.2 KiB)
13
vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-pv.yaml
generated vendored Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.244.1.4
    path: "/exports"
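
The FIXME above needs the address of the NFS server; if you use the nfs-server service defined in this directory, one way to look it up after the service has been created is the same `--template` pattern used elsewhere in these examples:

```console
$ kubectl get service nfs-server --template '{{.spec.clusterIP}}'
```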
10
vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-pvc.yaml
generated vendored Normal file
@@ -0,0 +1,10 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
32
vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-server-rc.yaml
generated vendored Normal file
@@ -0,0 +1,32 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google-samples/nfs-server:1.1
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /exports
            name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo
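
Note that the `nfs-pv-provisioning-demo` claim referenced above is defined in provisioner/nfs-server-gce-pv.yaml further down, so it has to exist before the replication controller starts; a typical ordering:

```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
```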
14
vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-server-service.yaml
generated vendored Normal file
@@ -0,0 +1,14 @@
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
30
vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-web-rc.yaml
generated vendored Normal file
@@ -0,0 +1,30 @@
# This pod mounts the nfs volume claim into /usr/share/nginx/html and
# serves a simple web page.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
9
vendor/k8s.io/kubernetes/examples/volumes/nfs/nfs-web-service.yaml
generated vendored Normal file
@@ -0,0 +1,9 @@
kind: Service
apiVersion: v1
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend
13
vendor/k8s.io/kubernetes/examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
generated vendored Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
  annotations:
    volume.alpha.kubernetes.io/storage-class: any
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi
98
vendor/k8s.io/kubernetes/examples/volumes/quobyte/Readme.md
generated vendored Normal file
@@ -0,0 +1,98 @@
<!-- BEGIN MUNGE: GENERATED_TOC -->

- [Quobyte Volume](#quobyte-volume)
  - [Quobyte](#quobyte)
    - [Prerequisites](#prerequisites)
    - [Fixed user Mounts](#fixed-user-mounts)
  - [Creating a pod](#creating-a-pod)

<!-- END MUNGE: GENERATED_TOC -->

# Quobyte Volume

## Quobyte

[Quobyte](http://www.quobyte.com) is software that turns commodity servers into a reliable and highly automated multi-data center file system.

The example assumes that you already have a running Kubernetes cluster and have already set up the Quobyte client (1.3+) on each Kubernetes node.

### Prerequisites

- Running Quobyte storage cluster
- Quobyte client (1.3+) installed on the Kubernetes nodes; more information on how to install Quobyte on your Kubernetes nodes can be found in the Quobyte [documentation](https://support.quobyte.com)
- To get access to Quobyte and the documentation, please [contact us](http://www.quobyte.com/get-quobyte)
- An already created Quobyte volume
- The line `allow-usermapping-in-volumename` added to `/etc/quobyte/client.cfg` to allow fixed-user mounts (see the sketch below)
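
A minimal sketch of enabling that option on a client node (the config file location is the one named above; the rest of the file is installation specific):

```bash
echo "allow-usermapping-in-volumename" >> /etc/quobyte/client.cfg
```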
### Fixed user Mounts

Since version 1.3, Quobyte supports fixed-user mounts. Fixed-user mounts allow all Quobyte volumes to be mounted inside one directory and used as different users: all access to a volume is rewritten to the specified user and group (both are optional), independent of the user inside the container. You can read more about it [here](https://blog.inovex.de/docker-plugins) under the section "Quobyte Mount and Docker — what's special".
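
On a node, each fixed-user mount then shows up under the client mount point as `user#group@volume`; with the pod example below, that looks roughly like:

```bash
$ ls /var/lib/kubelet/plugins/kubernetes.io~quobyte/
root#root@testVolume
```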

## Creating a pod

See example:

<!-- BEGIN MUNGE: EXAMPLE ./quobyte-pod.yaml -->

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quobyte
spec:
  containers:
  - name: quobyte
    image: kubernetes/pause
    volumeMounts:
    - mountPath: /mnt
      name: quobytevolume
  volumes:
  - name: quobytevolume
    quobyte:
      registry: registry:7861
      volume: testVolume
      readOnly: false
      user: root
      group: root
```

[Download example](quobyte-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ./quobyte-pod.yaml -->

Parameters:
* **registry** Quobyte registry to use to mount the volume. You can specify the registry as a <host>:<port> pair; to specify multiple registries, just put a comma between them, e.g. <host1>:<port>,<host2>:<port>,<host3>:<port>. The host can be an IP address, or, if you have a working DNS, you can also provide DNS names.
* **volume** represents a Quobyte volume, which must be created before usage.
* **readOnly** is a boolean that sets the mountpoint readOnly or readWrite.
* **user** maps all access to this user. Default is `root`.
* **group** maps all access to this group. Default is `nfsnobody`.

Creating the pod:

```bash
$ kubectl create -f examples/volumes/quobyte/quobyte-pod.yaml
```

Verify that the pod is running:

```bash
$ kubectl get pods quobyte
NAME      READY     STATUS    RESTARTS   AGE
quobyte   1/1       Running   0          48m

$ kubectl get pods quobyte --template '{{.status.hostIP}}{{"\n"}}'
10.245.1.3
```

SSH onto the machine and validate that Quobyte is mounted:

```bash
$ mount | grep quobyte
quobyte@10.239.10.21:7861/ on /var/lib/kubelet/plugins/kubernetes.io~quobyte type fuse (rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other)

$ docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt"}}{{ .Source }}{{ end }}{{ end }}' 55ab97593cd3
/var/lib/kubelet/plugins/kubernetes.io~quobyte/root#root@testVolume
```


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
19
vendor/k8s.io/kubernetes/examples/volumes/quobyte/quobyte-pod.yaml
generated vendored Normal file
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
  name: quobyte
spec:
  containers:
  - name: quobyte
    image: kubernetes/pause
    volumeMounts:
    - mountPath: /mnt
      name: quobytevolume
  volumes:
  - name: quobytevolume
    quobyte:
      registry: registry:7861
      volume: testVolume
      readOnly: false
      user: root
      group: root
59
vendor/k8s.io/kubernetes/examples/volumes/rbd/README.md
generated vendored Normal file
@@ -0,0 +1,59 @@
# How to Use it?

Install Ceph on the Kubernetes host. For example, on Fedora 21

    # yum -y install ceph-common

If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/ceph/ceph-docker).

Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.

Once you have installed Ceph and Kubernetes, you can create a pod based on my examples [rbd.json](rbd.json) and [rbd-with-secret.json](rbd-with-secret.json). In the pod JSON, you need to provide the following information.

- *monitors*: Ceph monitors.
- *pool*: The name of the RADOS pool. If not provided, the default *rbd* pool is used.
- *image*: The name of the image that rbd has created.
- *user*: The RADOS user name. If not provided, the default *admin* is used.
- *keyring*: The path to the keyring file. If not provided, the default */etc/ceph/keyring* is used.
- *secretName*: The name of the authentication secrets. If provided, *secretName* overrides *keyring*. Note: see below about how to create a secret.
- *fsType*: The filesystem type (ext4, xfs, etc.) that is formatted on the device.
- *readOnly*: Whether the filesystem is used as readOnly.
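
The *image* must already exist in the pool before the pod is created. A minimal sketch using the pool and image names from these examples (`rbd` is the standard Ceph CLI; `--size` is in MB):

```console
# rbd create foo --size 1024 --pool kube
# rbd ls --pool kube
foo
```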
# Use Ceph Authentication Secret

If a Ceph authentication secret is provided, the secret should first be *base64 encoded*, and then the encoded string is placed in a secret yaml. For example, you can get Ceph user `kube`'s base64-encoded secret with the following command:

```console
# grep key /etc/ceph/ceph.client.kube.keyring |awk '{printf "%s", $NF}'|base64
QVFBTWdYaFZ3QkNlRGhBQTlubFBhRnlmVVNhdEdENGRyRldEdlE9PQ==
```

An example yaml is provided [here](secret/ceph-secret.yaml). Then post the secret through `kubectl` with the following command:

```console
# kubectl create -f examples/volumes/rbd/secret/ceph-secret.yaml
```
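
To confirm the secret was stored (the name matches the `ceph-secret` metadata in the example yaml), a quick check with output along these lines:

```console
# kubectl get secret ceph-secret
NAME          TYPE                DATA      AGE
ceph-secret   kubernetes.io/rbd   1         1m
```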
# Get started

Here are my commands:

```console
# kubectl create -f examples/volumes/rbd/rbd.json
# kubectl get pods
```

On the Kubernetes host, I got these entries in the mount output:

```console
# mount | grep kub
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/kube-image-foo type ext4 (ro,relatime,stripe=4096,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/ec2166b4-de07-11e4-aaf5-d4bed9b39058/volumes/kubernetes.io~rbd/rbdpd type ext4 (ro,relatime,stripe=4096,data=ordered)
```

If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
41
vendor/k8s.io/kubernetes/examples/volumes/rbd/rbd-with-secret.json
generated vendored Normal file
@@ -0,0 +1,41 @@
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "rbd2"
    },
    "spec": {
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                        "10.16.154.78:6789",
                        "10.16.154.82:6789",
                        "10.16.154.83:6789"
                    ],
                    "pool": "kube",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {
                        "name": "ceph-secret"
                    },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}
39
vendor/k8s.io/kubernetes/examples/volumes/rbd/rbd.json
generated vendored Normal file
@@ -0,0 +1,39 @@
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "rbd"
    },
    "spec": {
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                        "10.16.154.78:6789",
                        "10.16.154.82:6789",
                        "10.16.154.83:6789"
                    ],
                    "pool": "kube",
                    "image": "foo",
                    "user": "admin",
                    "keyring": "/etc/ceph/keyring",
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}
7
vendor/k8s.io/kubernetes/examples/volumes/rbd/secret/ceph-secret.yaml
generated vendored Normal file
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==
342
vendor/k8s.io/kubernetes/examples/volumes/vsphere/README.md
generated vendored Normal file
@@ -0,0 +1,342 @@
# vSphere Volume

- [Prerequisites](#prerequisites)
- [Examples](#examples)
  - [Volumes](#volumes)
  - [Persistent Volumes](#persistent-volumes)
  - [Storage Class](#storage-class)

## Prerequisites

- Kubernetes with vSphere Cloud Provider configured.
  For cloud provider configuration, please refer to the [vSphere getting started guide](http://kubernetes.io/docs/getting-started-guides/vsphere/).

## Examples

### Volumes

1. Create VMDK.

   First ssh into ESX and then use the following command to create a vmdk:

   ```shell
   vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
   ```
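
   If the `volumes` directory does not exist on the datastore yet, create it first; a sketch, assuming the same datastore path as above:

   ```shell
   mkdir -p /vmfs/volumes/datastore1/volumes
   ```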

2. Create Pod which uses 'myDisk.vmdk'.

   See example:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: test-vmdk
   spec:
     containers:
     - image: gcr.io/google_containers/test-webserver
       name: test-container
       volumeMounts:
       - mountPath: /test-vmdk
         name: test-volume
     volumes:
     - name: test-volume
       # This VMDK volume must already exist.
       vsphereVolume:
         volumePath: "[datastore1] volumes/myDisk"
         fsType: ext4
   ```

   [Download example](vsphere-volume-pod.yaml?raw=true)

   Creating the pod:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pod.yaml
   ```

   Verify that the pod is running:

   ```bash
   $ kubectl get pods test-vmdk
   NAME        READY     STATUS    RESTARTS   AGE
   test-vmdk   1/1       Running   0          48m
   ```
### Persistent Volumes

1. Create VMDK.

   First ssh into ESX and then use the following command to create a vmdk:

   ```shell
   vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
   ```

2. Create Persistent Volume.

   See example:

   ```yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: pv0001
   spec:
     capacity:
       storage: 2Gi
     accessModes:
       - ReadWriteOnce
     persistentVolumeReclaimPolicy: Retain
     vsphereVolume:
       volumePath: "[datastore1] volumes/myDisk"
       fsType: ext4
   ```

   [Download example](vsphere-volume-pv.yaml?raw=true)

   Creating the persistent volume:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pv.yaml
   ```

   Verifying that the persistent volume is created:

   ```bash
   $ kubectl describe pv pv0001
   Name:           pv0001
   Labels:         <none>
   Status:         Available
   Claim:
   Reclaim Policy: Retain
   Access Modes:   RWO
   Capacity:       2Gi
   Message:
   Source:
       Type:       vSphereVolume (a Persistent Disk resource in vSphere)
       VolumePath: [datastore1] volumes/myDisk
       FSType:     ext4
   No events.
   ```

3. Create Persistent Volume Claim.

   See example:

   ```yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvc0001
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 2Gi
   ```

   [Download example](vsphere-volume-pvc.yaml?raw=true)

   Creating the persistent volume claim:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvc.yaml
   ```

   Verifying that the persistent volume claim is created:

   ```bash
   $ kubectl describe pvc pvc0001
   Name:         pvc0001
   Namespace:    default
   Status:       Bound
   Volume:       pv0001
   Labels:       <none>
   Capacity:     2Gi
   Access Modes: RWO
   No events.
   ```
4. Create Pod which uses Persistent Volume Claim.

   See example:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: pvpod
   spec:
     containers:
     - name: test-container
       image: gcr.io/google_containers/test-webserver
       volumeMounts:
       - name: test-volume
         mountPath: /test-vmdk
     volumes:
     - name: test-volume
       persistentVolumeClaim:
         claimName: pvc0001
   ```

   [Download example](vsphere-volume-pvcpod.yaml?raw=true)

   Creating the pod:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcpod.yaml
   ```

   Verifying that the pod is created:

   ```bash
   $ kubectl get pod pvpod
   NAME      READY     STATUS    RESTARTS   AGE
   pvpod     1/1       Running   0          48m
   ```
### Storage Class

__Note: Here you don't need to create a vmdk; it is created for you.__

1. Create Storage Class.

   See example:

   ```yaml
   kind: StorageClass
   apiVersion: storage.k8s.io/v1beta1
   metadata:
     name: fast
   provisioner: kubernetes.io/vsphere-volume
   parameters:
     diskformat: zeroedthick
   ```

   [Download example](vsphere-volume-sc-fast.yaml?raw=true)
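
   Besides `zeroedthick`, the vSphere provisioner also accepts `thin` and `eagerzeroedthick` as `diskformat` values; a sketch of a thin-provisioned class (the class name `thin` is only an illustration):

   ```bash
   $ kubectl create -f - <<EOF
   kind: StorageClass
   apiVersion: storage.k8s.io/v1beta1
   metadata:
     name: thin
   provisioner: kubernetes.io/vsphere-volume
   parameters:
     diskformat: thin
   EOF
   ```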
   Creating the storage class:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-fast.yaml
   ```

   Verifying that the storage class is created:

   ```bash
   $ kubectl describe storageclass fast
   Name:        fast
   Annotations: <none>
   Provisioner: kubernetes.io/vsphere-volume
   Parameters:  diskformat=zeroedthick
   No events.
   ```

2. Create Persistent Volume Claim.

   See example:

   ```yaml
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pvcsc001
     annotations:
       volume.beta.kubernetes.io/storage-class: fast
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 2Gi
   ```

   [Download example](vsphere-volume-pvcsc.yaml?raw=true)

   Creating the persistent volume claim:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
   ```

   Verifying that the persistent volume claim is created:

   ```bash
   $ kubectl describe pvc pvcsc001
   Name:         pvcsc001
   Namespace:    default
   Status:       Bound
   Volume:       pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
   Labels:       <none>
   Capacity:     2Gi
   Access Modes: RWO
   No events.
   ```

   A persistent volume is automatically created and is bound to this PVC.

   Verifying that the persistent volume is created:

   ```bash
   $ kubectl describe pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
   Name:           pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
   Labels:         <none>
   Status:         Bound
   Claim:          default/pvcsc001
   Reclaim Policy: Delete
   Access Modes:   RWO
   Capacity:       2Gi
   Message:
   Source:
       Type:       vSphereVolume (a Persistent Disk resource in vSphere)
       VolumePath: [datastore1] kubevols/kubernetes-dynamic-pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d.vmdk
       FSType:     ext4
   No events.
   ```

   __Note: The VMDK is created inside the `kubevols` folder in the datastore which is mentioned in the 'vsphere' cloud provider configuration.
   The cloud provider config is created during setup of the Kubernetes cluster on vSphere.__
3. Create Pod which uses Persistent Volume Claim with storage class.

   See example:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: pvpod
   spec:
     containers:
     - name: test-container
       image: gcr.io/google_containers/test-webserver
       volumeMounts:
       - name: test-volume
         mountPath: /test-vmdk
     volumes:
     - name: test-volume
       persistentVolumeClaim:
         claimName: pvcsc001
   ```

   [Download example](vsphere-volume-pvcscpod.yaml?raw=true)

   Creating the pod:

   ```bash
   $ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
   ```

   Verifying that the pod is created:

   ```bash
   $ kubectl get pod pvpod
   NAME      READY     STATUS    RESTARTS   AGE
   pvpod     1/1       Running   0          48m
   ```


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
22
vendor/k8s.io/kubernetes/examples/volumes/vsphere/deployment.yaml
generated vendored Normal file
@@ -0,0 +1,22 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        volumeMounts:
        - name: vmfs-vmdk-storage
          mountPath: /data/
      volumes:
      - name: vmfs-vmdk-storage
        vsphereVolume:
          volumePath: "[Datastore] volumes/testdir"
          fsType: ext4
17
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-pod.yaml
generated vendored Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: "[DatastoreName] volumes/myDisk"
      fsType: ext4
13
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-pv.yaml
generated vendored Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[DatastoreName] volumes/myDisk"
    fsType: ext4
10
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-pvc.yaml
generated vendored Normal file
@@ -0,0 +1,10 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
15
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-pvcpod.yaml
generated vendored Normal file
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc0001
12
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
generated vendored Normal file
@@ -0,0 +1,12 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
15
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
generated vendored Normal file
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001
7
vendor/k8s.io/kubernetes/examples/volumes/vsphere/vsphere-volume-sc-fast.yaml
generated vendored Normal file
@@ -0,0 +1,7 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick