## Persistent Volume Provisioning

This example shows how to use dynamic persistent volume provisioning.

### Prerequisites

This example assumes that you have an understanding of Kubernetes administration and can modify the scripts that launch kube-controller-manager.

### Admin Configuration

The admin must define `StorageClass` objects that describe named "classes" of storage offered in a cluster. Different classes might map to arbitrary levels or policies determined by the admin. When configuring a `StorageClass` object for persistent volume provisioning, the admin needs to describe the type of provisioner to use and the parameters the provisioner will use when it provisions a `PersistentVolume` belonging to the class.

The name of a `StorageClass` object is significant: it is how users request a particular class, by specifying the name in their `PersistentVolumeClaim`. The `provisioner` field must be specified, as it determines which volume plugin is used for provisioning PVs. The `parameters` field contains parameters that describe volumes belonging to the storage class; different parameters may be accepted depending on the `provisioner`. For example, the value `io1` for the parameter `type` and the parameter `iopsPerGB` are specific to EBS. When a parameter is omitted, some default is used. The provisioners currently supported, and the parameters each accepts, are shown in the sections below.

#### AWS

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: "10"
```

* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details. Default: `gp2`.
* `zone`: AWS zone. If not specified, a random zone from those where the Kubernetes cluster has a node is chosen.
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the IOPS of the volume, capped at 20,000 IOPS (the maximum supported by AWS; see AWS docs).
* `encrypted`: denotes whether the EBS volume should be encrypted or not. Valid values are `true` or `false`. See the sketch below.
* `kmsKeyId`: optional. The full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but `encrypted` is true, a key is generated by AWS. See AWS docs for a valid ARN value.
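
For instance, a class for encrypted general-purpose volumes might look like the following minimal sketch (the class name `encrypted-gp2` is illustrative; add `kmsKeyId` only if a specific KMS key should be used):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: encrypted-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  # parameter values are strings, so the boolean must be quoted
  type: gp2
  encrypted: "true"
```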

#### GCE

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
```

* `type`: `pd-standard` or `pd-ssd`. Default: `pd-ssd`.
* `zone`: GCE zone. If not specified, a random zone in the same region as the controller-manager will be chosen.

#### vSphere

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: eagerzeroedthick
```

* `diskformat`: `thin`, `zeroedthick` and `eagerzeroedthick`. See vSphere docs for details. Default: `"thin"`.

#### GLUSTERFS

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
```

* `resturl`: Gluster REST service/Heketi service URL which provisions Gluster volumes on demand. The general format should be `IPaddress:Port`, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in an OpenShift/Kubernetes setup, this can have a format similar to `http://heketi-storage-project.cloudapps.mystorage.com`, where the FQDN is a resolvable Heketi service URL.
* `restauthenabled`: Gluster REST service authentication boolean that enables authentication to the REST server. If this value is 'true', `restuser` and `restuserkey` or `secretNamespace` + `secretName` have to be filled. This option is deprecated; authentication is enabled when any of `restuser`, `restuserkey`, `secretName` or `secretNamespace` is specified.
* `restuser`: Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.
* `restuserkey`: Gluster REST service/Heketi user's password which will be used for authentication to the REST server. This parameter is deprecated in favor of `secretNamespace` + `secretName`.
* `secretNamespace` + `secretName`: Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password will be used when both `secretNamespace` and `secretName` are omitted. The provided secret must have type "kubernetes.io/glusterfs". When both `restuserkey` and `secretNamespace` + `secretName` are specified, the secret will be used.
* `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs, for example: "8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.

An example of such a secret can be found in [glusterfs-provisioning-secret.yaml](glusterfs-provisioning-secret.yaml).
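
Alternatively, the same secret can be created directly with `kubectl`. This is a sketch: the secret name `heketi-secret` and namespace `default` match the `secretName`/`secretNamespace` used in the class above, `'opensesame'` is a placeholder for the real Heketi key, and it assumes the provisioner reads the password from a `key` entry, as in the referenced example secret:

```
$ kubectl create secret generic heketi-secret --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' --namespace=default
```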

* `gidMin` + `gidMax`: The minimum and maximum values of the GID range for the storage class. A unique value (GID) in this range (`gidMin`-`gidMax`) will be used for dynamically provisioned volumes. These are optional values. If not specified, the volume will be provisioned with a value between 2000 and 2147483647, which are the defaults for `gidMin` and `gidMax` respectively.

* `volumetype`: The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it's up to the provisioner to decide the volume type. For example:
  * 'Replica volume': `volumetype: replicate:3` where '3' is the replica count.
  * 'Disperse/EC volume': `volumetype: disperse:4:2` where '4' is the data count and '2' is the redundancy count.
  * 'Distribute volume': `volumetype: none`

For available volume types and their administration options, refer to the [Administration Guide](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html).

Reference: [How to configure Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)

When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service named `gluster-dynamic-<claimname>`. This dynamic endpoint and service will be deleted automatically when the persistent volume claim is deleted.
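
For example, assuming a claim named `claim1`, you can check for these generated objects with:

```
$ kubectl get endpoints gluster-dynamic-claim1
$ kubectl get service gluster-dynamic-claim1
```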

#### OpenStack Cinder

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
```

* `type`: [VolumeType](http://docs.openstack.org/admin-guide/dashboard-manage-volumes.html) created in Cinder. Default is empty.
* `availability`: Availability Zone. Default is empty.

#### Ceph RBD

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
```

* `monitors`: Ceph monitors, comma delimited. It is required.
* `adminId`: Ceph client ID that is capable of creating images in the pool. Default is "admin".
* `adminSecretName`: Secret name for `adminId`. It is required. The provided secret must have type "kubernetes.io/rbd".
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
* `pool`: Ceph RBD pool. Default is "rbd".
* `userId`: Ceph client ID that is used to map the RBD image. Default is the same as `adminId`.
* `userSecretName`: The name of the Ceph Secret for `userId` to map the RBD image. It must exist in the same namespace as the PVCs. It is required.

#### Quobyte

<!-- BEGIN MUNGE: EXAMPLE quobyte/quobyte-storage-class.yaml -->

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/quobyte
parameters:
  quobyteAPIServer: "http://138.68.74.142:7860"
  registry: "138.68.74.142:7861"
  adminSecretName: "quobyte-admin-secret"
  adminSecretNamespace: "kube-system"
  user: "root"
  group: "root"
  quobyteConfig: "BASE"
  quobyteTenant: "DEFAULT"
```

[Download example](quobyte/quobyte-storage-class.yaml?raw=true)
<!-- END MUNGE: EXAMPLE quobyte/quobyte-storage-class.yaml -->

* **quobyteAPIServer**: API server of Quobyte in the format `http(s)://api-server:7860`.
* **registry**: Quobyte registry to use to mount the volume. You can specify the registry as a `<host>:<port>` pair, or, to specify multiple registries, separate them with commas, e.g. `<host1>:<port>,<host2>:<port>,<host3>:<port>`. The host can be an IP address, or, if you have a working DNS, you can also provide DNS names.
* **adminSecretName**: secret that holds information about the Quobyte user and the password to authenticate against the API server. The provided secret must have type "kubernetes.io/quobyte".
* **adminSecretNamespace**: The namespace for **adminSecretName**. Default is `default`.
* **user**: maps all access to this user. Default is `root`.
* **group**: maps all access to this group. Default is `nfsnobody`.
* **quobyteConfig**: use the specified configuration to create the volume. You can create a new configuration or modify an existing one with the Web console or the quobyte CLI. Default is `BASE`.
* **quobyteTenant**: use the specified tenant ID to create/delete the volume. This Quobyte tenant has to be already present in Quobyte. Default is `DEFAULT`.

First create the Quobyte admin's Secret in the system namespace. Here the Secret is created in `kube-system`:

```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-admin-secret.yaml --namespace=kube-system
```
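
As a sketch, an equivalent Secret could also be created directly with `kubectl`. The secret name and namespace here match the class above, but the `user`/`password` key names and the `'admin'`/`'quobyte'` values are assumptions for illustration; check `quobyte-admin-secret.yaml` for the exact keys your installation expects:

```
$ kubectl create secret generic quobyte-admin-secret --type="kubernetes.io/quobyte" --from-literal=user='admin' --from-literal=password='quobyte' --namespace=kube-system
```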

Then create the Quobyte storage class:

```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-storage-class.yaml
```

Now create a PVC:

```
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
```

Check the created PVC:

```
$ kubectl describe pvc
Name:           claim1
Namespace:      default
Status:         Bound
Volume:         pvc-bdb82652-694a-11e6-b811-080027242396
Labels:         <none>
Capacity:       3Gi
Access Modes:   RWO
No events.

$ kubectl describe pv
Name:            pvc-bdb82652-694a-11e6-b811-080027242396
Labels:          <none>
Status:          Bound
Claim:           default/claim1
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        3Gi
Message:
Source:
    Type:        Quobyte (a Quobyte mount on the host that shares a pod's lifetime)
    Registry:    138.68.79.14:7861
    Volume:      kubernetes-dynamic-pvc-bdb97c58-694a-11e6-91b6-080027242396
    ReadOnly:    false
No events.
```

Create a Pod to use the PVC:

```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/example-pod.yaml
```

#### Azure Disk

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name
```

* `skuName`: Azure storage account SKU tier. Default is empty.
* `location`: Azure storage account location. Default is empty.
* `storageAccount`: Azure storage account name. If a storage account is not provided, all storage accounts associated with the resource group are searched to find one that matches `skuName` and `location`. If a storage account is provided, `skuName` and `location` are ignored.

### User provisioning requests

Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`. The annotation `volume.beta.kubernetes.io/storage-class` is used to access this feature. It is required that this value matches the name of a `StorageClass` configured by the administrator. In the future, the storage class may remain in an annotation or become a field on the claim itself.

```json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "slow"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    }
  }
}
```
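
The same claim can also be written in YAML; this is an equivalent sketch of `claim1` requesting the `slow` class:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```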

### Sample output

#### GCE

This example uses GCE but any provisioner would follow the same flow.

First we note there are no Persistent Volumes in the cluster. After creating a storage class and a claim including that storage class, we see a new PV is created and automatically bound to the claim requesting storage.

```
$ kubectl get pv

$ kubectl create -f examples/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created

$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM            REASON    AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           Bound     default/claim1             4s

$ kubectl get pvc
NAME      LABELS    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           7s

# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted

# the volume is deleted in response to the release of its claim
$ kubectl get pv

```

#### Ceph RBD

This section will guide you through configuring and using the Ceph RBD provisioner.

##### Pre-requisites

For this to work you must have a functional Ceph cluster, and the `rbd` command line utility must be installed on any host/container that `kube-controller-manager` or `kubelet` is running on.
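
On most Linux distributions the `rbd` utility ships in the `ceph-common` package, so a typical install (adjust for your package manager) looks like:

```
# Debian/Ubuntu
$ apt-get install -y ceph-common
# RHEL/CentOS/Fedora
$ yum install -y ceph-common
```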

##### Configuration

First we must identify the Ceph client admin key. This is usually found in `/etc/ceph/ceph.client.admin.keyring` on your Ceph cluster nodes. The file will look something like this:

```
[client.admin]
    key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
    auid = 0
    caps mds = "allow"
    caps mon = "allow *"
    caps osd = "allow *"
```

From the key value, we will create a secret. We must create the Ceph admin Secret in the namespace defined in our `StorageClass`. In this example we've set the namespace to `kube-system`.

```
$ kubectl create secret generic ceph-secret-admin --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' --namespace=kube-system --type=kubernetes.io/rbd
```

Now modify `examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml` to reflect your environment, particularly the `monitors` field. We are now ready to create our RBD Storage Class:

```
$ kubectl create -f examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml
```

The kube-controller-manager is now able to provision storage; however, we still need to be able to map the RBD volume to a node. Mapping should be done with a non-privileged key; if you have existing users, you can get all keys by running `ceph auth list` on your Ceph cluster with the admin key. For this example we will create a new user and pool.

```
$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
[client.kube]
    key = AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
```

This key will be made into a secret, just like the admin secret. However, this user secret will need to be created in every namespace where you intend to consume RBD volumes provisioned in our example storage class. Let's create a namespace called `myns`, and create the user secret in that namespace.

```
kubectl create namespace myns
kubectl create secret generic ceph-secret-user --from-literal=key='AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==' --namespace=myns --type=kubernetes.io/rbd
```

You are now ready to provision and use RBD storage.

##### Usage

With the storage class configured, let's create a PVC in our example namespace, `myns`:

```
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json --namespace=myns
```

Eventually the PVC creation will result in a matching PV and RBD volume:

```
$ kubectl describe pvc --namespace=myns
Name:           claim1
Namespace:      myns
Status:         Bound
Volume:         pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:         <none>
Capacity:       3Gi
Access Modes:   RWO
No events.

$ kubectl describe pv
Name:            pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:          <none>
Status:          Bound
Claim:           myns/claim1
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        3Gi
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [127.0.0.1:6789]
    RBDImage:      kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
    FSType:
    RBDPool:       kube
    RadosUser:     kube
    Keyring:       /etc/ceph/keyring
    SecretRef:     &{ceph-secret-user}
    ReadOnly:      false
No events.
```

With our storage provisioned, we can now create a Pod to use the PVC:

```
$ kubectl create -f examples/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
```

Now our pod has an RBD mount!

```
$ export PODNAME=`kubectl get pod --selector='role=server' --namespace=myns --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
$ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
/dev/rbd1       2.9G  4.5M  2.8G   1% /var/lib/www/html
```

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/persistent-volume-provisioning/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->