Switch to github.com/golang/dep for vendoring
Signed-off-by: Mrunal Patel <mrunalp@gmail.com>
parent d6ab91be27
commit 8e5b17cf13
15431 changed files with 3971413 additions and 8881 deletions
vendor/k8s.io/kubernetes/examples/elasticsearch/README.md (163 lines, generated, vendored, normal file)
@@ -0,0 +1,163 @@
# Elasticsearch for Kubernetes

Kubernetes makes it easy for anyone to build and scale [Elasticsearch](http://www.elasticsearch.org/) clusters. Here, you'll find out how to do so.
Current Elasticsearch version is `1.7.1`.

[A more robust example that follows Elasticsearch best practices of separating node concerns is also available](production_cluster/README.md).

<img src="http://kubernetes.io/kubernetes/img/warning.png" alt="WARNING" width="25" height="25"> Current pod descriptors use an `emptyDir` for storing data in each data node container. This is done for the sake of simplicity and [should be adapted according to your storage needs](../../docs/design/persistent-storage.md).
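As a minimal sketch of one possible adaptation (the claim name `es-data` is an assumption; the claim itself must be created separately), the `emptyDir` volume in `es-rc.yaml` could be swapped for a claim-backed one:

```yaml
# Hypothetical alternative to the emptyDir volume in es-rc.yaml.
# Assumes a PersistentVolumeClaim named "es-data" already exists in the namespace.
volumes:
- name: storage
  persistentVolumeClaim:
    claimName: es-data
```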
## Docker image

The [pre-built image](https://github.com/pires/docker-elasticsearch-kubernetes) used in this example will not be supported. Feel free to fork it to fit your own needs, but keep in mind that you will need to change the Kubernetes descriptors accordingly.
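If you do fork it, the descriptor field to change is the container `image` in `es-rc.yaml`; the repository and tag below are hypothetical placeholders:

```yaml
# es-rc.yaml (excerpt): point the pod at your own build instead of the default image.
containers:
- name: es
  image: quay.io/<your-user>/docker-elasticsearch-kubernetes:<your-tag>
```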
## Deploy

Let's kickstart our cluster with 1 instance of Elasticsearch.

```
kubectl create -f examples/elasticsearch/service-account.yaml
kubectl create -f examples/elasticsearch/es-svc.yaml
kubectl create -f examples/elasticsearch/es-rc.yaml
```

Let's see if it worked:

```
$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
es-kfymw         1/1       Running   0          7m
kube-dns-p3v1u   3/3       Running   0          19m
```

```
$ kubectl logs es-kfymw
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] version[1.7.1], pid[7], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] initializing ...
[2015-08-30 10:01:32,110][INFO ][plugins ] [Hammerhead] loaded [cloud-kubernetes], sites []
[2015-08-30 10:01:32,153][INFO ][env ] [Hammerhead] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-30 10:01:37,188][INFO ][node ] [Hammerhead] initialized
[2015-08-30 10:01:37,189][INFO ][node ] [Hammerhead] starting ...
[2015-08-30 10:01:37,499][INFO ][transport ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.48.2:9300]}
[2015-08-30 10:01:37,550][INFO ][discovery ] [Hammerhead] myesdb/n2-6uu_UT3W5XNrjyqBPiA
[2015-08-30 10:01:43,966][INFO ][cluster.service ] [Hammerhead] new_master [Hammerhead][n2-6uu_UT3W5XNrjyqBPiA][es-kfymw][inet[/10.244.48.2:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-30 10:01:44,010][INFO ][http ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.244.48.2:9200]}
[2015-08-30 10:01:44,011][INFO ][node ] [Hammerhead] started
[2015-08-30 10:01:44,042][INFO ][gateway ] [Hammerhead] recovered [0] indices into cluster_state
```

So we have a 1-node Elasticsearch cluster ready to handle some work.
## Scale

Scaling is as easy as:

```
kubectl scale --replicas=3 rc es
```

Did it work?

```
$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
es-78e0s         1/1       Running   0          8m
es-kfymw         1/1       Running   0          17m
es-rjmer         1/1       Running   0          8m
kube-dns-p3v1u   3/3       Running   0          30m
```

Let's take a look at logs:

```
$ kubectl logs es-kfymw
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] version[1.7.1], pid[7], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] initializing ...
[2015-08-30 10:01:32,110][INFO ][plugins ] [Hammerhead] loaded [cloud-kubernetes], sites []
[2015-08-30 10:01:32,153][INFO ][env ] [Hammerhead] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-30 10:01:37,188][INFO ][node ] [Hammerhead] initialized
[2015-08-30 10:01:37,189][INFO ][node ] [Hammerhead] starting ...
[2015-08-30 10:01:37,499][INFO ][transport ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.48.2:9300]}
[2015-08-30 10:01:37,550][INFO ][discovery ] [Hammerhead] myesdb/n2-6uu_UT3W5XNrjyqBPiA
[2015-08-30 10:01:43,966][INFO ][cluster.service ] [Hammerhead] new_master [Hammerhead][n2-6uu_UT3W5XNrjyqBPiA][es-kfymw][inet[/10.244.48.2:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-30 10:01:44,010][INFO ][http ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.244.48.2:9200]}
[2015-08-30 10:01:44,011][INFO ][node ] [Hammerhead] started
[2015-08-30 10:01:44,042][INFO ][gateway ] [Hammerhead] recovered [0] indices into cluster_state
[2015-08-30 10:08:02,517][INFO ][cluster.service ] [Hammerhead] added {[Tenpin][2gv5MiwhRiOSsrTOF3DhuA][es-78e0s][inet[/10.244.54.4:9300]]{master=true},}, reason: zen-disco-receive(join from node[[Tenpin][2gv5MiwhRiOSsrTOF3DhuA][es-78e0s][inet[/10.244.54.4:9300]]{master=true}])
[2015-08-30 10:10:10,645][INFO ][cluster.service ] [Hammerhead] added {[Evilhawk][ziTq2PzYRJys43rNL2tbyg][es-rjmer][inet[/10.244.33.3:9300]]{master=true},}, reason: zen-disco-receive(join from node[[Evilhawk][ziTq2PzYRJys43rNL2tbyg][es-rjmer][inet[/10.244.33.3:9300]]{master=true}])
```

So we have a 3-node Elasticsearch cluster ready to handle more work.
## Access the service

*Don't forget* that services in Kubernetes are only accessible from containers in the cluster. For different behavior you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example service descriptor, its usage is out of scope of this document, for now.
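The relevant setting is already present in `es-svc.yaml` from this example; on a cloud provider that supports it, this is what requests an external load-balancer:

```yaml
# es-svc.yaml (excerpt): the service type that triggers external load-balancer creation.
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
```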
```
$ kubectl get service elasticsearch
NAME            LABELS                    SELECTOR                  IP(S)           PORT(S)
elasticsearch   component=elasticsearch   component=elasticsearch   10.100.108.94   9200/TCP
                                                                                    9300/TCP
```

From any host on your cluster (that's running `kube-proxy`), run:

```
$ curl 10.100.108.94:9200
```

You should see something similar to the following:

```json
{
  "status" : 200,
  "name" : "Hammerhead",
  "cluster_name" : "myesdb",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```

Or if you want to check cluster information:

```
curl 10.100.108.94:9200/_cluster/health?pretty
```

You should see something similar to the following:

```json
{
  "cluster_name" : "myesdb",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
```
vendor/k8s.io/kubernetes/examples/elasticsearch/es-rc.yaml (51 lines, generated, vendored, normal file)
@@ -0,0 +1,51 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: es
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: "DISCOVERY_SERVICE"
          value: "elasticsearch"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        emptyDir: {}
vendor/k8s.io/kubernetes/examples/elasticsearch/es-svc.yaml (17 lines, generated, vendored, normal file)
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/README.md (189 lines, generated, vendored, normal file)
@@ -0,0 +1,189 @@
# Elasticsearch for Kubernetes

Kubernetes makes it easy for anyone to build and scale [Elasticsearch](http://www.elasticsearch.org/) clusters. Here, you'll find out how to do so.
Current Elasticsearch version is `1.7.1`.

Before we start, one needs to know that Elasticsearch best practices recommend separating nodes into three roles:
* `Master` nodes - intended for clustering management only, no data, no HTTP API
* `Client` nodes - intended for client usage, no data, with HTTP API
* `Data` nodes - intended for storing and indexing your data, no HTTP API

This is enforced throughout this document (see the excerpt below).
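Concretely, the role split comes from the environment variables each replication controller passes to the image; for example, the master descriptor (`es-master-rc.yaml`, included later in this commit) sets:

```yaml
# production_cluster/es-master-rc.yaml (excerpt): master-only node, no data, no HTTP API.
env:
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "false"
- name: HTTP_ENABLE
  value: "false"
```

The client and data descriptors adjust these flags accordingly.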
<img src="http://kubernetes.io/kubernetes/img/warning.png" alt="WARNING" width="25" height="25"> Current pod descriptors use an `emptyDir` for storing data in each data node container. This is done for the sake of simplicity and [should be adapted according to your storage needs](../../../docs/design/persistent-storage.md).

## Docker image

This example uses [this pre-built image](https://github.com/pires/docker-elasticsearch-kubernetes). Feel free to fork and update it to fit your own needs, but keep in mind that you will need to change the Kubernetes descriptors accordingly.

## Deploy

```
kubectl create -f examples/elasticsearch/production_cluster/service-account.yaml
kubectl create -f examples/elasticsearch/production_cluster/es-discovery-svc.yaml
kubectl create -f examples/elasticsearch/production_cluster/es-svc.yaml
kubectl create -f examples/elasticsearch/production_cluster/es-master-rc.yaml
```

Wait until `es-master` is provisioned, and then run:

```
kubectl create -f examples/elasticsearch/production_cluster/es-client-rc.yaml
```

Wait until `es-client` is provisioned, and then run:

```
kubectl create -f examples/elasticsearch/production_cluster/es-data-rc.yaml
```

Wait until `es-data` is provisioned.
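As an optional check (not part of the original walkthrough), the `component` and `role` labels set by the descriptors make it easy to watch each group separately:

```
kubectl get pods -l component=elasticsearch,role=master
kubectl get pods -l component=elasticsearch,role=client
kubectl get pods -l component=elasticsearch,role=data
```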
Now, I leave it up to you how to validate the cluster, but a first step is to wait for the containers to be in the `RUNNING` state and check the Elasticsearch master logs:

```
$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-2ep9o   1/1       Running   0          2m
es-data-r9tgv     1/1       Running   0          1m
es-master-vxl6c   1/1       Running   0          6m
```

```
$ kubectl logs es-master-vxl6c
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-21 10:58:51,324][INFO ][node ] [Arc] version[1.7.1], pid[8], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-21 10:58:51,328][INFO ][node ] [Arc] initializing ...
[2015-08-21 10:58:51,542][INFO ][plugins ] [Arc] loaded [cloud-kubernetes], sites []
[2015-08-21 10:58:51,624][INFO ][env ] [Arc] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] initialized
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] starting ...
[2015-08-21 10:58:57,782][INFO ][transport ] [Arc] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.15.2:9300]}
[2015-08-21 10:58:57,847][INFO ][discovery ] [Arc] myesdb/-x16XFUzTCC8xYqWoeEOYQ
[2015-08-21 10:59:05,167][INFO ][cluster.service ] [Arc] new_master [Arc][-x16XFUzTCC8xYqWoeEOYQ][es-master-vxl6c][inet[/10.244.15.2:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-21 10:59:05,202][INFO ][node ] [Arc] started
[2015-08-21 10:59:05,238][INFO ][gateway ] [Arc] recovered [0] indices into cluster_state
[2015-08-21 11:02:28,797][INFO ][cluster.service ] [Arc] added {[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false}])
[2015-08-21 11:03:16,822][INFO ][cluster.service ] [Arc] added {[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false}])
```

As you can see, the cluster is up and running. Easy, wasn't it?
## Scale

Scaling each type of node to handle your cluster is as easy as:

```
kubectl scale --replicas=3 rc es-master
kubectl scale --replicas=2 rc es-client
kubectl scale --replicas=2 rc es-data
```

Did it work?

```
$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-2ep9o   1/1       Running   0          4m
es-client-ye5s1   1/1       Running   0          50s
es-data-8az22     1/1       Running   0          47s
es-data-r9tgv     1/1       Running   0          3m
es-master-57h7k   1/1       Running   0          52s
es-master-kuwse   1/1       Running   0          52s
es-master-vxl6c   1/1       Running   0          8m
```

Let's take another look at the Elasticsearch master logs:
```
$ kubectl logs es-master-vxl6c
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-21 10:58:51,324][INFO ][node ] [Arc] version[1.7.1], pid[8], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-21 10:58:51,328][INFO ][node ] [Arc] initializing ...
[2015-08-21 10:58:51,542][INFO ][plugins ] [Arc] loaded [cloud-kubernetes], sites []
[2015-08-21 10:58:51,624][INFO ][env ] [Arc] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] initialized
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] starting ...
[2015-08-21 10:58:57,782][INFO ][transport ] [Arc] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.15.2:9300]}
[2015-08-21 10:58:57,847][INFO ][discovery ] [Arc] myesdb/-x16XFUzTCC8xYqWoeEOYQ
[2015-08-21 10:59:05,167][INFO ][cluster.service ] [Arc] new_master [Arc][-x16XFUzTCC8xYqWoeEOYQ][es-master-vxl6c][inet[/10.244.15.2:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-21 10:59:05,202][INFO ][node ] [Arc] started
[2015-08-21 10:59:05,238][INFO ][gateway ] [Arc] recovered [0] indices into cluster_state
[2015-08-21 11:02:28,797][INFO ][cluster.service ] [Arc] added {[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false}])
[2015-08-21 11:03:16,822][INFO ][cluster.service ] [Arc] added {[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false}])
[2015-08-21 11:04:40,781][INFO ][cluster.service ] [Arc] added {[Erik Josten][QUJlahfLTi-MsxzM6_Da0g][es-master-kuwse][inet[/10.244.59.5:9300]]{data=false, master=true},}, reason: zen-disco-receive(join from node[[Erik Josten][QUJlahfLTi-MsxzM6_Da0g][es-master-kuwse][inet[/10.244.59.5:9300]]{data=false, master=true}])
[2015-08-21 11:04:41,076][INFO ][cluster.service ] [Arc] added {[Power Princess][V4qnR-6jQOS5ovXQsPgo7g][es-master-57h7k][inet[/10.244.53.3:9300]]{data=false, master=true},}, reason: zen-disco-receive(join from node[[Power Princess][V4qnR-6jQOS5ovXQsPgo7g][es-master-57h7k][inet[/10.244.53.3:9300]]{data=false, master=true}])
[2015-08-21 11:04:53,966][INFO ][cluster.service ] [Arc] added {[Cagliostro][Wpfx5fkBRiG2qCEWd8laaQ][es-client-ye5s1][inet[/10.244.15.3:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Cagliostro][Wpfx5fkBRiG2qCEWd8laaQ][es-client-ye5s1][inet[/10.244.15.3:9300]]{data=false, master=false}])
[2015-08-21 11:04:56,803][INFO ][cluster.service ] [Arc] added {[Thog][vkdEtX3ESfWmhXXf-Wi0_Q][es-data-8az22][inet[/10.244.15.4:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Thog][vkdEtX3ESfWmhXXf-Wi0_Q][es-data-8az22][inet[/10.244.15.4:9300]]{master=false}])
```

## Access the service

*Don't forget* that services in Kubernetes are only accessible from containers in the cluster. For different behavior you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example service descriptor, its usage is out of scope of this document, for now.

```
$ kubectl get service elasticsearch
NAME            LABELS                                SELECTOR                              IP(S)          PORT(S)
elasticsearch   component=elasticsearch,role=client   component=elasticsearch,role=client   10.100.134.2   9200/TCP
```

From any host on your cluster (that's running `kube-proxy`), run:

```
curl http://10.100.134.2:9200
```

You should see something similar to the following:

```json
{
  "status" : 200,
  "name" : "Cagliostro",
  "cluster_name" : "myesdb",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```

Or if you want to check cluster information:

```
curl http://10.100.134.2:9200/_cluster/health?pretty
```

You should see something similar to the following:

```json
{
  "cluster_name" : "myesdb",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
```
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/es-client-rc.yaml (51 lines, generated, vendored, normal file)
@@ -0,0 +1,51 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: es-client
  labels:
    component: elasticsearch
    role: client
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: client
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-client
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        emptyDir: {}
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/es-data-rc.yaml (46 lines, generated, vendored, normal file)
@@ -0,0 +1,46 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-data
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        emptyDir: {}
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/es-discovery-svc.yaml (15 lines, generated, vendored, normal file)
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/es-master-rc.yaml (48 lines, generated, vendored, normal file)
@@ -0,0 +1,48 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: es-master
  labels:
    component: elasticsearch
    role: master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-master
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        emptyDir: {}
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/es-svc.yaml (16 lines, generated, vendored, normal file)
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: client
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
    role: client
  ports:
  - name: http
    port: 9200
    protocol: TCP
vendor/k8s.io/kubernetes/examples/elasticsearch/production_cluster/service-account.yaml (4 lines, generated, vendored, normal file)
@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
vendor/k8s.io/kubernetes/examples/elasticsearch/service-account.yaml (4 lines, generated, vendored, normal file)
@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch