
Commit 7bcd9ee

Bump Elasticsearch to v6.3.2
Signed-off-by: Paulo Pires <[email protected]>
1 parent 6b8050f commit 7bcd9ee

File tree

6 files changed (+35, -34 lines)
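Since this is a pure version bump, every change should be the same one-line image or text edit. A quick way to confirm no file still references the old release is a repository-wide search (a sketch, assuming it is run from the repository root with this commit checked out):

```shell
# Search the tree for leftover references to the old version;
# no output means the bump is complete.
git grep -n "6.3.0" -- README.md '*.yaml'
```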

README.md

Lines changed: 30 additions & 29 deletions
```diff
@@ -1,5 +1,5 @@
 # kubernetes-elasticsearch-cluster
-Elasticsearch (6.3.0) cluster on top of Kubernetes made easy.
+Elasticsearch (6.3.2) cluster on top of Kubernetes made easy.
 
 ### Table of Contents
 
```
```diff
@@ -52,9 +52,8 @@ Given this, I'm going to demonstrate how to provision a production grade scenari
 
 ## Pre-requisites
 
-* Kubernetes 1.9.x (tested with v1.10.4 on top of [Vagrant + CoreOS](https://github.com/pires/kubernetes-vagrant-coreos-cluster)), thas's because curator is a CronJob object which comes from `batch/v2alpha1`, to enable it, just add
-`--runtime-config=batch/v2alpha1=true` into your kube-apiserver options.
-* `kubectl` configured to access the cluster master API Server
+* Kubernetes 1.11.x (tested with v1.11.2 on top of [Vagrant + CoreOS](https://github.com/pires/kubernetes-vagrant-coreos-cluster)).
+* `kubectl` configured to access the Kubernetes API.
 
 <a id="build-images">
 
```
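Both of the bumped pre-requisites can be verified from a workstation before applying any manifests (a minimal sketch; the v1.11.x version is the one the diff above claims was tested):

```shell
# Client and server should both report a v1.11.x GitVersion
kubectl version --short
# Confirms kubectl is configured to reach the Kubernetes API
kubectl cluster-info
```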

````diff
@@ -81,26 +80,27 @@ kubectl rollout status -f es-data.yaml
 ```
 
 Let's check if everything is working properly:
+
 ```shell
 kubectl get svc,deployment,pods -l component=elasticsearch
-NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
-service/elasticsearch             ClusterIP   10.100.32.137   <none>        9200/TCP   1h
-service/elasticsearch-discovery   ClusterIP   None            <none>        9300/TCP   1h
-service/elasticsearch-ingest      ClusterIP   10.100.31.141   <none>        9200/TCP   1h
+NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+service/elasticsearch             ClusterIP   10.100.243.196   <none>        9200/TCP   3m
+service/elasticsearch-discovery   ClusterIP   None             <none>        9300/TCP   3m
+service/elasticsearch-ingest     ClusterIP   10.100.76.74      <none>        9200/TCP   2m
 
 NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
-deployment.extensions/es-data     2         2         2            2           4m
-deployment.extensions/es-ingest   2         2         2            2           7m
-deployment.extensions/es-master   3         3         3            3           7m
-
-NAME                          READY   STATUS    RESTARTS   AGE
-pod/es-data-5c5969967-wb2b8      1/1     Running   0          4m
-pod/es-data-5c5969967-wrrxk      1/1     Running   0          4m
-pod/es-ingest-548b65475-6s7hg    1/1     Running   0          7m
-pod/es-ingest-548b65475-whvqx    1/1     Running   0          7m
-pod/es-master-879576496-dhnlp    1/1     Running   0          7m
-pod/es-master-879576496-jjlvf    1/1     Running   0          7m
-pod/es-master-879576496-sgwxf    1/1     Running   0          7m
+deployment.extensions/es-data     2         2         2            2           1m
+deployment.extensions/es-ingest   2         2         2            2           2m
+deployment.extensions/es-master   3         3         3            3           3m
+
+NAME                           READY   STATUS    RESTARTS   AGE
+pod/es-data-56f8ff8c97-642bq     1/1     Running   0          1m
+pod/es-data-56f8ff8c97-h6hpc     1/1     Running   0          1m
+pod/es-ingest-6ddd5fc689-b4s94   1/1     Running   0          2m
+pod/es-ingest-6ddd5fc689-d8rtj   1/1     Running   0          2m
+pod/es-master-68bf8f86c4-bsfrx   1/1     Running   0          3m
+pod/es-master-68bf8f86c4-g8nph   1/1     Running   0          3m
+pod/es-master-68bf8f86c4-q5khn   1/1     Running   0          3m
 ```
 
 As we can assert, the cluster seems to be up and running. Easy, wasn't it?
````
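If the rollout is still in progress when the snapshot above is taken, the same labels can be watched until every pod reports `Running` (a sketch reusing the `component=elasticsearch` label from the command in the diff):

```shell
# Stream pod status changes; interrupt once all seven pods are Running
kubectl get pods -l component=elasticsearch -w
```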
````diff
@@ -113,29 +113,29 @@ As we can assert, the cluster seems to be up and running. Easy, wasn't it?
 
 ```shell
 kubectl get svc elasticsearch
-NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
-elasticsearch   ClusterIP   10.100.32.137   <none>        9200/TCP   1h
+NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+elasticsearch   ClusterIP   10.100.243.196   <none>        9200/TCP   3m
 ```
 
 From any host on the Kubernetes cluster (that's running `kube-proxy` or similar), run:
 
 ```shell
-curl http://10.100.32.137:9200
+curl http://10.100.243.196:9200
 ```
 
 One should see something similar to the following:
 
 ```json
 {
-  "name" : "es-data-5c5969967-wb2b8",
+  "name" : "es-data-56f8ff8c97-642bq",
   "cluster_name" : "myesdb",
-  "cluster_uuid" : "qSps-b9dRI2ngGHBguJ44Q",
+  "cluster_uuid" : "RkRkTl26TDOE7o0FhCcW_g",
   "version" : {
-    "number" : "6.3.0",
+    "number" : "6.3.2",
     "build_flavor" : "default",
     "build_type" : "tar",
-    "build_hash" : "424e937",
-    "build_date" : "2018-06-11T23:38:03.357887Z",
+    "build_hash" : "053779d",
+    "build_date" : "2018-07-20T05:20:23.451332Z",
     "build_snapshot" : false,
     "lucene_version" : "7.3.1",
     "minimum_wire_compatibility_version" : "5.6.0",
````
````diff
@@ -148,7 +148,7 @@ One should see something similar to the following:
 Or if one wants to see cluster information:
 
 ```shell
-curl http://10.100.32.137:9200/_cluster/health?pretty
+curl http://10.100.243.196:9200/_cluster/health?pretty
 ```
 
 One should see something similar to the following:
````
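After the bump, a narrower check that only asserts the upgraded server version can be handy (a sketch; the service IP comes from the diff above and will differ on every cluster):

```shell
# Expect:     "number" : "6.3.2",
curl -s http://10.100.243.196:9200 | grep '"number"'
```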
````diff
@@ -184,6 +184,7 @@ It is then **highly recommended**, in the context of the solution described in t
 in order to guarantee that two data pods will never run on the same node.
 
 Here's an example:
+
 ```yaml
 spec:
   affinity:
````
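The hunk above is cut off right after `affinity:`. For reference, a typical anti-affinity block for this setup looks like the following (a sketch, assuming the `component=elasticsearch` and `role=data` labels that this repository's data manifests use):

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: component
            operator: In
            values:
            - elasticsearch
          - key: role
            operator: In
            values:
            - data
        # At most one data pod per node
        topologyKey: kubernetes.io/hostname
```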

es-data.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ spec:
           privileged: true
       containers:
       - name: es-data
-        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
+        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
         env:
         - name: NAMESPACE
           valueFrom:
```
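Rolling the new image out to an existing cluster is a matter of re-applying the manifest and waiting for the Deployment to converge (a sketch; the `es-data` name matches the `kubectl get` output in the README diff above):

```shell
kubectl apply -f es-data.yaml
# Blocks until all replicas run the 6.3.2 image
kubectl rollout status deployment/es-data
```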

es-ingest.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ spec:
           privileged: true
       containers:
       - name: es-ingest
-        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
+        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
         env:
         - name: NAMESPACE
           valueFrom:
```

es-master.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ spec:
           privileged: true
       containers:
       - name: es-master
-        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
+        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
         env:
         - name: NAMESPACE
           valueFrom:
```

kibana.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@ spec:
     spec:
       containers:
       - name: kibana
-        image: docker.elastic.co/kibana/kibana-oss:6.3.0
+        image: docker.elastic.co/kibana/kibana-oss:6.3.2
         env:
         - name: CLUSTER_NAME
           value: myesdb
```
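Kibana is expected to run the same version as the Elasticsearch cluster it talks to, hence the lockstep bump here. Once the pod restarts, its health can be checked through Kibana's status API (a sketch; the `kibana` service name and in-cluster access are assumptions):

```shell
# From a pod inside the cluster; "kibana" is an assumed service name
curl -s http://kibana:5601/api/status
```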

stateful/es-data-stateful.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -25,7 +25,7 @@ spec:
           privileged: true
       containers:
       - name: es-data
-        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
+        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
         env:
         - name: NAMESPACE
           valueFrom:
```
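For the StatefulSet variant, the same bump rolls out one pod at a time in reverse ordinal order, so the wait looks slightly different (a sketch; `es-data` as the StatefulSet name is an assumption based on the container name in this file):

```shell
kubectl apply -f stateful/es-data-stateful.yaml
# Waits for the ordered, one-pod-at-a-time StatefulSet rollout
kubectl rollout status statefulset/es-data
```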
