Improve Kibana deployment with ConfigMap
guessi authored and pires committed Aug 21, 2018
1 parent 0fad6fb commit 2f432d3
Showing 3 changed files with 37 additions and 6 deletions.
19 changes: 13 additions & 6 deletions README.md
@@ -172,6 +172,7 @@ One should see something similar to the following:
"active_shards_percent_as_number" : 100.0
}
```

<a id="pod-anti-affinity">

## Pod anti-affinity
@@ -286,19 +287,20 @@ kubectl delete configmap curator-config

## Kibana

**WARNING:** The Kibana section is maintained by someone else in the community and may not be up-to-date with this repo.

### Deploy

If Kibana defaults are not enough, one may want to customize `kibana.yml` through a `ConfigMap`.
Please refer to [Configuring Kibana](https://www.elastic.co/guide/en/kibana/current/settings.html) for all available attributes.

```shell
kubectl create -f kibana-cm.yaml
kubectl create -f kibana-svc.yaml
kubectl create -f kibana.yaml
```
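
A quick sanity check that the resources above were created (names taken from the manifests in this commit):

```shell
kubectl get configmap kibana-config
kubectl get service kibana
```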

Kibana will become available through service `kibana`, and one will be able to access it from within the cluster, or proxy it through the Kubernetes API as follows:

```shell
curl https://<API_SERVER_URL>/api/v1/namespaces/default/services/kibana:http/proxy
```
@@ -310,17 +312,20 @@ In the case one proceeds to do so, one must change the environment variable `SER
## FAQ

### Why does `NUMBER_OF_MASTERS` differ from number of master-replicas?

The default value for this environment variable is 2, meaning a cluster will need a minimum of 2 master nodes to operate. If a cluster has 3 masters and one dies, the cluster still works. Minimum master nodes are usually `n/2 + 1`, where `n` is the number of master nodes in a cluster. If a cluster has 5 master nodes, one should have a minimum of 3; with fewer than that the cluster _stops_. If one scales the number of masters, make sure to update the minimum number of master nodes through the Elasticsearch API, as setting the environment variable only works at cluster setup. More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
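
For example (a sketch, assuming the in-cluster service name `elasticsearch` from this repository's manifests), after scaling to 5 master nodes one would raise the quorum to 3 at runtime:

```shell
# minimum_master_nodes is a dynamic cluster setting, so it can be changed
# without restarting nodes; "persistent" survives full cluster restarts.
curl -XPUT 'http://elasticsearch:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"discovery.zen.minimum_master_nodes": 3}}'
```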


### How can I customize `elasticsearch.yaml`?

Read a different config file by setting the env var `ES_PATH_CONF=/path/to/my/config/` [(see the Elasticsearch docs for more)](https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html#config-files-location). Another option would be to build one's own image from [this repository](https://github.com/pires/docker-elasticsearch-kubernetes).
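
As a sketch of the first option (the `es-config` name and mount path are illustrative, not part of this repo), one could mount a `ConfigMap` as a custom config directory and point `ES_PATH_CONF` at it. Note that Elasticsearch expects the complete config set in that directory (`elasticsearch.yml`, `jvm.options`, `log4j2.properties`), not just `elasticsearch.yml`:

```yaml
env:
- name: "ES_PATH_CONF"
  value: "/elasticsearch/custom-config"
volumeMounts:
- name: es-config
  mountPath: /elasticsearch/custom-config
  readOnly: true
volumes:
- name: es-config
  configMap:
    name: es-config # hypothetical ConfigMap holding elasticsearch.yml, jvm.options and log4j2.properties
```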

## Troubleshooting

### No up-and-running site-local

One of the errors one may come across when running the setup is the following:

```
[2016-11-29T01:28:36,515][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
```
@@ -356,6 +361,7 @@ This is related to how the container binds to network ports (defaults to ``_loca
Please see [the documentation](https://github.com/pires/docker-elasticsearch#environment-variables) for a reference of the available options.

In order to work around this, set the `NETWORK_HOST` environment variable in the pod descriptors as follows:

```yaml
- name: "NETWORK_HOST"
value: "_eth0_" #_p1p1_ if interface name is p1p1, _ens4_ if interface name is ens4, and so on.
@@ -367,6 +373,7 @@ Intermittent failures occur when the local network interface has both IPv4 and I
If the IPv4 address is chosen first, Elasticsearch starts correctly.

In order to work around this, set the `NETWORK_HOST` environment variable in the pod descriptors as follows:

```yaml
- name: "NETWORK_HOST"
value: "_eth0:ipv4_" #_p1p1:ipv4_ if interface name is p1p1, _ens4:ipv4_ if interface name is ens4, and so on.
10 changes: 10 additions & 0 deletions kibana-cm.yaml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
data:
  kibana.yml: |
    ---
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200
14 changes: 14 additions & 0 deletions kibana.yaml
@@ -30,3 +30,17 @@ spec:
        ports:
        - containerPort: 5601
          name: http
        readinessProbe:
          httpGet:
            path: /api/status
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: kibana-config
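
To confirm the `ConfigMap` is actually mounted, one can read the file back from the running pod (a sketch; the `component=kibana` label is an assumption about this repo's manifests):

```shell
# Grab the first Kibana pod name, then print the mounted kibana.yml.
KIBANA_POD=$(kubectl get pods -l component=kibana -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$KIBANA_POD" -- cat /usr/share/kibana/config/kibana.yml
```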
