`pipeline/inputs/elasticsearch.md` (29 additions, 24 deletions)

# Elasticsearch (Bulk API)

The _Elasticsearch_ input plugin handles both Elasticsearch and OpenSearch Bulk API requests.

## Configuration parameters

The plugin supports the following configuration parameters:

| Key | Description | Default value |
| :--- | :--- | :--- |
| `buffer_max_size` | Set the maximum size of the buffer. | `4M` |
| `buffer_chunk_size` | Set the buffer chunk size. | `512K` |
| `tag_key` | Specify the key name to extract as a tag. | `NULL` |
| `meta_key` | Specify the key name for meta information. | `@meta` |
| `hostname` | Specify the hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | `localhost` |
| `version` | Specify the Elasticsearch server version to report. Clients use this value when they check the Elasticsearch/OpenSearch server version. | `8.0.0` |
| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |

An Elasticsearch cluster uses "sniffing" to optimize the connections between the cluster and its clients. Elasticsearch can build its cluster and dynamically generate a connection list, which is called "sniffing". The `hostname` value is used for sniffing information, and this is handled by the sniffing endpoint.
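
For illustration, a hedged YAML sketch that sets the sniffing `hostname` and the reported `version`; the hostname value is an assumption for this example, not taken from this page:

```yaml
# Hypothetical sketch: the hostname below is an illustrative assumption.
pipeline:
  inputs:
    - name: elasticsearch
      hostname: fluent-bit.example.com   # returned by the sniffing (node discovery) endpoint
      version: 8.0.0                     # version reported to Elasticsearch/OpenSearch clients
  outputs:
    - name: stdout
      match: '*'
```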

## Get started

To start handling Bulk API requests, you can run the plugin from the command line or through the configuration file:

### Command line

From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:
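
A minimal command-line sketch, assuming Fluent Bit's standard `-i`, `-p`, and `-o` flags; the port value is an illustrative assumption, not taken from this page:

```shell
# Hypothetical invocation: accept Bulk API requests and print the ingested records to stdout.
# The port value is an illustrative assumption.
fluent-bit -i elasticsearch -p port=9200 -o stdout
```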

In your configuration file, append the following `Input` and `Output` sections:

{% tabs %}
{% tab title="fluent-bit.conf" %}

```python
[INPUT]
    name elasticsearch
    # ... (additional [INPUT] settings not shown in this excerpt)

[OUTPUT]
    name stdout
    match *
```

{% endtab %}

{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: elasticsearch
      # ... (additional input settings not shown in this excerpt)
  outputs:
    - name: stdout
      match: '*'
```

{% endtab %}
{% endtabs %}
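
For reference, a complete, self-contained sketch of the YAML example above, assuming the plugin's `listen` and `port` parameters; the values shown are assumptions, not taken from this excerpt:

```yaml
# Hypothetical minimal pipeline: the listen address and port are illustrative assumptions.
pipeline:
  inputs:
    - name: elasticsearch
      listen: 0.0.0.0
      port: 9200
  outputs:
    - name: stdout
      match: '*'
```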

As described previously, the plugin handles ingested Bulk API requests. For large bulk ingestion, you might have to increase the buffer size using the `buffer_max_size` and `buffer_chunk_size` parameters:

{% tabs %}
{% tab title="fluent-bit.conf" %}

```python
[INPUT]
    name elasticsearch
    # ... (buffer_max_size and buffer_chunk_size settings not shown in this excerpt)

[OUTPUT]
    name stdout
    match *
```

{% endtab %}

{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: elasticsearch
      # ... (buffer_max_size and buffer_chunk_size settings not shown in this excerpt)
  outputs:
    - name: stdout
      match: '*'
```

{% endtab %}
{% endtabs %}
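
As a hedged illustration of how the two buffer parameters named above might be set, the input section could look like this; the sizes are assumptions, not values from this page:

```yaml
# Hypothetical sketch: the buffer sizes below are illustrative assumptions.
pipeline:
  inputs:
    - name: elasticsearch
      buffer_max_size: 20M
      buffer_chunk_size: 5M
  outputs:
    - name: stdout
      match: '*'
```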

Ingesting from Beats-series agents is also supported. For example, [Filebeat](https://www.elastic.co/beats/filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), and [Winlogbeat](https://www.elastic.co/beats/winlogbeat) can ingest their collected data through this plugin.

The Fluent Bit node information is returned as Elasticsearch 8.0.0.

Because of this, users must specify the following settings in their Beats configuration:

```yaml
output.elasticsearch:
  allow_older_versions: true
  ilm: false
```

For large log ingestion with these Beats plugins, you might have to configure rate limiting on the Beats side when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests.
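
As one hedged option, not taken from this page, the standard Beats `bulk_max_size` setting reduces the number of events sent per bulk request, which keeps each HTTP request smaller:

```yaml
# Hypothetical sketch: lower the events per bulk request so each HTTP request stays small.
# The bulk_max_size value below is an illustrative assumption.
output.elasticsearch:
  allow_older_versions: true
  ilm: false
  bulk_max_size: 512
```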