diff --git a/pipeline/inputs/elasticsearch.md b/pipeline/inputs/elasticsearch.md
index 08a67d795..3b90e8b3e 100644
--- a/pipeline/inputs/elasticsearch.md
+++ b/pipeline/inputs/elasticsearch.md
@@ -1,43 +1,44 @@
 # Elasticsearch (Bulk API)

-The **elasticsearch** input plugin handles both Elasticsearch and OpenSearch Bulk API requests.
+The _Elasticsearch_ input plugin handles both Elasticsearch and OpenSearch Bulk API requests.

-## Configuration Parameters
+## Configuration parameters

 The plugin supports the following configuration parameters:

 | Key | Description | Default value |
 | :--- | :--- | :--- |
-| buffer\_max\_size | Set the maximum size of buffer. | 4M |
-| buffer\_chunk\_size | Set the buffer chunk size. | 512K |
-| tag\_key | Specify a key name for extracting as a tag. | `NULL` |
-| meta\_key | Specify a key name for meta information. | "@meta" |
-| hostname | Specify hostname or FQDN. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | "localhost" |
-| version | Specify Elasticsearch server version. This parameter is effective for checking a version of Elasticsearch/OpenSearch server version. | "8.0.0" |
-| threaded | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
-
-**Note:** The Elasticsearch cluster uses "sniffing" to optimize the connections between its cluster and clients.
+| `buffer_max_size` | Set the maximum size of the buffer. | `4M` |
+| `buffer_chunk_size` | Set the buffer chunk size. | `512K` |
+| `tag_key` | Specify a key name for extracting as a tag. | `NULL` |
+| `meta_key` | Specify a key name for meta information. | `@meta` |
+| `hostname` | Specify the hostname or fully qualified domain name. This parameter can be used for "sniffing" (auto-discovery of) cluster node information. | `localhost` |
+| `version` | Specify the Elasticsearch server version. This value is used when clients check the Elasticsearch/OpenSearch server version. | `8.0.0` |
+| `threaded` | Indicates whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
+
+The Elasticsearch cluster uses "sniffing" to optimize the connections between the cluster and its clients.
 Elasticsearch can build its cluster and dynamically generate a connection list which is called "sniffing".
 The `hostname` will be used for sniffing information and this is handled by the sniffing endpoint.

-## Getting Started
+## Get started

 In order to start performing the checks, you can run the plugin from the command line or through the configuration file:

-### Command Line
+### Command line

 From the command line you can configure Fluent Bit to handle Bulk API requests with the following options:

 ```bash
-$ fluent-bit -i elasticsearch -p port=9200 -o stdout
+fluent-bit -i elasticsearch -p port=9200 -o stdout
 ```

-### Configuration File
+### Configuration file

-In your main configuration file append the following _Input_ & _Output_ sections:
+In your configuration file, append the following `Input` and `Output` sections:

 {% tabs %}
 {% tab title="fluent-bit.conf" %}
+
 ```python
 [INPUT]
     name elasticsearch
@@ -48,9 +49,11 @@ In your main configuration file append the following _Input_ & _Output_ sections
     name stdout
     match *
 ```
+
 {% endtab %}

 {% tab title="fluent-bit.yaml" %}
+
 ```yaml
 pipeline:
   inputs:
@@ -62,14 +65,16 @@ pipeline:
     - name: stdout
       match: '*'
 ```
+
 {% endtab %}
 {% endtabs %}

-As described above, the plugin will handle ingested Bulk API requests.
-For large bulk ingestions, you may have to increase buffer size with **buffer_max_size** and **buffer_chunk_size** parameters:
+As described previously, the plugin will handle ingested Bulk API requests.
+For large bulk ingestion, you might have to increase the buffer size using the `buffer_max_size` and `buffer_chunk_size` parameters:

 {% tabs %}
 {% tab title="fluent-bit.conf" %}
+
 ```python
 [INPUT]
     name elasticsearch
@@ -82,9 +87,11 @@ For large bulk ingestions, you may have to increase buffer size with **buffer_ma
     name stdout
     match *
 ```
+
 {% endtab %}

 {% tab title="fluent-bit.yaml" %}
+
 ```yaml
 pipeline:
   inputs:
@@ -98,6 +105,7 @@ pipeline:
     - name: stdout
       match: '*'
 ```
+
 {% endtab %}
 {% endtabs %}

@@ -106,9 +114,9 @@ pipeline:
 Ingesting from beats series agents is also supported.
 For example, [Filebeats](https://www.elastic.co/beats/filebeat), [Metricbeat](https://www.elastic.co/beats/metricbeat), and [Winlogbeat](https://www.elastic.co/beats/winlogbeat) are able to ingest their collected data through this plugin.

-Note that Fluent Bit's node information is returning as Elasticsearch 8.0.0.
+The Fluent Bit node information is returned as Elasticsearch 8.0.0.

-So, users have to specify the following configurations on their beats configurations:
+Users must specify the following settings in their Beats configuration:

 ```yaml
 output.elasticsearch:
@@ -116,10 +124,7 @@ output.elasticsearch:
   ilm: false
 ```

-For large log ingestion on these beat plugins,
-users might have to configure rate limiting on those beats plugins
-when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:
-
+For large log ingestion, users might have to configure rate limiting on these Beats plugins when Fluent Bit indicates that the application is exceeding the size limit for HTTP requests:
 ```yaml
 processors: