Doc: Add tutorial for filter-elastic_integration #15932

Draft
wants to merge 6 commits into
base: main
Choose a base branch
from
Draft
Show file tree
Hide file tree
Changes from 3 commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Jump to
Jump to file
Failed to load files.
Diff view
Diff view
150 changes: 150 additions & 0 deletions docs/static/ea-integration-tutorial.asciidoc
@@ -0,0 +1,150 @@
[[ea-integrations-tutorial]]
=== Tutorial: {ls} `elastic_integration filter` to extend Elastic {integrations} (Beta)
++++
<titleabbrev>Tutorial: {ls} `elastic_integration filter`</titleabbrev>
++++


The purpose of this guide is to walk through the steps necessary to configure {ls} to transform events
collected by the Elastic Agent, using the pre-built Elastic {integrations} that normalize data to the Elastic Common Schema (ECS).
This is made possible by a new beta feature in Logstash: the `elastic_integration`
filter plugin.
With this plugin, Logstash reads certain field values generated by the Elastic Agent and uses them to apply the transformations from Elastic Integrations, so that it can further process events before
sending them to their configured destinations.

[[ea-integrations-prereqs]]
*Prerequisites/Requirements*

To complete this tutorial, you need:

* A working Elasticsearch cluster
* A Fleet Server
* An Elastic Agent configured to send its output to Logstash
* An Enterprise license
* A user configured with the minimum required privileges

This feature can also be used with a self-managed (standalone) agent, but this guide does not cover the setup and
configuration details for that scenario.

[[ea-integrations-process-overview]]
*Process overview*

* <<ea-integrations-fleet>>
* <<ea-integrations-create-policy>>
* <<ea-integrations-pipeline>>

[discrete]
[[ea-integrations-fleet]]
=== Configure Fleet to send data from Elastic Agent to Logstash

. For a Fleet-managed agent, go to Kibana and navigate to Fleet → Settings.
. Create a new output and specify Logstash as the output type.
. Add the Logstash hosts (domain names or IP addresses) that the Elastic Agent will send data to.
. Add the client SSL certificate and the client SSL certificate key to the configuration.
At the bottom of the settings, you can choose to make this output the default for agent integrations.
If you select this option, all Elastic Agent policies will default to using this Logstash output configuration.
. Click “Save and apply settings” in the bottom right-hand corner of the page.
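Conceptually, the Logstash output that Fleet distributes to enrolled agents corresponds to settings like the following. This is only a sketch: the hostname and certificate contents are placeholders, and port `5055` is chosen to match the `elastic_agent` input port used later in this tutorial.

[source,txt]
-----
# Illustrative values only -- substitute your own hosts and certificates
hosts:
  - "logstash.example.com:5055"
ssl.certificate: "<contents of the client SSL certificate>"
ssl.key: "<contents of the client SSL certificate key>"
-----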

[discrete]
[[ea-integrations-create-policy]]
=== Create an Elastic Agent policy with the necessary integrations

. In Kibana, navigate to Fleet → Agent policies and click “Create agent policy”.
. Give the policy a name, and then click “Advanced options”.
. Change the “Output for integrations” setting to the Logstash output you created in the previous step.
. Click “Create agent policy” at the bottom of the flyout.
. The new policy should now be listed on the Agent policies page.
. Click the policy name to start configuring an integration.
. On the policy page, click “Add integration”.
This takes you to the integrations browser, where you can select an integration that has everything necessary to _integrate_ that data source with your other data in the Elastic stack.
This tutorial uses the CrowdStrike integration as an example.
. On the CrowdStrike integration overview page, click “Add CrowdStrike” to configure the integration.
. Configure the integration to collect the data you need.
In step 2 at the bottom of the page (“Where to add this integration?”), make sure the “Existing hosts” option
is selected, and that the selected agent policy is the one you created with the Logstash output.
If you have followed these instructions in order, it should be selected by default.
. Click “Save and continue” at the bottom of the page.
A modal appears asking whether you want to add the Elastic Agent to your hosts. If you have not
already done so, install the Elastic Agent on a host. Documentation for this process can be
found here: https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html
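Installing and enrolling a Fleet-managed agent typically looks like the following sketch. The Fleet Server URL and enrollment token are placeholders; copy the actual values from the enrollment instructions shown in your own Fleet UI.

[source,txt]
-----
# Placeholders only: use the URL and enrollment token from your Fleet UI
sudo ./elastic-agent install \
  --url=https://<your-fleet-server>:8220 \
  --enrollment-token=<enrollment-token>
-----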


[discrete]
[[ea-integrations-pipeline]]
=== Configure Logstash to use the elastic_integration filter plugin

Create a new pipeline configuration in Logstash.

Make sure the `elastic_integration` plugin is installed before running the pipeline.
If it is not, install it with `bin/logstash-plugin install logstash-filter-elastic_integration`.

A full list of configuration options can be found here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html

[source,txt]
-----
input {
elastic_agent { port => 5055 }
}

filter {
  elastic_integration {
    hosts => "{es-host}:9200"
    ssl_enabled => true
    ssl_certificate_authorities => ["/usr/share/logstash/config/certs/ca-cert.pem"]
    auth_basic_username => "elastic"
    auth_basic_password => "changeme"
    remove_field => ["_version"]
  }
}

output {
  stdout {
    codec => rubydebug # useful for debugging data stream events
  }
  elasticsearch {
    hosts => "{es-host}:9200"
    user => "elastic"
    password => "changeme"
    cacert => "/usr/share/logstash/config/certs/ca-cert.pem"
  }
}
-----


If you are using Elastic Cloud, use this configuration instead:

[source,txt]
-----
input {
elastic_agent { port => 5055 }
}

filter {
  elastic_integration {
    cloud_id => "your-cloud:id"
    api_key => "api-key"
    remove_field => ["_version"]
  }
}

output {
  stdout {}
  elasticsearch {
    cloud_id => "your-cloud-id"
    cloud_auth => "elastic:<pwd>"
  }
}
-----

Every event sent from the Elastic Agent to Logstash contains specific metadata fields.
Input events are expected to have `data_stream.type`, `data_stream.dataset`, and `data_stream.namespace` set.
Logstash uses this information, together with its connection to Elasticsearch, to determine which integrations to apply to the event before sending it to its destination output.
Logstash synchronizes frequently with Elasticsearch to ensure it has the most recent versions of the enabled integrations.
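For example, an event from the CrowdStrike integration might carry fields like these. Both the dataset name and the surrounding structure are illustrative, not output captured from a real agent.

[source,txt]
-----
{
  "data_stream": {
    "type": "logs",
    "dataset": "crowdstrike.falcon",
    "namespace": "default"
  },
  "message": "...",
  "@timestamp": "2024-01-01T00:00:00.000Z"
}
-----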


All processing occurs in Logstash.


The user or credentials specified in the `elastic_integration` plugin need sufficient privileges to retrieve information from Elasticsearch about the enabled integrations.
The minimum required privileges are listed here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-elastic_integration.html#plugins-filters-elastic_integration-minimum_required_privileges.
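As an illustration only, a dedicated role for the plugin could be created with the Elasticsearch role API along these lines. The privilege names shown here are assumptions; the linked plugin documentation is authoritative for the exact minimum privilege list.

[source,txt]
-----
# Illustrative sketch -- verify privilege names against the plugin docs
PUT _security/role/logstash_elastic_integration
{
  "cluster": ["monitor", "read_pipeline"]
}
-----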

5 changes: 4 additions & 1 deletion docs/static/ea-integrations.asciidoc
@@ -78,5 +78,8 @@ output { <3>
-----

<1> Use `filter-elastic_integration` as the first filter in your pipeline
<2> You can use additional filters as long as they follow `filter-elastic_integration`. They will have access to the event as-transformed by your enabled integrations.
<3> Sample config to output data to multiple destinations


include::ea-integration-tutorial.asciidoc[]