
How to connect my postgres database with Logstash #972

Open
nhluan opened this issue Apr 5, 2024 · 2 comments
Labels
docker Issues pertaining to the usage of Docker logstash Issues pertaining to the Logstash component

Comments


nhluan commented Apr 5, 2024

Problem description

I cannot work out how to connect my postgres database, which runs in its own container, to Logstash so that the data can be displayed in Kibana. When I set up my logstash.conf and run the stack, I receive the error => [2024-04-05T06:31:40,600][ERROR][logstash.config.sourceloader] No configuration found in the configured sources. My goal is to connect the postgres database to Logstash using this source.

Extra information

I have not changed any file or the structure of what I cloned from this repository.

`logstash.conf`
input {
	jdbc {
		jdbc_connection_string => "jdbc:postgresql://172.17.0.1:5552/elife_loyalty_dev"
		jdbc_user => "admin"
		jdbc_driver_library => "/home/luan/NHLuan/postgresql-42.6.2.jar"
		jdbc_driver_class => "org.postgresql.Driver"
		jdbc_password => "forSureNotTheRealPassword"
		statement => "SELECT * from Admin"
	}
}

## Add your filters / logstash plugins configuration here

output {
	stdout { codec => json_lines }
	elasticsearch {
		hosts => "elasticsearch:9200"
		user => "logstash_internal"
		password => "${LOGSTASH_INTERNAL_PASSWORD}"
	}
}

Stack configuration

Docker setup

version: '3.7'

services:

  # The 'setup' service runs a one-off script which initializes users inside
  # Elasticsearch — such as 'logstash_internal' and 'kibana_system' — with the
  # values of the passwords defined in the '.env' file. It also creates the
  # roles required by some of these users.
  #
  # This task only needs to be performed once, during the *initial* startup of
  # the stack. Any subsequent run will reset the passwords of existing users to
  # the values defined inside the '.env' file, and the built-in roles to their
  # default permissions.
  #
  # By default, it is excluded from the services started by 'docker compose up'
  # due to the non-default profile it belongs to. To run it, either provide the
  # '--profile=setup' CLI flag to Compose commands, or "up" the service by name
  # such as 'docker compose up setup'.
  setup:
    profiles:
      - setup
    build:
      context: setup/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - ./setup/entrypoint.sh:/entrypoint.sh:ro,Z
      - ./setup/lib.sh:/lib.sh:ro,Z
      - ./setup/roles:/roles:ro,Z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
      METRICBEAT_INTERNAL_PASSWORD: ${METRICBEAT_INTERNAL_PASSWORD:-}
      FILEBEAT_INTERNAL_PASSWORD: ${FILEBEAT_INTERNAL_PASSWORD:-}
      HEARTBEAT_INTERNAL_PASSWORD: ${HEARTBEAT_INTERNAL_PASSWORD:-}
      MONITORING_INTERNAL_PASSWORD: ${MONITORING_INTERNAL_PASSWORD:-}
      BEATS_SYSTEM_PASSWORD: ${BEATS_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch

  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,Z
      - elasticsearch:/usr/share/elasticsearch/data:Z
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      node.name: elasticsearch
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      # Bootstrap password.
      # Used to initialize the keystore during the initial startup of
      # Elasticsearch. Ignored on subsequent runs.
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk
    restart: unless-stopped

  logstash:
    build:
      context: logstash/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
    ports:
      - 5044:5044
      - 50000:50000/tcp
      - 50000:50000/udp
      - 9600:9600
    environment:
      LS_JAVA_OPTS: -Xms256m -Xmx256m
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

  kibana:
    build:
      context: kibana/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
    ports:
      - 5601:5601
    environment:
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

Container logs

$ docker-compose logs
2024-04-05 15:31:39 [2024-04-05T06:31:39,954][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Xms256m, -Xmx256m, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
2024-04-05 15:31:39 [2024-04-05T06:31:39,956][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
2024-04-05 15:31:39 [2024-04-05T06:31:39,957][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
2024-04-05 15:31:40 [2024-04-05T06:31:40,122][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
2024-04-05 15:31:40 [2024-04-05T06:31:40,599][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/usr/share/logstash/config/usersync.conf"}
2024-04-05 15:31:40 [2024-04-05T06:31:40,600][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
2024-04-05 15:31:40 [2024-04-05T06:31:40,686][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
2024-04-05 15:31:40 [2024-04-05T06:31:40,696][INFO ][logstash.runner          ] Logstash shut down.
2024-04-05 15:31:40 [2024-04-05T06:31:40,702][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
2024-04-05 15:31:40 org.jruby.exceptions.SystemExit: (SystemExit) exit
2024-04-05 15:31:40     at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:808) ~[jruby.jar:?]
2024-04-05 15:31:40     at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:767) ~[jruby.jar:?]
2024-04-05 15:31:40     at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
2024-04-05 15:31:41 Using bundled JDK: /usr/share/logstash/jdk
2024-04-05 15:31:55 Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
2024-04-05 15:31:55 [2024-04-05T06:31:55,764][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
2024-04-05 15:31:55 [2024-04-05T06:31:55,771][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.13.0", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [x86_64-linux]"}
2024-04-05 15:31:55 [2024-04-05T06:31:55,774][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Xms256m, -Xmx256m, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
2024-04-05 15:31:55 [2024-04-05T06:31:55,777][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
2024-04-05 15:31:55 [2024-04-05T06:31:55,777][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
2024-04-05 15:31:55 [2024-04-05T06:31:55,991][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
2024-04-05 15:31:56 [2024-04-05T06:31:56,498][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/usr/share/logstash/config/usersync.conf"}
2024-04-05 15:31:56 [2024-04-05T06:31:56,499][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
2024-04-05 15:31:56 [2024-04-05T06:31:56,594][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
2024-04-05 15:31:56 [2024-04-05T06:31:56,608][INFO ][logstash.runner          ] Logstash shut down.
2024-04-05 15:31:56 [2024-04-05T06:31:56,616][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
2024-04-05 15:31:56 org.jruby.exceptions.SystemExit: (SystemExit) exit
2024-04-05 15:31:56     at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:808) ~[jruby.jar:?]
2024-04-05 15:31:56     at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:767) ~[jruby.jar:?]
2024-04-05 15:31:56     at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
2024-04-05 15:31:57 Using bundled JDK: /usr/share/logstash/jdk
@antoineco antoineco added docker Issues pertaining to the usage of Docker logstash Issues pertaining to the Logstash component labels Apr 5, 2024
antoineco (Collaborator) commented:

@nhluan all containers of the ELK stack are inside their own bridge network named "elk":

networks:
  elk:
    driver: bridge

For containers to communicate with each other, they must share a network. If your postgresql container is running in a different network (e.g. the default Docker bridge network), the components of the ELK stack cannot reach it.

Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.
-- https://docs.docker.com/network/drivers/bridge/

You can connect your postgresql container to the "elk" network and see if you are able to reach it.

For this, start by listing the available networks on your host:

docker network ls

Then list your containers to identify the postgresql container:

docker container ls

Finally, attach the postgresql container to the "elk" network:

docker network connect --alias my-database ELK_NETWORK PSQL_CONTAINER
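If the postgresql container is itself managed by Compose, an equivalent approach is to attach it to the stack's network from its own Compose file. This is a sketch only: it assumes the ELK stack was brought up from a directory named docker-elk, so its network appears externally as docker-elk_elk, and it assumes a generic postgres service; substitute whatever name `docker network ls` actually reports.

```yaml
services:
  postgres:
    image: postgres:16
    networks:
      elk:
        aliases:
          # Same effect as '--alias my-database' with 'docker network connect'
          - my-database

networks:
  elk:
    # Reuse the network created by the ELK stack instead of creating a new one
    external: true
    name: docker-elk_elk
```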

After that, you should be able to change your Logstash configuration to use the following connection string: jdbc:postgresql://my-database:5552.
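With the alias in place, the jdbc input from the issue would look roughly as follows. This is a sketch: the database name, credentials, and driver path are copied from the original config, and the port is an assumption. Note that over a shared container network you must use the port postgres listens on inside its container (the default is 5432), not a port published to the host; 172.17.0.1:5552 in the original config looks like a host-mapped port.

```
input {
  jdbc {
    # "my-database" resolves through the network alias created above;
    # 5432 assumes the postgres default, adjust if your container differs
    jdbc_connection_string => "jdbc:postgresql://my-database:5432/elife_loyalty_dev"
    jdbc_user => "admin"
    jdbc_password => "forSureNotTheRealPassword"
    jdbc_driver_library => "/home/luan/NHLuan/postgresql-42.6.2.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT * FROM Admin"
  }
}
```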


nhluan commented Apr 6, 2024

Thank you, @antoineco, for your response. By the way, I am encountering an issue when running Docker Compose with the ELK source using the new configuration I presented (my structure is based on your ELK source, and I made sure not to change the position of anything). It gives me an error indicating that it cannot find my Logstash postgres configuration. Any insight on this? (my error: [2024-04-05T06:31:40,600][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.)
Additionally, except for the jdbc_connection_string, are all the parameters I set up above correct?
