- Completed success/failure metric integration for gateway, influxdb, opentsdb, and tdengine issue: #40
- Added a promtail-ingester component that can use kafka as promtail's log transfer backend #47
- promtail: log data transfer to the gateway (gRPC, HTTP) #46
- Developed the promtail-gateway component for the log data pipeline #53
Added the missing tenantID when saving to elasticsearch with clymene-promtail.
A log collection agent, promtail, is now available. Collected logs can be stored in loki and elasticsearch.
- Added the log collection component clymene-promtail issue: #41
- Added an HTTP receiver so the gateway's metricWriter can send and receive over HTTP issue: #37
- Added an option to change the elasticsearch index name issue: #44
- Logs collected by clymene-promtail can now be stored in elasticsearch issue: #43
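As a sketch, running clymene-promtail against elasticsearch might look like the following. The environment-variable names and the image name are assumptions for illustration, not taken from these notes:

```sh
# Hypothetical invocation; STORAGE_TYPE, ES_SERVER_URLS, and the image
# name are assumptions.
docker run -d \
  -e STORAGE_TYPE=elasticsearch \
  -e ES_SERVER_URLS=http://elasticsearch:9200 \
  bourbonkk/clymene-promtail:latest
```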
Updated Clymene's service discovery feature. For details, see the official Clymene docs (https://clymene-project.github.io/docs/service-discovery/configuration/). Newly supported mechanisms:
- kuma(https://kuma.io/)
- uyuni(https://www.uyuni-project.org/)
- scaleway(https://www.scaleway.com/en/virtual-instances/)
- puppetdb(https://puppet.com/docs/puppetdb/7/overview.html)
- linode(https://www.linode.com/)
- hetzner(https://www.hetzner.com/)
- eureka(https://github.com/Netflix/eureka)
- digitalocean(https://www.digitalocean.com/)
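Clymene reuses Prometheus-style scrape configuration, so one of the new mechanisms can be enabled in a scrape config. A minimal sketch using eureka as an example (the config file name, the eureka server URL, and the `--config.file` flag are assumptions based on Prometheus conventions):

```sh
# Sketch: write a Prometheus-style scrape config that discovers targets
# via eureka, then point clymene-agent at it.
cat > clymene.yml <<'EOF'
scrape_configs:
  - job_name: eureka-services
    eureka_sd_configs:
      - server: http://eureka.example.com:8761/eureka
EOF
clymene-agent --config.file=clymene.yml   # flag name is an assumption
```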
You can now use Clymene with TDengine's HTTP interface (https://www.taosdata.com/en/). issue: #35 #31
--tdengine.dbname string Destination database (default "clymene")
--tdengine.hostname string The host to connect to TDengine server. (default "127.0.0.1")
--tdengine.max-sql-length int Number of SQLs that can be sent at one time (default 4096)
--tdengine.password string The password to use when connecting to the server (default "taosdata")
--tdengine.server-port int The HTTP port number to use for the connection to TDengine server (default 6041)
--tdengine.user string The TDengine user name to use when connecting to the server (default "root")
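Putting the flags above together, a TDengine-backed agent might be started like this. The `STORAGE_TYPE=tdengine` form is inferred from the other storage options in these notes:

```sh
# Sketch: send metrics to TDengine over its HTTP interface (port 6041).
# Values shown are the defaults from the flag listing above.
STORAGE_TYPE=tdengine clymene-agent \
  --tdengine.hostname=127.0.0.1 \
  --tdengine.server-port=6041 \
  --tdengine.dbname=clymene \
  --tdengine.user=root \
  --tdengine.password=taosdata
```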
The official Clymene logo has been added! Metrics are now created for Clymene components issue: #33
- clymene-agent metric port = :15691/metrics
- clymene-ingester metric port = :15694/metrics
- clymene-gateway metric port = :15690/metrics
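Each component serves its own metrics over plain HTTP on the ports above, so they can be scraped by Prometheus or checked by hand (hostnames are placeholders):

```sh
# Check the self-metrics of each component.
curl http://localhost:15691/metrics   # clymene-agent
curl http://localhost:15694/metrics   # clymene-ingester
curl http://localhost:15690/metrics   # clymene-gateway
```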
influxdb is now officially supported. The options below are mandatory.
--influxdb.bucket string influx bucket, A bucket is a named location where time series data is stored
--influxdb.org string influx organization, An organization is a workspace for a group of users.
--influxdb.token string Use the Authorization header and the Token scheme
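A sketch combining the three mandatory flags above. `STORAGE_TYPE=influxdb` follows the pattern of the other storage options in these notes; a server-address flag presumably also exists but is not listed here, so it is omitted:

```sh
# Sketch: write metrics to InfluxDB. All three flags below are mandatory.
STORAGE_TYPE=influxdb clymene-agent \
  --influxdb.org=my-org \
  --influxdb.bucket=clymene \
  --influxdb.token="$INFLUX_TOKEN"
```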
TDengine support has been added to the roadmap.
- influxdb option (STORAGE_TYPE=influxdb)
Separated the prometheus option from the cortex option to avoid confusion.
- cortex option (STORAGE_TYPE=cortex)
You can now use opentsdb in Clymene.
- opentsdb support (STORAGE_TYPE=opentsdb)
- opentsdb supports two methods: socket (default) and HTTP (see the Opentsdb option)
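As a sketch, selecting the opentsdb writer follows the same pattern as the other backends; the flag that switches from socket to HTTP is not named in these notes, so it is left out:

```sh
# Sketch: use the opentsdb writer; the socket method is the default.
STORAGE_TYPE=opentsdb clymene-agent
```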
A gateway supporting gRPC communication has been added.
Try using it in various architectures.
- gRPC support (STORAGE_TYPE=gateway)
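The gateway topology above can be sketched as follows. The agent-side address flag name is an assumption, not taken from these notes:

```sh
# Sketch: the gateway receives metrics over gRPC and writes them to storage;
# agents select it like any other backend via STORAGE_TYPE.
clymene-gateway &
STORAGE_TYPE=gateway clymene-agent \
  --gateway.grpc.host-port=<gateway-host>:<port>   # assumed flag name
```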
Use kafka with clymene-agent and clymene-ingester.
With clymene-agent alone, you can collect time series from various environments and store them in a database. The composite
writer implementation allows simultaneous storage in prometheus and elasticsearch.
- Support kafka writer
- Support prometheus/cortex writer
- Support elasticsearch writer
- Support composite writer
- Support Service Discovery(https://prometheus.io/docs/prometheus/latest/configuration/configuration/)
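A composite-writer sketch based on the description above. The comma-separated `STORAGE_TYPE` form and both flag names are assumptions for illustration, not taken from these notes:

```sh
# Sketch: the composite writer stores the same time series in prometheus
# and elasticsearch simultaneously. Flag names below are assumptions.
STORAGE_TYPE=prometheus,elasticsearch clymene-agent \
  --prometheus.remote.url=http://prometheus:9090/api/v1/write \
  --es.server-urls=http://elasticsearch:9200
```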