
buffer space has too many data #13

Open
scalp42 opened this issue Sep 30, 2019 · 2 comments

scalp42 commented Sep 30, 2019

Hi folks,

Without having any infra change or increase in traffic, we started seeing the following issue:

2019-09-30 17:29:21 +0000 [error]: #0 syslog failed to emit error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" tag="syslog.docker.containers.daemon.info" 

We looked at increasing flush_thread_count but noticed the README mentioning:

This is currently fixed to 1 and will cause fluentd to fail with a ConfigError if set to anything greater.

We believe the issue is due to the output not matching our input rate but we never ran into that issue before. What's the rationale behind limiting the number of threads for the output plugin?

Thanks!

cc @imron @czerwingithub
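For anyone hitting the same `BufferOverflowError`: it means the output's buffer reached its total size limit before chunks could be flushed. Since this plugin pins `flush_thread_count` to 1, the usual mitigations are raising the buffer limits or changing `overflow_action`. A minimal sketch of a buffer section (the match pattern and parameter values are illustrative, not recommendations):

```
<match syslog.docker.**>
  @type scalyr
  # ... plugin-specific options ...
  <buffer>
    @type file
    path /var/log/td-agent/buffer/scalyr
    total_limit_size 1g                # raise the ceiling that triggers BufferOverflowError
    chunk_limit_size 8m
    flush_interval 5s
    overflow_action drop_oldest_chunk  # or block; default is throw_exception
  </buffer>
</match>
```

`overflow_action block` applies backpressure to the input instead of raising, while `drop_oldest_chunk` sheds old data to keep ingesting; which trade-off is acceptable depends on the pipeline.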


scalp42 commented Sep 30, 2019

Same issue trying to use multiple workers:

2019-09-30 20:33:11 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Plugin 'scalyr' does not support multi workers configuration (Scalyr::ScalyrOut)"


imron commented Oct 1, 2019

Hi @scalp42, thanks for getting in touch. The restriction on multiple workers was originally put in place due to an early limitation of the Scalyr servers. Those restrictions no longer apply, so we'll be looking at providing an update that removes the restriction.

It also looks like your problem was, if not caused, then at least exacerbated by a separate issue with the Scalyr servers (I noticed your logs had a number of "error/server" messages), and we're currently looking into that as well.

Fixes to one or both of these issues should hopefully sort out this problem.
