
[BUG]: Abnormal ES writes after collecting from Kafka #1748

Open
silentmoooon opened this issue Sep 10, 2024 · 19 comments
Labels
bug Something isn't working

Comments

@silentmoooon
Contributor

Describe the bug

iLogtail Running Environment
Please provide the following information:

  • ilogtail version:
    2.0.7

  • Yaml configuration:

enable: true
inputs:
  - Type: service_kafka
    Brokers:
      - "*:6667"
      - "*:6667"
      - "*:6667"
    version: 1.0.0
    Topics:
      - test
    ConsumerGroup: ****
    ClientID: ****
flushers:
  - Type: flusher_stdout
    FileName: stdout.txt
  - Type: flusher_elasticsearch
    Addresses:
      - http://...:9200
    Index: ****
    Authentication:
      PlainText:
        Username: ***
        Password: ***
  • ilogtail.LOG:

  • logtail_plugin.LOG:

The original logs can no longer be found, but the situation was roughly as follows:
[screenshot comparing the two log formats, labeled 1 and 2]

When collecting directly from a file, the resulting log format is 1 in the screenshot.
When collecting from Kafka, the format becomes 2: the key that should be "content" turns into an empty string, so the record cannot be written to ES. A small code change that turns the empty string into "content" makes the write succeed.

@silentmoooon silentmoooon added the bug Something isn't working label Sep 10, 2024
@messixukejia
Collaborator

It's probably worth checking the original log content in Kafka.

@silentmoooon
Contributor Author

The content in Kafka is just the raw string 12345. With the same content, collecting from a file sends "content":"12345" to ES, while collecting from Kafka sends "":"12345", which ES then ignores.

@silentmoooon
Contributor Author

I think I've found the cause. The kafka input plugin is configured with version: 1.0.0, and v1 reads the key of the Kafka message and uses it as the log field key; we never set a key when producing to Kafka. v2, on the other hand, assigns the message a "content" key.
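To make the difference concrete, here is a minimal sketch under the semantics just described (illustrative only, not the actual iLogtail source; v1Fields and v2Fields are hypothetical helpers):

package main

import "fmt"

// v1Fields mimics the v1 behavior described above: the Kafka message key
// becomes the log field name, even when the producer never set one.
func v1Fields(key, value []byte) map[string]string {
	return map[string]string{string(key): string(value)}
}

// v2Fields mimics the v2 behavior described above: the value always lands
// under a fixed "content" key.
func v2Fields(value []byte) map[string]string {
	return map[string]string{"content": string(value)}
}

func main() {
	msg := []byte("12345")          // produced without a key
	fmt.Println(v1Fields(nil, msg)) // map[:12345] -> serialized as "":"12345"
	fmt.Println(v2Fields(msg))      // map[content:12345]
}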

@silentmoooon
Contributor Author

silentmoooon commented Sep 11, 2024 via email

@ZLfen

ZLfen commented Sep 12, 2024 via email

@silentmoooon
Contributor Author

[screenshot of the modified v1 code]
I made a small change on top of v1: if the key is an empty string, use "content" as the key instead (sketched below).
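A sketch of that workaround with assumed names (fieldName is hypothetical, not the actual patched function):

package main

import "fmt"

// fieldName implements the workaround described above (a sketch, not the
// actual patch): fall back to "content" when the message carries no key,
// so the field name is never an empty string that ES would drop.
func fieldName(key []byte) string {
	if len(key) == 0 {
		return "content"
	}
	return string(key)
}

func main() {
	fmt.Println(fieldName(nil))          // content
	fmt.Println(fieldName([]byte("k1"))) // k1
}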

[screenshot of the v2 configuration]
To use v2 you have to configure it this way; simply setting version: v2 has no effect.
Also, the v2 output format differs from v1. It looks like the following, where aaaa is the original content:
{"eventType":"byteArray","name":"","timestamp":0,"observedTimestamp":0,"tags":{},"byteArray":"aaaa"}

@messixukejia
Collaborator

messixukejia commented Sep 12, 2024

👍🏻 Feel free to tidy up the code and submit a PR.

@silentmoooon
Contributor Author

OK.

@ZLfen

ZLfen commented Sep 12, 2024

Thanks! I'd also like to ask: when I configure v2 following your method, it keeps throwing an initialization error:
[screenshot of the initialization error]
[screenshot of the configuration]
Am I misconfiguring something on my end?

@Takuka0311
Collaborator

Thanks! I'd also like to ask: when I configure v2 following your method, it keeps throwing an initialization error [screenshots]. Am I misconfiguring something on my end?

Is there any corresponding error message in logtail_plugin.LOG?

@ZLfen

ZLfen commented Sep 12, 2024

logtail_plugin.LOG doesn't show anything specific.
[screenshot of logtail_plugin.LOG]

@Takuka0311
Collaborator

logtail_plugin.LOG doesn't show anything specific [screenshot]

[screenshot] Take a look at the entries around this point in time.

@silentmoooon
Contributor Author

[screenshot of the error log]
Isn't this the error message right here?

@silentmoooon
Contributor Author

Strange. It means the service_kafka plugin implements the v2 input interface, but the flusher_kafka_v2 plugin does not implement the v2 flush interface,
so adding the global v2 flag makes flusher_kafka_v2 error out.
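In Go terms, the failure described here behaves like an interface check at pipeline init (a sketch with made-up interface names, not iLogtail's actual definitions):

package main

import "fmt"

// Illustrative stand-ins for the v1/v2 flusher interfaces (assumed names).
type FlusherV1 interface{ Flush(records []string) error }
type FlusherV2 interface{ Export(events []string) error }

// kafkaV2Flusher stands in for flusher_kafka_v2, which per this thread only
// implements the v1-style flush path.
type kafkaV2Flusher struct{}

func (kafkaV2Flusher) Flush(records []string) error { return nil }

func main() {
	var f interface{} = kafkaV2Flusher{}
	if _, ok := f.(FlusherV2); !ok {
		// With global StructureType: v2 the pipeline requires the v2
		// interface, so initialization fails for this flusher.
		fmt.Println("init error: flusher does not implement the v2 interface")
	}
}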

@ZLfen

ZLfen commented Sep 12, 2024

That last one seems to have hit the cache: if a config update fails, iLogtail automatically reuses the previous version, and in the previous version I had configured consuming from Kafka and writing back to Kafka. I've just written a new ilogtail_es.yaml. Configuration:
[screenshot of the configuration]
ilogtail.LOG error:
[screenshot of the ilogtail.LOG error]
logtail_plugin.LOG output:
[screenshot of the logtail_plugin.LOG output]

@silentmoooon
Contributor Author

It feels like only v1 is usable for now. It looks like the flusher_kafka_v2 plugin has not implemented the v2 flush interface, which is why this error is reported.

@ZLfen

ZLfen commented Sep 12, 2024

Thanks, I'll try the approach of modifying the v1 code.

@messixukejia
Collaborator

It feels like only v1 is usable for now. It looks like the flusher_kafka_v2 plugin has not implemented the v2 flush interface, which is why this error is reported.

v2 is not fully implemented yet; if you're interested, feel free to join in and help speed things up.
#784

@silentmoooon
Contributor Author

That explains it. I'll take a look when I have some time, haha.
While I'm here: the v2 configuration thing is also a small bug, right?
The docs all say version: v2,
but in practice it seems only this works:

global:
  StructureType: v2
