This repository was archived by the owner on Jan 8, 2020. It is now read-only.
Question about incremental insertion #80
Hello, I added this property. [root@mf01 ~]# hdfs dfs -tail -f /flume/mysqls/FlumeData.1547451242689.tmp
@HbnKing Hello, how did you solve this problem?
@qintian95 Same problem I ran into before.
@HbnKing How did you solve it? 😁
@qintian95 I wrote my own, there was no other way.
@HbnKing Hello, could you share it? Hard to believe the author never noticed such a big problem...
After the New Year.
@HbnKing OK 😄
@HbnKing Happy New Year. Could you share it soon?
It's on my GitHub.
It's probably an issue with the hibernate.connection.driver_class parameter. Your config seems to use Hibernate's built-in connection pool; I had the same problem before I set that parameter.
Hello, I have set the query interval, but it won't update automatically; I need to manually restart the agent.
test.sources.sqls.type = org.keedio.flume.source.SQLSource
test.sources.sqls.hibernate.connection.url = jdbc:mysql://192.168.199.237:3306/testflume
test.sources.sqls.hibernate.connection.user = root
test.sources.sqls.hibernate.connection.password = ~!@mysql@2018
test.sources.sqls.hibernate.connection.autocommit = true
test.sources.sqls.hibernate.connection.provider_class = org.hibernate.connection.C3P0ConnectionProvider
test.sources.sqls.hibernate.connection.driver_class = com.mysql.jdbc.Driver
test.sources.sqls.hibernate.c3p0.min_size=1
test.sources.sqls.hibernate.c3p0.max_size=10
test.sources.sqls.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
test.sources.sqls.max.rows = 1000000
test.sources.sqls.table = test
test.sources.sqls.columns.to.select = *
test.sources.sqls.column.name = id
test.sources.sqls.incremental.value = 0
test.sources.sqls.delimiter.entry = ,
test.sources.sqls.enclose.by.quotes = false
test.sources.sqls.run.query.delay=5000
test.sources.sqls.status.file.path = /data/flume
test.sources.sqls.status.file.name = sqls.status
test.sources.sqls.channels = c1
test.sinks.hdfssink.channel = c1
test.sinks.hdfssink.type = hdfs
test.sinks.hdfssink.hdfs.path = hdfs://nameservice1:8020/flume/mysqls
test.sinks.hdfssink.hdfs.fileType = DataStream
test.sinks.hdfssink.hdfs.writeFormat = Text
test.sinks.hdfssink.hdfs.rollSize = 268435456
test.sinks.hdfssink.hdfs.rollInterval = 0
test.sinks.hdfssink.hdfs.rollCount = 0
test.channels.c1.type = memory
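One thing worth checking: the snippet above never declares the agent's component lists. Flume only activates components that are named in the agent-level `sources`, `channels`, and `sinks` properties, so if these lines are missing from the actual file (rather than just trimmed from the paste), the source will never be started at all. A minimal sketch, using the component names from the config above:

```properties
# Agent-level declarations required by Flume; components not listed here are ignored.
test.sources = sqls
test.channels = c1
test.sinks = hdfssink
```

If these were already present, the next thing to inspect is the status file at /data/flume/sqls.status: flume-ng-sql-source persists the last incremental value there, and the configured incremental.value is only used when no status file exists yet.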