I have a situation where the bulk API isn't applicable to my function. I'm running in a Lambda and the execution time is around 1200ms, so the bulk buffer can't fill up enough to flush logs to Elasticsearch. I've tried almost every combination of flush-bytes and flush-interval to no avail. I forked the lib and now use the index API, which works nicely for short-running applications. I'd like to know whether this is something that would be beneficial to the library.
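Here's roughly the setup I tried (the option keys are my assumption for the programmatic equivalents of the flush-bytes / flush-interval flags and may differ by version; the node and index values are placeholders):

```js
const pino = require('pino')
const pinoElasticsearch = require('pino-elasticsearch')

// Push the flush thresholds as low as they go so the buffer (hopefully)
// empties before the Lambda finishes (~1200ms). Still not reliable.
const streamToElastic = pinoElasticsearch({
  index: 'logs',
  node: 'http://localhost:9200',   // placeholder endpoint
  'flush-bytes': 1,                // try to flush on nearly every doc
  'flush-interval': 100            // or at least every 100ms
})

const logger = pino({ level: 'info' }, streamToElastic)
logger.info('request handled')
```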
The way I see it, you could choose between bulk or index depending on the situation. It would still use the same splitter stream, but instead of batching the docs up for a bulk request, each doc would just be handled in the on('data') event from a Node stream's perspective.
Here's a snippet from the on('data') event handler:
```js
splitter.on('data', doc => {
  console.log('DATA IN SPLITTER: ', doc);
  // Index each doc immediately instead of buffering it for a bulk request.
  client
    .index({
      index: getIndexName(doc.time || doc['@timestamp']),
      body: doc,
      type: type,
    })
    .then(
      stats => {
        splitter.emit('insert', stats);
      },
      err => {
        splitter.emit('error', err);
      },
    );
});
```
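A rough sketch of how an opt-in could look, building on the same names as the snippet above (the option name `useIndexApi` and the `bulkInsert` stand-in are just placeholders for illustration):

```js
// Hypothetical opt-in: pick the per-document index path for short-lived
// processes, keep the existing bulk path for long-running ones.
splitter.on('data', doc => {
  if (opts.useIndexApi) {
    // Short-lived process (e.g. Lambda): send each doc immediately.
    client
      .index({
        index: getIndexName(doc.time || doc['@timestamp']),
        body: doc,
        type: type,
      })
      .then(
        stats => splitter.emit('insert', stats),
        err => splitter.emit('error', err),
      );
  } else {
    // Existing behaviour: hand the doc to the bulk buffer (stand-in name).
    bulkInsert(doc);
  }
});
```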
Let me know what you think.