I have referred to issues #733 and #1327, but neither seems to offer a viable solution to this problem so far, even with the latest version of neptune. Has there been any update on possible solutions for logging from multiprocessing/threaded code? Suppressing the error logs seems like a hacky workaround.
I am currently running Neptune in a Kedro environment with TensorFlow Keras, initialising the Neptune client inside my Kedro node, but I am still getting the error logs below.
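For context, a stripped-down sketch of what the training node looks like (the project name, parameters, and model below are placeholders, not my actual pipeline):

```python
import neptune
import tensorflow as tf
from neptune.integrations.tensorflow_keras import NeptuneCallback


def train_model(train_data, train_labels, params: dict):
    """Kedro node: trains a Keras model and logs to Neptune."""
    # The Neptune run is created inside the node, so every parallel
    # execution of this node starts its own client.
    run = neptune.init_run(
        project="my-workspace/my-project",  # placeholder
        tags=["kedro", "keras"],
    )
    run["parameters"] = params

    model = tf.keras.Sequential(
        [
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1),
        ]
    )
    model.compile(optimizer="adam", loss="mse")

    # The Keras integration logs metrics to the run during fit().
    model.fit(
        train_data,
        train_labels,
        epochs=params.get("epochs", 10),
        callbacks=[NeptuneCallback(run=run, base_namespace="training")],
    )

    run.stop()
    return model
```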
Warning: string series 'monitoring/bb773e00/stdout' value was longer than 1000 characters and was truncated. This warning is printed only once per series.
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:28.341Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:28.514Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:28.686Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:28.857Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.031Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.031Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.203Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.376Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.547Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.719Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:29.890Z  (async_operation_processor.py:272)
ERROR  Error occurred during asynchronous operation processing: Timestamp must be non-decreasing for series attribute: monitoring/bb773e00/stdout. Invalid point: 2023-08-28T12:22:30.062Z  (async_operation_processor.py:272)
Unless you have important metadata in your stdout stream, the above errors should not be of much relevance. Since we log console output as a series, the error simply means that stream entries with a timestamp <= the previously logged timestamp in the same field (monitoring/<hash>/stdout) are rejected: https://docs.neptune.ai/help/error_step_must_be_increasing/
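In the meantime, if you don't need the console logs, a couple of ways to avoid these errors (a rough sketch, assuming neptune 1.x with NEPTUNE_PROJECT and NEPTUNE_API_TOKEN set in the environment; the namespace value is only an example) are to disable stdout/stderr capture, or to give each process its own monitoring namespace:

```python
import os

import neptune

# Option 1: skip console/hardware monitoring in worker processes,
# so nothing is written to monitoring/<hash>/stdout at all.
run = neptune.init_run(
    capture_stdout=False,
    capture_stderr=False,
    capture_hardware_metrics=False,
)

# Option 2: keep the monitoring, but give each process its own namespace
# so that parallel workers don't append to the same stdout series.
run = neptune.init_run(
    monitoring_namespace=f"monitoring/{os.getpid()}",  # example value
)
```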
If you do want to fix this, could you please share a reproducible code sample that we can run end-to-end, so we can see where multiple threads are writing to the same field?
Please also share the output of pip list and your system details (number of parallel processes, GPUs, etc.).
It would also help us a lot if you could share a link to a run in the Neptune app where you encountered this error. If you are not comfortable sharing it on GitHub, you can send it to us at [email protected], or through the chat at the bottom right of the Neptune app.