dlt version
dlt 1.8.1
Describe the problem
When running `dlt pipeline drop` against a resource, I'm seeing that the `last_value` and `initial_value` keys of unrelated resources in the state file are being modified. Specifically, values that used to have a timezone offset have the offset stripped in the new state that is saved.
Originally reported via Slack here: https://dlthub-community.slack.com/archives/C04DQA7JJN6/p1741639831133249
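To illustrate the corruption, here is an abbreviated view of one resource's incremental state before and after the drop. The exact nesting and the updated_at cursor field are assumptions for illustration; the real state file contains more keys:

# Before `dlt pipeline drop` runs (offsets present):
{"table_2": {"incremental": {"updated_at": {
    "initial_value": "2024-01-03T00:00:00+00:00",
    "last_value": "2024-01-03T00:00:00+00:00"}}}}

# After dropping table_1 (offsets stripped from table_2, which was not dropped):
{"table_2": {"incremental": {"updated_at": {
    "initial_value": "2024-01-03T00:00:00",
    "last_value": "2024-01-03T00:00:00"}}}}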
Expected behavior
When `dlt pipeline drop` is executed against a resource, state is modified ONLY for the resource(s) specified in the CLI command.

Steps to reproduce
1. Run a pipeline with two incremental resources (table_1 and table_2) whose `last_value` and `initial_value` timestamps have an offset specified, as in the state excerpt above; a minimal pipeline sketch follows these steps.
2. Drop table_1 using the dlt CLI:
   dlt pipeline test_pipeline drop --destination duckdb --dataset public table_1
3. Look at the new state, and note that the `initial_value` and `last_value` timestamps for table_2 now have their offsets stripped.
4. Then, the next time you run the pipeline, you'll get a fatal error that looks something like this:

<class 'dlt.extract.incremental.exceptions.IncrementalCursorInvalidCoercion'>
In processing pipe table_2: Could not coerce start_value/initial_value with value 2024-01-03 00:00:00 and type <class 'pendulum.datetime.DateTime'> to actual data item 2024-01-03 00:00:00+00:00 at path updated_at with type DateTime: can't compare offset-naive and offset-aware datetimes. You need to use different data type for start_value/initial_value or cast your data ie. by using `add_map` on this resource.
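For concreteness, here is a minimal sketch of a pipeline matching step 1. The resource bodies, the updated_at cursor field, and the timestamp values are illustrative assumptions, not the reporter's actual code:

import dlt
import pendulum

# Two incremental resources with timezone-aware cursors on `updated_at`.
@dlt.resource(name="table_1", primary_key="id")
def table_1(updated_at=dlt.sources.incremental(
        "updated_at", initial_value=pendulum.datetime(2024, 1, 3, tz="UTC"))):
    yield {"id": 1, "updated_at": pendulum.datetime(2024, 1, 3, tz="UTC")}

@dlt.resource(name="table_2", primary_key="id")
def table_2(updated_at=dlt.sources.incremental(
        "updated_at", initial_value=pendulum.datetime(2024, 1, 3, tz="UTC"))):
    yield {"id": 1, "updated_at": pendulum.datetime(2024, 1, 3, tz="UTC")}

pipeline = dlt.pipeline(
    pipeline_name="test_pipeline", destination="duckdb", dataset_name="public")
pipeline.run([table_1(), table_2()])

With the Local runtime, the state written by this run can be inspected in the pipeline's working directory (by default under ~/.dlt/pipelines/test_pipeline/), which is where the stripped offsets from step 3 show up.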
Operating system
macOS
Runtime environment
Local
Python version
3.11
dlt data source
No response
dlt destination
DuckDB
Other deployment details
No response
Additional information
The issue has been replicated with both the duckdb and redshift destinations.
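As a stopgap until the drop behavior is fixed, the cast suggested by the error message unblocks runs. This is a hypothetical sketch, not part of the original report: it strips the offset from incoming data so it compares cleanly against the now-naive state values, at the cost of discarding timezone information.

def strip_tz(item):
    # Make `updated_at` offset-naive to match the corrupted state values.
    item["updated_at"] = item["updated_at"].replace(tzinfo=None)
    return item

# insert_at=1 places the transform ahead of the incremental step in the pipe,
# so the cast happens before the cursor comparison.
table_2.add_map(strip_tz, insert_at=1)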