
Cleaning up stale parts #107

Open
ruslandoga opened this issue Sep 7, 2023 · 20 comments
Labels
bug (Something isn't working), help wanted (Extra attention is needed)

Comments

@ruslandoga

ruslandoga commented Sep 7, 2023

👋

Just wanted to quickly check if chdb automatically cleans up stale MergeTree parts after OPTIMIZE TABLE commands. And if so, how does it work?

@ruslandoga ruslandoga added the question (Further information is requested) label Sep 7, 2023
@auxten
Member

auxten commented Sep 7, 2023

No, we haven't done this yet. To be fixed.

@ruslandoga
Author

In that case, this chat with GPT that explains the part naming might be useful: https://chat.openai.com/share/94a7fa2d-5f73-4f9f-b2ba-681ed324ae35

@ruslandoga
Author

ruslandoga commented Sep 7, 2023

Seems like we can find the part with the highest LEVEL, check its MAX_BLOCK_NUMBER, and delete all parts with MAX_BLOCK_NUMBER less than that.
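For illustration, a rough system.parts query along those lines (untested; 'events' is just an example table name, and I've added an active = 0 filter as an extra safety check):

-- candidates for deletion: inactive parts fully covered by the highest-level active part
SELECT name
FROM system.parts
WHERE table = 'events'
  AND active = 0
  AND max_block_number <= (
    SELECT max_block_number
    FROM system.parts
    WHERE table = 'events' AND active = 1
    ORDER BY level DESC
    LIMIT 1
  );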

@auxten
Member

auxten commented Sep 8, 2023

There should be some code in ClickHouse doing that in a background thread pool. We just need to find it out and run it immediately on OPTIMIZE. Any interest in making a patch? @ruslandoga

@auxten auxten added the bug (Something isn't working) and help wanted (Extra attention is needed) labels and removed the question (Further information is requested) label Sep 8, 2023
@ruslandoga
Author

Yes, I'm interested and will try doing it today! :)

@ruslandoga
Author

ruslandoga commented Sep 9, 2023

The logic seems to be in MergeTreeData::clearOldPartsFromFilesystem, but I'm a bit stuck compiling ClickHouse; it's already taking over four hours and I'm at [7671/8349] Building CXX object src/CMakeFiles/dbms.dir/Interpreters/DatabaseAndTableWithAlias.cpp.o 😅

I'm new to C++, so I wonder if there is a faster way to build ClickHouse / chdb just for tests?

@lmangani
Contributor

lmangani commented Sep 9, 2023

Hey @ruslandoga I'm afraid the first build is as painful as you're experiencing (our action takes > 5h to build) but if you add ccache to the mix the next compile and link rounds with minor modifications will be much faster. If you're on our discord feel free to ping on the dev channel and we'll try to assist.

@ruslandoga
Author

I tried to find your discord server but failed :)

@auxten
Member

auxten commented Sep 11, 2023

> I tried to find your discord server but failed :)

lmangani == qxip

@ruslandoga
Author

Thank you @auxten, but Google doesn't return anything for "qxip discord" either.

@auxten
Member

auxten commented Sep 11, 2023

@ruslandoga
Author

ruslandoga commented Sep 11, 2023

It doesn't seem like this link leads me anywhere :) I think it might be specific to your user account. Or maybe I need to be invited first.

@lmangani
Contributor

Hello @ruslandoga, the link and invite to our Discord are in the chdb README, in the contact section. Once you join, you'll find us all on the chdb channels! Looking forward to you joining!

@ruslandoga ruslandoga changed the title from "Cleaning up stale blocks" to "Cleaning up stale parts" Oct 11, 2023
@ruslandoga
Author

ruslandoga commented Oct 11, 2023

A small note on the suggested implementation:

> We just need to find it out and run it immediately on OPTIMIZE.

It seems like ClickHouse cleans old parts ~eight minutes after they become inactive (i.e. they stop being referenced). Apparently it's done this way to make sure the dirty pages have been fsynced (assuming dirty_writeback_centisecs ≈ 5 minutes) and that, in case of a crash before the fsync, the data could be restored. So I guess a custom implementation would need to fsync the new parts and only then clean up the old ones.
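
If I'm reading it right, that delay corresponds to the old_parts_lifetime MergeTree setting (480 seconds by default), so it looks like it could be shortened per table, for example:

-- sketch: keep inactive parts around for only 60 seconds ('events' is an example table name)
ALTER TABLE events MODIFY SETTING old_parts_lifetime = 60;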

I also found in the docs that the old parts can be deleted with SQL, so technically, what I initially wanted could be achieved like this:

INSERT INTO events FORMAT RowBinary <...rowbinary...>;

-- in the background process, every few minutes
SELECT * FROM system.parts WHERE table = 'events'; -- check how many parts there are, if too many, run OPTIMIZE
OPTIMIZE TABLE events;
-- also if there are inactive parts older than 8 minutes, drop them
ALTER TABLE events DROP PART '<part_id goes here>';
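
To make the last step concrete, the inactive parts older than eight minutes could probably be found with something like this (again just a sketch):

-- list inactive parts that have been lying around for more than 8 minutes
SELECT name
FROM system.parts
WHERE table = 'events'
  AND active = 0
  AND modification_time < now() - INTERVAL 8 MINUTE;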

@auxten
Member

auxten commented Oct 13, 2023

@ruslandoga Nice, I think the 8 minutes thing is reasonable in ClickHouse, but not in an embedded database. I would patch chdb to run the cleanup automatically.

@devcrafter

There are MergeTree settings which control usage of fsync(). Please check min_rows_to_fsync_after_merge and/or min_compressed_bytes_to_fsync_after_merge.

The relevant code is here
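
For example, something along these lines should force an fsync after every merge (a sketch only; 'events' and the thresholds are arbitrary examples):

-- sketch: fsync every merged part regardless of its size
ALTER TABLE events
    MODIFY SETTING min_rows_to_fsync_after_merge = 1,
                   min_compressed_bytes_to_fsync_after_merge = 1;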

@devcrafter

> Hey @ruslandoga I'm afraid the first build is as painful as you're experiencing (our action takes > 5h to build) but if you add ccache to the mix the next compile and link rounds with minor modifications will be much faster. If you're on our discord feel free to ping on the dev channel and we'll try to assist.

There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

@lmangani
Contributor

lmangani commented Oct 26, 2023

> There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

I don't see any related issues with the build process, and we have the chdb-builder helper

@devcrafter

devcrafter commented Oct 26, 2023

> > There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build
>
> I don't see any related issues with the build process, and we have the chdb-builder helper

I had no idea you have it. I just read the comment and thought this should be covered by the build documentation. Just trying to help.

@poundifdef

> Seems like we can find the part with the highest LEVEL, check its MAX_BLOCK_NUMBER, and delete all parts with MAX_BLOCK_NUMBER less than that.

Would this be safe to do?
