Fix possible inconsistency between bitmap LOV table and index #838
Merged
Conversation
my-ship-it previously approved these changes (Jan 3, 2025)
There was an issue reported and fixed by #11378 where the `kill_9_segment_processes` function had a bug and could not get the postgres pids to kill (more details in the mentioned PR). Although that fix was correct, it has some issues:

- It is not reliable to use the grep command `ps ux | grep "[p]ostgres:\s*%s" | awk '{print $2}'` to get a list of postgres processes, since the pattern can change in the future and the output of the command also depends on the OS.
- The function never considered execution on remote hosts, so it would never work for standby shutdown when the standby is on another host.

To address these issues, the following changes are made:

- Use the postmaster pid to collect the list of all its child processes (implemented in `get_postgres_segment_processes`) before stopping the postgres server; `kill_9_segment_processes` then uses this list at the end instead of the grep command.
- Modify `kill_9_segment_processes` so that it can also be executed remotely.
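The idea behind collecting children from the postmaster pid (rather than grepping `ps` output) can be sketched as follows. This is a hypothetical illustration, not Greenplum's actual code: the process table is modeled as a plain pid-to-parent-pid mapping, and we walk it to find every descendant of the postmaster.

```python
# Hypothetical sketch: collect every descendant of the postmaster pid from a
# process table (pid -> parent pid). Recording this list *before* shutdown is
# what lets kill -9 target the right processes afterwards, with no dependence
# on ps output formats or process-title patterns.

def descendants(proc_table, root_pid):
    """Return, sorted, all pids whose ancestor chain leads to root_pid."""
    children = {}
    for pid, ppid in proc_table.items():
        children.setdefault(ppid, []).append(pid)
    result = []
    stack = [root_pid]
    while stack:
        pid = stack.pop()
        for child in children.get(pid, []):
            result.append(child)
            stack.append(child)
    return sorted(result)

# Example: postmaster 100 forked backends 101 and 102; 102 forked 103.
# Process 999 belongs to an unrelated tree and must not be returned.
table = {101: 100, 102: 100, 103: 102, 999: 1}
print(descendants(table, 100))  # [101, 102, 103]
```

On a real host the pid-to-parent mapping would come from the OS (e.g. `/proc/<pid>/stat` on Linux), which is more robust than matching process titles with grep.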
Before this commit, Greenplum didn't treat `internalPQconninfoOption` and `PQconninfoOption` differently on the backend side, which is why `connofs` was added to `PQconninfoOption`. Nowadays it is neither needed nor actually used, so just remove it.

commit 510a20b
Author: Adam Lee <[email protected]>
Date: Mon Oct 23 14:40:11 2017 +0800

    Retire gp_libpq_fe part 1, libpq itself
The workflow of `mop_high_watermark` was not clear enough; I initially thought it was useless. Add some comments for others.
Use TRUSTED_SHELL from the configuration file when running gpinitsystem.
Fix a bug in the upstream-merged COMMIT AND CHAIN feature in GPDB. The feature was added upstream in https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=280a408b48d5ee42969f981bceb9e9426c3a344c. We need to adapt the distributed transaction block context/state to support this feature in MPP. In this PR, we set up the DtxContext again before starting a "chained" transaction so that the new transaction block state keeps a correct distributed context, the same as what we do in StartTransactionCommand(). Block states for which we need to set up the DtxContext for a chained transaction include:

- TBLOCK_END
- TBLOCK_ABORT_END
- TBLOCK_ABORT_PENDING
- TBLOCK_SUBCOMMIT
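The rule above can be modeled as a simple membership check. This is a hypothetical sketch (the function name and boolean interface are illustrative, not Greenplum's API); it only encodes the block states listed in the description.

```python
# Hypothetical model of the fix: when a transaction ends with
# COMMIT AND CHAIN / ROLLBACK AND CHAIN, the DtxContext must be set up again
# for these block states, just as StartTransactionCommand() does for a
# freshly started transaction.

CHAIN_STATES_NEEDING_DTX_SETUP = {
    "TBLOCK_END",
    "TBLOCK_ABORT_END",
    "TBLOCK_ABORT_PENDING",
    "TBLOCK_SUBCOMMIT",
}

def needs_dtx_setup(block_state, chain_requested):
    """Should DtxContext be set up before starting the chained transaction?"""
    return chain_requested and block_state in CHAIN_STATES_NEEDING_DTX_SETUP

print(needs_dtx_setup("TBLOCK_END", True))         # True
print(needs_dtx_setup("TBLOCK_INPROGRESS", True))  # False
```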
When we build a bitmap index on a heap table, we need to scan all the tuples. Usually the scan starts at block 0 and proceeds to the end. But if sync scan is allowed, the table is scanned circularly: from some block X to the end, and then from block 0 to X-1, so that all rows are visited while still participating in the common scan. This can produce TIDs that are out of order during the bitmap index build, which throws an error in the current implementation. So this PR forbids sync scan when building a bitmap index. In addition, a heap table can contain HOT chains; the scan then returns the root tuple's TID, which can also make the scanned TIDs out of order.
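The ordering problem can be seen with a toy model of block numbers (assumed numbering for illustration, not Greenplum code): a sync scan that starts mid-table wraps around, so the visited block sequence is not monotonically increasing, and the TIDs derived from it arrive out of order.

```python
# Toy illustration: blocks visited by a circular (sync) scan of `nblocks`
# blocks that starts at block `start` instead of block 0.

def sync_scan_blocks(nblocks, start):
    """Return the block numbers in the order a circular scan visits them."""
    return [(start + i) % nblocks for i in range(nblocks)]

blocks = sync_scan_blocks(8, start=5)
print(blocks)  # [5, 6, 7, 0, 1, 2, 3, 4]

# The wrap-around breaks monotonic order, which is what the bitmap index
# build requires of incoming TIDs.
in_order = all(a < b for a, b in zip(blocks, blocks[1:]))
print(in_order)  # False
```

Starting at block 0 (`start=0`) yields the usual monotonic sequence, which is why disabling sync scan avoids the error.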
If a DELETE operation is generated by a SplitUpdate node (a SplitUpdate node is generated when updating partition keys), we raise an error. The root cause is that SplitUpdate splits the original tuple into two tuples: one for deleting and one for inserting. We can skip the deleting tuple if we find the tuple was updated concurrently by another transaction, but it is difficult to skip the inserting tuple, so the update would leave extra tuples behind.
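A hypothetical model of why skipping only the DELETE half is unsafe (illustrative names, not Greenplum code): SplitUpdate emits a (DELETE, INSERT) pair per updated row, and dropping just the DELETE leaves a surplus INSERT.

```python
# Hypothetical model: SplitUpdate turns one UPDATE of a partition key into a
# DELETE tuple plus an INSERT tuple.

def split_update(row):
    """One updated row becomes a (DELETE, INSERT) pair of operations."""
    return [("DELETE", row), ("INSERT", row)]

ops = split_update("r1")

# If a concurrent transaction already updated the row, the DELETE half could
# be skipped -- but the INSERT half still goes through, producing one extra
# row instead of a clean no-op.
applied = [op for op in ops if op[0] != "DELETE"]
print(applied)  # [('INSERT', 'r1')]
```

This asymmetry is why the implementation raises an error for this case rather than silently producing the wrong row count.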
yjhjstz approved these changes (Jan 3, 2025)
my-ship-it approved these changes (Jan 7, 2025)
Test Plan
make installcheck
make -C src/test installcheck-cbdb-parallel