Kernel: Allow acquiring the same LockRank multiple times #25932

Open
wants to merge 4 commits into
base: master

Conversation

Hendiadyoin1
Contributor

Revival of #20619

Will likely conflict with #25931

@github-actions github-actions bot added the 👀 pr-needs-review PR needs review from a maintainer or community member label May 17, 2025
Otherwise we'd fail on acquiring the process level lockrank twice during
jail propagation in `sys$fork`.

Also while we are at it, make the VERIFYs neat little PANICs with good
debug messages.
This delays its destruction until the end of the function, which is
necessary to avoid taking a mutex during said destruction, while under
a spinlock.
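The commit messages above describe the core change: re-acquiring the rank a thread already holds should be counted rather than trip an assertion. As a standalone userspace sketch (hypothetical names like `RankTracker`, `held_mask`, and `same_rank_count`; not the kernel's actual `Thread::track_lock_acquire`), the rule might look like:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Hypothetical model of a rank tracker that tolerates re-acquiring the
// rank it already holds, as the Process rank is during jail propagation
// in sys$fork. Not the actual SerenityOS code.
enum class LockRank : int {
    None = 0x000,
    Process = 0x010,
};

struct RankTracker {
    int held_mask { 0 };
    int same_rank_count { 0 };

    // Mirrors the bool return of track_lock_acquire: true when this
    // acquisition changed the mask and must be undone on release.
    bool acquire(LockRank rank)
    {
        if (rank == LockRank::None)
            return false; // Nothing to do for locks without a rank.
        if (held_mask == static_cast<int>(rank)) {
            // Same rank taken again: count it instead of asserting.
            ++same_rank_count;
            return false;
        }
        if (held_mask > static_cast<int>(rank)) {
            // The PR turns the old VERIFY here into a PANIC with a
            // descriptive message; a plain abort stands in for that.
            fprintf(stderr, "rank order violation\n");
            abort();
        }
        held_mask |= static_cast<int>(rank);
        return true;
    }
};
```

The first acquisition raises the mask and returns true; a second acquisition of the same rank only bumps the counter, which is what lets `sys$fork` take the process-level rank twice.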
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from 8442d89 to 080c7a7 Compare June 5, 2025 21:01
This helps to enforce that we do not take Mutexes under SpinLocks.
They does not have a lasting effect on the LockRank while a Mutex is
held, but only during acquisition and release.
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from 080c7a7 to 1c2574b Compare June 5, 2025 21:21
@Hendiadyoin1
Contributor Author

Hendiadyoin1 commented Jun 5, 2025

New this iteration:
Random KUBSAN fixes,
will move those to a separate PR Soon™, as well as stop lying to userspace about it (hopefully)

@Hendiadyoin1 Hendiadyoin1 marked this pull request as draft June 5, 2025 23:19
@github-actions github-actions bot removed the 👀 pr-needs-review PR needs review from a maintainer or community member label Jun 5, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from a6a7a26 to adcd48b Compare June 6, 2025 18:29
@Hendiadyoin1 Hendiadyoin1 marked this pull request as ready for review June 6, 2025 18:32
@github-actions github-actions bot added the 👀 pr-needs-review PR needs review from a maintainer or community member label Jun 6, 2025
@Hendiadyoin1 Hendiadyoin1 force-pushed the Make-lockranks-great-again branch from adcd48b to 1c2574b Compare June 6, 2025 18:32
Collaborator

@kleinesfilmroellchen kleinesfilmroellchen left a comment


Just notes, LGTM

@@ -34,6 +34,9 @@ enum class LockRank : int {
// Process locks are the highest rank, as they normally are taken
// first thing when processing syscalls.
Process = 0x010,

// Mutexes need to be taken before spinlocks, so they get their own lock-rank
Mutex = 0x020,
Collaborator

Note to self: We should reorganize this bitfield such that lowest 16 bits are reserved for spinlocks (and 0x8000 is the Process lock) and highest 16 bits are reserved for mutexes. If you want you can already make this 0x1'0000 even though the circular dependency prevents any rank differentiation on Mutex right now.
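The reorganization proposed in this note could be sketched as follows. This is a hypothetical layout, not committed code; the rank values and the `is_mutex_rank` helper are made up to illustrate the suggested split:

```cpp
#include <cassert>

// Hypothetical sketch of the layout proposed above: the low 16 bits are
// reserved for spinlock ranks (with 0x8000 as the Process lock) and the
// high 16 bits for mutex ranks.
enum class LockRank : int {
    None = 0x0000,
    // ... further spinlock ranks would live in 0x0001..0x4000 ...
    Process = 0x8000,
    Mutex = 0x1'0000, // First mutex rank, as the note suggests.
};

// With such a split, "is this rank a mutex rank?" becomes a mask test,
// which would also generalize the Mutex check in track_lock_acquire.
constexpr bool is_mutex_rank(LockRank rank)
{
    return (static_cast<int>(rank) & 0xFFFF'0000) != 0;
}
```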

Contributor Author

Currently Mutexes don't have any lasting effect on the lock rank, as this technically is the lock rank of the internal spinlock of the mutex...

@@ -1401,6 +1401,10 @@ bool Thread::track_lock_acquire(LockRank rank)
// Nothing to do for locks without a rank.
if (rank == LockRank::None)
return false;
if (rank == LockRank::Mutex && m_lock_rank_mask != LockRank::None) {
Collaborator

This check could be generalized if we had a notion in the bitfield about which ranks are for mutexes and which are for spinlocks :^) (not necessary for now)

@kleinesfilmroellchen kleinesfilmroellchen added ✅ pr-community-approved PR has been approved by a community member and removed 👀 pr-needs-review PR needs review from a maintainer or community member labels Jun 9, 2025

stale bot commented Jul 2, 2025

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions!

@stale stale bot added the stale label Jul 2, 2025
@@ -13,6 +13,11 @@

namespace Kernel {

struct [[nodiscard]] SpinLockKey {
Member

s/SpinLock/Spinlock
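The key type itself is only partially visible in this page; the following is a hypothetical reconstruction inferred from the two fields the diff reads back (`key.interrupts_state`, `key.affect_lock_rank`) and from the `[[nodiscard]]` attribute shown above. The actual definition in the PR may differ:

```cpp
#include <cassert>

enum class InterruptsState {
    Enabled,
    Disabled,
};

// Hypothetical shape of the key returned by Spinlock::lock().
struct [[nodiscard]] SpinlockKey {
    InterruptsState interrupts_state { InterruptsState::Enabled };
    // Whether this acquisition actually changed the thread's lock rank,
    // and therefore whether unlock() has to lower it again.
    bool affect_lock_rank { false };
};

// [[nodiscard]] makes the compiler warn if lock()'s return value is
// dropped, so callers can't silently forget to hand the key back to
// unlock(). A stub stands in for the real lock() here.
SpinlockKey lock_stub()
{
    return { InterruptsState::Disabled, true };
}
```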

auto key = g_scheduler_lock.lock();
VERIFY(key.interrupts_state == InterruptsState::Disabled);
// FIXME: Remove the None check
VERIFY(key.affect_lock_rank || g_scheduler_lock.rank() == LockRank::None);
Member

What is the VERIFY here supposed to check?
(same for all other checks like this one)

Contributor Author

Good question, I don't remember that anymore, will check that again

@@ -184,7 +184,10 @@ UNMAP_AFTER_INIT void Scheduler::start()

// We need to acquire our scheduler lock, which will be released
// by the idle thread once control transferred there
-    g_scheduler_lock.lock();
+    auto key = g_scheduler_lock.lock();
+    // FIXME: InterruptsEnabled check
Member

Why can't you check if interrupts were disabled here?
(same for the other "InterruptsEnabled check" fixme in this file)

@@ -336,7 +339,7 @@ void Scheduler::leave_on_first_switch(InterruptsState previous_interrupts_state)
// At this point, enter_current has already be called, but because
// Scheduler::context_switch is not in the call stack we need to
// clean up and release locks manually here
-    g_scheduler_lock.unlock(previous_interrupts_state);
+    g_scheduler_lock.unlock({ .interrupts_state = previous_interrupts_state, .affect_lock_rank = true });
Member

Are we guaranteed to always be on a different lock rank here?

     // Verify we are only attempting to take a lock of a higher rank.
-    VERIFY(m_lock_rank_mask > rank);
+    if (m_lock_rank_mask > rank)
+        PANIC("Acquiring lock of higher rank than already held, currently held: {:#05b}, to-be-acquired: {:#05b}", (int)m_lock_rank_mask, (int)rank);
Member

nit: "Attempting to acquire a lock of higher rank than the one currently held. Currently held: {:#b}, to be acquired: {:#b}."

The maximum lock rank might change, so I wouldn't use a hard-coded minimum length for these fields. These hyphens also feel out of place.

Member

nit: commit message: "They does" should be "This does", I think.
and s/SpinLock/Spinlock

@@ -34,6 +34,9 @@ enum class LockRank : int {
// Process locks are the highest rank, as they normally are taken
// first thing when processing syscalls.
Process = 0x010,

// Mutexes need to be taken before spinlocks, so they get their own lock-rank
Member

nit: s/lock-rank/LockRank. The hyphen looks weird.
Also please end all of your comments with a period (https://github.com/SerenityOS/serenity/blob/master/Documentation/CodingStyle.md#comments).

@@ -1401,6 +1401,10 @@ bool Thread::track_lock_acquire(LockRank rank)
// Nothing to do for locks without a rank.
if (rank == LockRank::None)
return false;
if (rank == LockRank::Mutex && m_lock_rank_mask != LockRank::None) {
// If we already hold any Spinlock, we can't acquire a mutex.
PANIC("Trying to acquire a mutex while holding a lock of higher rank: {:#05b}", (int)m_lock_rank_mask);
Member

s/:#05b/:#b
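The hunk above makes taking a mutex while any spinlock rank is held a hard error, since acquiring a mutex may block and sleeping under a spinlock is not allowed. A minimal standalone model of that check (hypothetical names; the kernel PANICs where this returns false):

```cpp
#include <cassert>

// Not the kernel's Thread class; just the predicate behind the hunk's
// `rank == LockRank::Mutex && m_lock_rank_mask != LockRank::None` check.
enum class LockRank : int {
    None = 0x000,
    Process = 0x010,
    Mutex = 0x020,
};

// A mutex may only be acquired when no ranked spinlock is held, because
// the mutex acquisition can sleep while a spinlock may not.
bool may_acquire_mutex(LockRank held_mask)
{
    return held_mask == LockRank::None;
}
```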

Comment on lines +50 to +51
// FIXME: This should have a lasting effect on the threads lock-rank
// and therefore return some sort of key for unlocking
Member

So do you want to give Mutexes multiple lock ranks as well?

Contributor Author

Would probably make sense in the future

Comment on lines +32 to +34
// Note: The internal Spinlock might have a different Rank than the Mutex
// Note: Making this a template would currently mess with Threads and our implementation
// As we currently include Thread.h and Thread uses Mutexes in its API
Member

1st note: Does this refer to something that should be done in the future? Because currently the spinlocks can't have a different rank.
2nd note: Does this cause an include cycle or what do you mean?

Contributor Author

IIRC Mutexes are part of the out-of-line API of Thread, so slapping a template on them would make that harder

@stale stale bot removed stale labels Jul 3, 2025
Labels
✅ pr-community-approved PR has been approved by a community member