@@ -50,7 +50,7 @@ Or add an entry under the `[dependencies]` section in your `Cargo.toml`:
# Cargo.toml

[dependencies]
-# Available features: `yield`, `barging`, `thread_local` and `lock_api`.
+# Features: `yield`, `barging`, `thread_local` and `lock_api`.
mcslock = { version = "0.4", features = ["thread_local"] }
```
@@ -75,7 +75,7 @@ module for more information.
use std::sync::Arc;
use std::thread;

-// `spins::Mutex` simply spins during contention.
+// Simply spins during contention.
use mcslock::raw::{spins::Mutex, MutexNode};

fn main() {
@@ -86,31 +86,29 @@ fn main() {
        // A queue node must be mutably accessible.
        // Critical section must be defined as a closure.
        let mut node = MutexNode::new();
-        c_mutex.lock_with_then(&mut node, |data| {
-            *data = 10;
-        });
+        c_mutex.lock_with_then(&mut node, |data| *data = 10);
    })
    .join().expect("thread::spawn failed");

-    // A node is transparently allocated in the stack.
+    // A node may also be transparently allocated in the stack.
    // Critical section must be defined as a closure.
-    assert_eq!(*mutex.try_lock_then(|data| *data.unwrap()), 10);
+    assert_eq!(mutex.try_lock_then(|data| *data.unwrap()), 10);
}
```

## Thread local queue nodes

-Enables [`raw::Mutex`] locking APIs that operate over queue nodes that are
-stored at the thread local storage. These locking APIs require a static
-reference to a [`raw::LocalMutexNode`] key. Keys must be generated by the
-[`thread_local_node!`] macro. Thread local nodes are not `no_std` compatible
-and can be enabled through the `thread_local` feature.
+[`raw::Mutex`] supports locking APIs that access queue nodes that are stored in
+thread local storage. These locking APIs require a static reference to a
+[`raw::LocalMutexNode`] key. Keys must be generated by the [`thread_local_node!`]
+macro. Thread local nodes are not `no_std` compatible and can be enabled through
+the `thread_local` feature.

```rust
use std::sync::Arc;
use std::thread;

-// `spins::Mutex` simply spins during contention.
+// Simply spins during contention.
use mcslock::raw::spins::Mutex;

// Requires `thread_local` feature.
@@ -127,9 +125,9 @@ fn main() {
    })
    .join().expect("thread::spawn failed");

-    // Local node handles are provided by reference.
+    // A node may also be transparently allocated in the stack.
    // Critical section must be defined as a closure.
-    assert_eq!(mutex.try_lock_with_local_then(&NODE, |data| *data.unwrap()), 10);
+    assert_eq!(mutex.try_lock_then(|data| *data.unwrap()), 10);
}
```

@@ -146,19 +144,19 @@ use std::sync::Arc;
use std::thread;

// Requires `barging` feature.
-// `spins::backoff::Mutex` spins with exponential backoff during contention.
+// Spins with exponential backoff during contention.
use mcslock::barging::spins::backoff::Mutex;

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
-        *c_mutex.lock() = 10;
+        *c_mutex.try_lock().unwrap() = 10;
    })
    .join().expect("thread::spawn failed");

-    assert_eq!(*mutex.try_lock().unwrap(), 10);
+    assert_eq!(*mutex.lock(), 10);
}
```

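The `backoff` variant above differs from plain `spins` only in how it waits between failed attempts. A minimal sketch of the usual exponential backoff policy follows; the `Backoff` type and the doubling cap are this sketch's assumptions, not mcslock's actual tuning.

```rust
use std::hint::spin_loop;

// Exponential backoff: spin 2^step times, doubling the wait up to a cap.
struct Backoff {
    step: u32,
}

impl Backoff {
    // The cap of 2^6 = 64 iterations is this sketch's choice.
    const MAX_SHIFT: u32 = 6;

    fn new() -> Self {
        Backoff { step: 0 }
    }

    // Wait a little longer than last time; returns the spin count used.
    fn snooze(&mut self) -> u32 {
        let spins = 1u32 << self.step;
        for _ in 0..spins {
            spin_loop();
        }
        if self.step < Self::MAX_SHIFT {
            self.step += 1;
        }
        spins
    }
}

fn main() {
    let mut backoff = Backoff::new();
    let waits: Vec<u32> = (0..8).map(|_| backoff.snooze()).collect();
    // Doubles on every failed attempt, then saturates at the cap.
    println!("{:?}", waits); // [1, 2, 4, 8, 16, 32, 64, 64]
}
```

Backing off reduces contention on the lock word: threads that just failed retry less and less often, which tends to raise throughput under heavy contention at the cost of latency jitter.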
@@ -176,21 +174,24 @@ of busy-waiting during lock acquisitions and releases, this will call
OS scheduler. This may cause a context switch, so you may not want to enable
this feature if your intention is to actually do optimistic spinning. The
default implementation calls [`core::hint::spin_loop`], which does in fact
-just simply busy-waits. This feature **is not** `no_std` compatible.
+just busy-wait. This feature is not `no_std` compatible.

### thread_local

The `thread_local` feature enables [`raw::Mutex`] locking APIs that operate over
queue nodes that are stored at the thread local storage. These locking APIs
-require a static reference to a [`raw::LocalMutexNode`] key. Keys must be generated
-by the [`thread_local_node!`] macro. This feature **is not** `no_std` compatible.
+require a static reference to [`raw::LocalMutexNode`] keys. Keys must be generated
+by the [`thread_local_node!`] macro. This feature also enables memory optimizations
+for [`barging::Mutex`] and locking operations. This feature is not `no_std`
+compatible.

### barging

-The `barging` feature provides locking APIs that are compatible with the [lock_api]
-crate. It does not require node allocations from the caller. The [`barging`] module
-is suitable for `no_std` environments. This implementation **is not** fair (does not
-guarantee FIFO), but can improve throughput when the lock is heavily contended.
+The `barging` feature provides locking APIs that are compatible with the
+[lock_api] crate. It does not require node allocations from the caller.
+The [`barging`] module is suitable for `no_std` environments. This implementation
+is not fair (does not guarantee FIFO), but can improve throughput when the lock
+is heavily contended.

### lock_api

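The `yield` feature described above swaps the relax primitive used while waiting: `std::thread::yield_now` instead of `core::hint::spin_loop`. The two strategies can be sketched as interchangeable functions; the `wait_until` loop here is illustrative, not the crate's internal code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

// Busy-wait for a flag, parameterized over the relax strategy:
// `std::hint::spin_loop` (default) or `std::thread::yield_now` (`yield`).
fn wait_until(flag: &AtomicBool, relax: fn()) {
    while !flag.load(Ordering::Acquire) {
        relax();
    }
}

fn main() {
    let flag = AtomicBool::new(false);
    thread::scope(|s| {
        s.spawn(|| {
            thread::sleep(Duration::from_millis(10));
            flag.store(true, Ordering::Release);
        });
        // Yielding lets the setter thread run even on a single core,
        // at the cost of possible context switches.
        wait_until(&flag, thread::yield_now);
    });
    assert!(flag.load(Ordering::Relaxed));
    println!("woken after yield-based wait");
}
```

`spin_loop` keeps the waiter on-CPU (good for short critical sections on multicore machines); `yield_now` hands the core back to the scheduler (safer when waiters can outnumber cores).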
@@ -208,9 +209,8 @@ this crate MSRV substantially, it just has not been explored yet.

## Related projects

-These projects provide MCS lock implementations with slightly different APIs,
-implementation details or compiler requirements, you can check their
-repositories:
+These projects provide MCS lock implementations with different APIs, capabilities,
+implementation details or compiler requirements; you can check their repositories:

- mcs-rs: <https://github.com/gereeter/mcs-rs>
- libmcs: <https://github.com/topecongiro/libmcs>
@@ -270,4 +270,3 @@ each of your dependencies, including this one.
[lock_api]: https://docs.rs/lock_api/latest/lock_api
[`RawMutex`]: https://docs.rs/lock_api/latest/lock_api/trait.RawMutex.html
[`RawMutexFair`]: https://docs.rs/lock_api/latest/lock_api/trait.RawMutexFair.html
-[`parking_lot::Mutex`]: https://docs.rs/parking_lot/latest/parking_lot/type.Mutex.html