Releases · async-raft/async-raft
async-raft v0.6.1 | Bug Fixes

fixed

- Fixed #76 by moving the process of replicating log entries to the state machine off of the main task. This ensures that the process never blocks the main task. This also includes a few nice optimizations mentioned in the changelog.

memstore-v0.2.0
memstore 0.2.0

changed

- Updated async-raft dependency to `0.6.0` & updated storage interface as needed.
async-raft-v0.6.0
async-raft 0.6.0
The big news for this release is that we are now based on Tokio 1.0! Big shoutout to @xu-cheng for doing all of the heavy lifting for the Tokio 1.0 update, along with many other changes which are part of this release.
It is important to note that 0.6.0 does include two breaking changes from 0.5: the new `RaftStorage::ShutdownError` associated type, and Tokio 1.0. Both of these changes are purely code related, and it is not expected that they will negatively impact running systems.
changed

- Updated to Tokio 1.0!
- BREAKING: this introduces a `RaftStorage::ShutdownError` associated type. This allows the Raft system to differentiate between fatal storage errors, which should cause the system to shut down, and errors which should be propagated back to the client for application-specific error handling. These changes only apply to the `RaftStorage::apply_entry_to_state_machine` method. See the sketch after this list.
- A small change to Raft startup semantics. When a node comes online and successfully recovers state (the node was already part of a cluster), the node will start with a 30 second election timeout, ensuring that it does not disrupt a running cluster.
- #89 removes the `Debug` bounds requirement on the `AppData` & `AppDataResponse` types.
- The `Raft` type can now be cloned. The clone is very cheap and helps to facilitate async workflows while feeding client requests and Raft RPCs into the Raft instance.
- The `Raft.shutdown` interface has been changed slightly. Instead of returning a `JoinHandle`, the method is now async and simply returns a result.
- The `ClientWriteError::ForwardToLeader` error variant has been modified slightly. It now exposes the data (generic type `D`) of the original client request directly. This ensures that the data can actually be used for forwarding, if that is what the parent app wants to do.
- Implemented #12. This is a pretty old issue and a pretty solid optimization. The previous implementation would go to storage (typically disk) every time entries needed to be replicated to the state machine. Now, entries are cached as they come in from the leader, and only the cache is used as the source of data. A few simple measures are needed to ensure this is correct, as the leader entry replication protocol takes care of most of the work for us in this case.
- Updated / clarified the interface for log compaction. See the guide or the updated `do_log_compaction` method docs for more details.
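For readers implementing `RaftStorage`, here is a minimal, non-authoritative sketch of what plugging in the new associated type might look like. The `MyShutdownError` type and its variant are hypothetical; consult the `RaftStorage` docs for the authoritative trait definition.

```rust
use std::error::Error;
use std::fmt;

/// Hypothetical fatal-error type an application might plug into the new
/// `RaftStorage::ShutdownError` associated type. Only errors of this type
/// signal that Raft should shut down; anything else returned from
/// `apply_entry_to_state_machine` is propagated back to the client.
#[derive(Debug)]
pub enum MyShutdownError {
    /// Unrecoverable storage failure, e.g. a corrupted log segment.
    DiskCorrupted(String),
}

impl fmt::Display for MyShutdownError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::DiskCorrupted(msg) => write!(f, "disk corrupted: {}", msg),
        }
    }
}

impl Error for MyShutdownError {}

// Inside an application's `RaftStorage` implementation, this would be wired
// up as:
//
//     type ShutdownError = MyShutdownError;
//
// so that fatal and recoverable failures are carried by distinct types.
```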
added

- #97 adds the new `Raft.current_leader` method. This is a convenience method which builds upon the Raft metrics system to quickly and easily identify the current cluster leader. See the usage sketch below.
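A hedged usage sketch, assuming an already-running `Raft` instance and generic application types; the `route_request` function and its routing logic are illustrative, not part of the crate, and import paths should be checked against the 0.6 docs.

```rust
use async_raft::{AppData, AppDataResponse, Raft, RaftNetwork, RaftStorage};

/// Sketch only: route a request based on the current leader. `current_leader`
/// builds on the metrics system and returns the leader's node ID, if known.
async fn route_request<D, R, N, S>(raft: &Raft<D, R, N, S>, my_id: u64)
where
    D: AppData,
    R: AppDataResponse,
    N: RaftNetwork<D>,
    S: RaftStorage<D, R>,
{
    match raft.current_leader().await {
        Some(leader) if leader == my_id => {
            // This node is the leader: handle the request locally.
        }
        Some(leader) => {
            // Forward the request to `leader` over the app's network layer.
            let _ = leader;
        }
        None => {
            // No leader known (e.g. an election is underway); retry later.
        }
    }
}
```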
fixed
- Fixed #98 where heartbeats were being passed along into the log consistency check algorithm. This had the potential to cause a Raft node to go into shutdown under some circumstances.
- Fixed a bug where the timestamp of the last received heartbeat from a leader was not being stored, resulting in degraded cluster stability under some circumstances.
async-raft-v0.6.0-alpha.2
added
- #97 adds the new `Raft.current_leader` method. This is a convenience method which builds upon the Raft metrics system to quickly and easily identify the current cluster leader.
fixed
- Fixed #98 where heartbeats were being passed along into the log consistency check algorithm. This had the potential to cause a Raft node to go into shutdown under some circumstances.
- Fixed a bug where the timestamp of the last received heartbeat from a leader was not being stored, resulting in degraded cluster stability under some circumstances.
changed

- BREAKING: this introduces a `RaftStorage::ShutdownError` associated type. This allows the Raft system to differentiate between fatal storage errors, which should cause the system to shut down, and errors which should be propagated back to the client for application-specific error handling. These changes only apply to the `RaftStorage::apply_entry_to_state_machine` method.
- A small change to Raft startup semantics. When a node comes online and successfully recovers state (the node was already part of a cluster), the node will start with a 30 second election timeout, ensuring that it does not disrupt a running cluster. This will be replaced by future work to implement the `PreVote` optimization.
- #89 removes the `Debug` bounds requirement on the `AppData` & `AppDataResponse` types.
async-raft-v0.6.0-alpha.1
changed

- The `Raft` type can now be cloned. The clone is very cheap and helps to facilitate async workflows while feeding client requests and Raft RPCs into the Raft instance.
- The `Raft.shutdown` interface has been changed slightly. Instead of returning a `JoinHandle`, the method is now async and simply returns a result.
- The `ClientWriteError::ForwardToLeader` error variant has been modified slightly. It now exposes the data (generic type `D`) of the original client request directly. This ensures that the data can actually be used for forwarding, if that is what the parent app wants to do. A forwarding sketch follows this list.
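A non-authoritative sketch of that forwarding pattern, assuming the variant carries the original payload plus the known leader ID as described above. The `forward_to` helper is hypothetical, and the exact variant shape and import paths should be checked against the 0.6 docs.

```rust
use async_raft::error::ClientWriteError;
use async_raft::raft::ClientWriteRequest;
use async_raft::{AppData, AppDataResponse, NodeId, Raft, RaftNetwork, RaftStorage};

/// Sketch only: submit a write, forwarding to the leader on `ForwardToLeader`.
async fn write_or_forward<D, R, N, S>(raft: &Raft<D, R, N, S>, req: ClientWriteRequest<D>)
where
    D: AppData,
    R: AppDataResponse,
    N: RaftNetwork<D>,
    S: RaftStorage<D, R>,
{
    match raft.client_write(req).await {
        Ok(_resp) => {
            // Committed and applied: reply to the client.
        }
        Err(ClientWriteError::ForwardToLeader(data, Some(leader))) => {
            // The original payload comes back to us, so it can be re-sent
            // to the leader without reconstructing the request.
            forward_to(leader, data).await;
        }
        Err(ClientWriteError::ForwardToLeader(_, None)) => {
            // Leader currently unknown; back off and retry.
        }
        Err(_other) => {
            // Fatal or transient Raft error: surface it to the caller.
        }
    }
}

async fn forward_to<D: AppData>(_leader: NodeId, _data: D) {
    // Hypothetical: application-specific forwarding over the network.
}
```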
memstore-v0.2.0-alpha.0
changed

- Updated async-raft dependency to `0.6.0-alpha.0` & updated storage interface as needed.
async-raft-v0.6.0-alpha.0
changed

- Implemented #12. This is a pretty old issue and a pretty solid optimization. The previous implementation would go to storage (typically disk) every time entries needed to be replicated to the state machine. Now, entries are cached as they come in from the leader, and only the cache is used as the source of data. A few simple measures are needed to ensure this is correct, as the leader entry replication protocol takes care of most of the work for us in this case. A toy illustration of the caching idea follows this list.
- Updated / clarified the interface for log compaction. See the guide or the updated `do_log_compaction` method docs for more details.
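Purely as a toy illustration of the caching idea (not the crate's actual internals; all names here are invented), entries from the leader can be buffered by log index and drained in order once they are known to be committed:

```rust
use std::collections::BTreeMap;

/// Toy model of the idea behind #12: keep entries received from the leader in
/// an in-memory cache so that applying them to the state machine never has to
/// re-read them from disk. Types here are illustrative, not async-raft's.
struct EntryCache<D> {
    entries: BTreeMap<u64, D>, // log index -> payload
}

impl<D> EntryCache<D> {
    fn new() -> Self {
        Self { entries: BTreeMap::new() }
    }

    /// Called as entries arrive from the leader.
    fn insert(&mut self, index: u64, payload: D) {
        self.entries.insert(index, payload);
    }

    /// Drain every cached entry up to (and including) the commit index, in
    /// order, handing each to the apply routine. Anything past the commit
    /// index stays cached until it is known to be committed.
    fn drain_committed(&mut self, commit_index: u64, mut apply: impl FnMut(u64, D)) {
        let uncommitted = self.entries.split_off(&(commit_index + 1));
        for (index, payload) in std::mem::replace(&mut self.entries, uncommitted) {
            apply(index, payload);
        }
    }
}
```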
fixed
- Fixed #76 by moving the process of replicating log entries to the state machine off of the main task. This ensures that the process never blocks the main task. This also includes a few nice optimizations mentioned above.
async-raft-v0.5.5 | QOL Improvements for RaftMetrics
changed

- Added `#[derive(Serialize, Deserialize)]` to `RaftMetrics` & `State`. See the serialization sketch below.
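A brief hedged example of what this enables, assuming `serde_json` as the application's serializer; the function is illustrative, and the exact `RaftMetrics` import path should be checked against the crate docs.

```rust
use async_raft::RaftMetrics;

/// Sketch only: with Serialize derived on `RaftMetrics` (and its `State`
/// field), a metrics observation can be exported directly, e.g. from a
/// monitoring endpoint. `serde_json` is an assumed application dependency.
fn metrics_to_json(metrics: &RaftMetrics) -> serde_json::Result<String> {
    serde_json::to_string_pretty(metrics)
}
```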
async-raft-v0.5.4 | Bug Fix for Client Reads w/ Single Node Cluster
fixed
- Fixed #82 where client reads were not behaving correctly for single node clusters. Single node integration tests have been updated to ensure this functionality is working as needed.
async-raft-v0.5.3 | Bug Fix for Shutdown
fixed
- Fixed #79 ... for real this time! Added an integration test to prove it.