Move the docs pipeline to Swift 6.0 toolchains #1192

Merged
6 commits, merged on Mar 21, 2025
2 changes: 0 additions & 2 deletions .github/workflows/pull_request.yml
@@ -10,8 +10,6 @@ jobs:
uses: swiftlang/github-workflows/.github/workflows/soundness.yml@main
with:
license_header_check_project_name: "Swift Distributed Actors"
# We need to move to 6.0 here but have to fix the new warnings first
docs_check_container_image: "swift:5.10-noble"

unit-tests:
name: Unit tests
2 changes: 1 addition & 1 deletion Sources/DistributedCluster/Backoff.swift
@@ -61,7 +61,7 @@ public enum Backoff {
/// - multiplier: multiplier to be applied on each subsequent backoff to the previous backoff interval.
/// For example, a value of 1.5 means that each backoff will increase 50% over the previous value.
/// MUST be `>= 0`.
/// - maxInterval: interval limit, beyond which intervals should be truncated to this value.
/// - capInterval: interval limit, beyond which intervals should be truncated to this value.
/// MUST be `>= initialInterval`.
/// - randomFactor: A random factor of `0.5` results in backoffs between 50% below and 50% above the base interval.
/// MUST be between: `<0; 1>` (inclusive)
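
For orientation, a hedged usage sketch of the factory these parameters belong to (`Backoff.exponential`); the exact signature and the interval type should be verified against the current API:

```swift
import DistributedCluster

// Hypothetical usage sketch only: parameter names follow the doc comment above,
// but the factory's full signature may differ.
let strategy = Backoff.exponential(
    initialInterval: .milliseconds(200), // first backoff
    multiplier: 1.5,                     // each subsequent backoff grows by 50%
    capInterval: .seconds(5),            // intervals beyond this are truncated to it
    randomFactor: 0.25                   // +/-25% jitter around the computed interval
)
```
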
4 changes: 2 additions & 2 deletions Sources/DistributedCluster/Cluster/Cluster+Membership.swift
@@ -118,7 +118,7 @@ extension Cluster {
///
///
/// - Parameters:
/// - statuses: statuses for which to check the members for
/// - status: status for which to check the members for
/// - reachability: optional reachability that the members will be filtered by
/// - Returns: array of members matching those checks. Can be empty.
public func members(withStatus status: Cluster.MemberStatus, reachability: Cluster.MemberReachability? = nil) -> [Cluster.Member] {
@@ -145,7 +145,7 @@ extension Cluster {
/// the passed in `status` and `reachability` status. See ``Cluster/MemberStatus`` to learn more about the meaning of "at least".
///
/// - Parameters:
/// - statuses: statuses for which to check the members for
/// - status: status for which to check the members for
/// - reachability: optional reachability that the members will be filtered by
/// - Returns: array of members matching those checks. Can be empty.
public func members(atLeast status: Cluster.MemberStatus, reachability: Cluster.MemberReachability? = nil) -> [Cluster.Member] {
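
A hedged sketch of the two lookups shown above, applied to a membership snapshot (for example one carried by a cluster event):

```swift
import DistributedCluster

// Hedged sketch: the two lookups from the diff above, applied to a Cluster.Membership snapshot.
func reportMembers(of membership: Cluster.Membership) {
    // Members that are exactly `.up` and currently reachable:
    let upAndReachable = membership.members(withStatus: .up, reachability: .reachable)

    // Members that are `.up` "or further along" in the lifecycle (leaving, down, ...):
    let atLeastUp = membership.members(atLeast: .up)

    print("up & reachable: \(upAndReachable.count), at least up: \(atLeastUp.count)")
}
```
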
6 changes: 1 addition & 5 deletions Sources/DistributedCluster/Docs.docc/ClusterSingleton.md
@@ -1,9 +1,5 @@
# ``DistributedCluster/ClusterSingleton``

@Metadata {
@DocumentationExtension(mergeBehavior: append)
}

Allows for hosting a _single unique instance_ of a distributed actor within the entire distributed actor system,
including automatic fail-over when the node hosting the instance becomes down.

@@ -155,7 +151,7 @@ The allocated singleton instance will get the ``activateSingleton()-9fbad`` meth

Conversely, when the allocation strategy decides that this cluster member is no longer hosting the singleton the ``passivateSingleton()-31z1s`` method will be invoked and the actor will be released. Make sure to not retain the actor or make it perform any decisions which require single-point-of-truth after it has had passivate called on it, as it no longer is guaranteed to be the unique singleton instance anymore.
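
To make the activation and passivation callbacks concrete, here is a hedged sketch; the protocol and callback names follow the references on this page, but their exact signatures are an assumption:

```swift
import DistributedCluster

// Hedged sketch: the callback shapes are assumed and may differ from the actual protocol.
distributed actor Coordinator: ClusterSingleton {
    typealias ActorSystem = ClusterSystem

    var isActiveSingleton = false

    func activateSingleton() async {
        // This member was chosen to host the singleton; it may act as the single source of truth.
        self.isActiveSingleton = true
    }

    func passivateSingleton() async {
        // The singleton is being moved elsewhere; stop making single-point-of-truth decisions.
        self.isActiveSingleton = false
    }
}
```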

## Glossary
### Glossary

- **cluster singleton** - the conceptual "singleton". Regardless on which node it is located we can generally speak in terms of contacting the cluster singleton, by which we mean contacting a concrete active instance, wherever it is currently allocated.
- singleton **instance** - a concrete instance of a distributed actor, allocated as a singleton. In contrast to "cluster singleton", a "cluster singleton instance" refers to a concrete unique instance on a concrete unique member in the cluster.
16 changes: 8 additions & 8 deletions Sources/DistributedCluster/Docs.docc/Clustering.md
@@ -6,7 +6,7 @@ Clustering multiple actor system instances into a single Distributed Actor Syste

In this article, we'll learn how to configure and use multiple ``ClusterSystem`` instances to form a distributed system.

## Initializing a ClusterSystem
### Initializing a ClusterSystem

In this section, we will discuss initializing and using a distributed cluster system.
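
As a quick reference, a minimal initialization could look like the sketch below; the trailing-closure initializer also appears later in these docs, while the settings touched inside are intentionally left open:

```swift
import DistributedCluster

// Minimal sketch: create a cluster system named "FirstSystem".
// The settings closure is optional; concrete setting names may differ between releases.
let system = await ClusterSystem("FirstSystem") { settings in
    // customize bind host/port, TLS, discovery, etc. here
    _ = settings
}
```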

@@ -49,15 +49,15 @@ Declaring a distributed actor is similar to declaring a plain `actor`. We do thi

TODO: documentation of TLS config

## Forming clusters
### Forming clusters

Forming a cluster is the first step towards making use of distributed clusters across multiple nodes.

Once a node joins at least one other node of an already established cluster, it will receive information about all other nodes
which participate in this cluster. This is why often it is not necessary to give all nodes the information about all other nodes in a cluster,
but only attempt to join one or a few of them. The first join "wins" and the cluster welcomes the new node into the ``Cluster/Membership``.

### Joining existing nodes
#### Joining existing nodes

In the simplest scenario we already know about some existing node that we can join to form a cluster, or become part of a cluster that node already is in.

@@ -83,7 +83,7 @@ There is also convenience APIs available on ``ClusterControl`` (`system.cluster`
- ``ClusterControl/joined(endpoint:within:)`` which allows you to suspend until a specific node becomes ``Cluster/MemberStatus/joining`` in the cluster membership, or
- ``ClusterControl/waitFor(_:_:within:)-2aq7r`` which allows you to suspend until a node reaches a specific ``Cluster/MemberStatus``.
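
Putting the calls above together, a hedged sketch (the `Cluster.Endpoint` initializer and the exact `join`/`joined` spellings are assumptions; verify them against ``ClusterControl``):

```swift
import DistributedCluster

// Hedged sketch: join a known seed node, then suspend until it appears in our membership.
let system = await ClusterSystem("JoiningNode")

let seed = Cluster.Endpoint(host: "127.0.0.1", port: 7337) // initializer shape assumed
system.cluster.join(endpoint: seed)

// Waits until the endpoint reaches at least `.joining`, or throws after the timeout.
_ = try await system.cluster.joined(endpoint: seed, within: .seconds(10))
```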

### Automatic Node Discovery
#### Automatic Node Discovery

The cluster system uses [swift-service-discovery](https://github.com/apple/swift-service-discovery) to discover nearby nodes it can connect to. This discovery step is only necessary to find IPs and ports on which we are expecting other cluster actor system instances to be running, the actual joining of the nodes is performed by the cluster itself. It can negotiate, and authenticate the other peer before establishing a connection with it (see also TODO: SECURITY).

@@ -106,7 +106,7 @@ Similarly, you can implement the [ServiceDiscovery](https://github.com/apple/swi
and this will then enable the cluster to locate nodes to contact and join automatically. It also benefits all other uses of service discovery in such new environment,
so we encourage publishing your implementations if you're able to!

## Cluster Events
### Cluster Events

Cluster events are events emitted by the cluster as changes happen to the lifecycle of members of the cluster.
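
A hedged sketch of subscribing to these events; the `events` async sequence and the `.membershipChange` case are assumed from the surrounding documentation:

```swift
import DistributedCluster

// Hedged sketch: consume cluster events as an async sequence; adjust to the real API.
func watchCluster(_ system: ClusterSystem) {
    Task {
        for await event in system.cluster.events {
            switch event {
            case .membershipChange(let change):
                print("membership changed: \(change)")
            default:
                break // snapshots, reachability and leadership changes, etc.
            }
        }
    }
}
```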

@@ -186,7 +186,7 @@ The ``Cluster/Membership`` also offers a number of useful APIs to inspect the me

> A new async/await API might be offered that automates such "await for some node to reach some state" in the future, refer to [#948](https://github.com/apple/swift-distributed-actors/issues/948) for more details.

## Cluster Leadership
### Cluster Leadership

The cluster has a few operations which must be performed in a consistent fashion, such as moving a joining member to the ``Cluster/MemberStatus/up`` state. Other member status changes such as becoming `joining` or `down` do not require such strict decision-making and are disseminated throughout the cluster even without a leader.

@@ -197,7 +197,7 @@ For details, refer to the ``Leadership/LowestReachableMember`` documentation.

You can configure leader election by changing the ``ClusterSystemSettings/autoLeaderElection`` setting while initializing your ``ClusterSystem``.
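
A hedged configuration sketch; the concrete case name and its parameter are assumptions, see ``ClusterSystemSettings/LeadershipSelectionSettings`` for the real options:

```swift
import DistributedCluster

// Hedged sketch: select the "lowest reachable member" election strategy mentioned above.
let system = await ClusterSystem("LeadingSystem") { settings in
    settings.autoLeaderElection = .lowestReachable(minNumberOfMembers: 3)
}
```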

## Customizing Remote Calls
### Customizing Remote Calls

Remote calls are at the heart of what makes distributed actors actually distributed.

@@ -226,7 +226,7 @@ try await RemoteCall.with(timeout: .seconds(5)) {
}
```

### Remote call errors
#### Remote call errors

By default, if a remote call results in an error that is `Codable`, the error is returned as-is. Non-`Codable` errors are
converted to ``GenericRemoteCallError``.
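
A hedged sketch illustrating the difference; only the `Codable` conformance of the error type is load-bearing here:

```swift
import DistributedCluster

// Hedged sketch: a Codable error is transported back to the remote caller as-is,
// while a non-Codable error would surface as GenericRemoteCallError on the caller's side.
struct StorageError: Error, Codable {
    let reason: String
}

distributed actor Storage {
    typealias ActorSystem = ClusterSystem

    distributed func load(key: String) async throws -> String {
        // Thrown across the network: remote callers can catch `StorageError` directly.
        throw StorageError(reason: "no value stored for \(key)")
    }
}
```
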
10 changes: 6 additions & 4 deletions Sources/DistributedCluster/Docs.docc/Contributing.md
@@ -1,18 +1,20 @@
# Contributing

Learn how to contribute to this project.

## Overview

## Testing
### Testing

The cluster is extensively tested, including plain unit tests, tests spanning multiple nodes within the same process, as well as multi-node tests spanning across actor systems running across separate processes.

### Multi-node testing
#### Multi-node testing

Multi node test infrastructure is still in development and may be lacking some features, however its basic premise is to be able to run small "apps" that function as tests, and are automatically deployed to multiple processes.

> Note: Eventually, those processes may actually be located on different physical machines, but this isn't implemented yet.

## Testing logging (LogCapture)
### Testing logging (LogCapture)

As the cluster performs operations "in the background", such as keeping the membership and health information of the cluster up to date, it is very important that log statements it emits in such mode are useful and actionable, but not overwhelming.

@@ -29,4 +31,4 @@ let log = try self.logCapture.awaitLogContaining(testKit, text: "Assign identity

to suspend until the expected log statement is emitted. It is possible to configure more details about the matching as well as timeouts by passing more parameters to this call.

### Testing metrics (MetricsTestKit)
#### Testing metrics (MetricsTestKit)
25 changes: 12 additions & 13 deletions Sources/DistributedCluster/Docs.docc/Introduction.md
@@ -6,7 +6,6 @@ A high-level introduction to distributed actor systems.

Distributed actors extend Swift's "local only" concept of `actor` types to the world of distributed systems.


### Actors

As distributed actors are an extension of Swift's actor based concurrency model, it is recommended to familiarize yourself with Swift actors first, before diving into the world of distributed actors.
@@ -15,7 +14,7 @@ To do so, you can refer to:
- [Concurrency: Actors](https://docs.swift.org/swift-book/LanguageGuide/Concurrency.html#ID645) section of the Swift Book,
- or the [Protect mutable state with Swift actors](https://developer.apple.com/videos/play/wwdc2021/10133/) introduction video from WWDC 2021.

## Thinking in (distributed) actors
### Thinking in (distributed) actors

In order to build distributed systems successfully you will need to get into the right mindset.

@@ -25,7 +24,7 @@ Distribution comes with the added complexity of _partial failure_ of systems. Me

In this section we will try to guide you towards "thinking in actors," but perhaps it’s also best to first realize that: "you probably already know actors!" As any time you implement some form of identity that is given tasks that it should work on, most likely using some concurrent queue or other synchronization mechanism, you are probably inventing some form of actor-like structures there yourself!

## Distributed actors
### Distributed actors

Distributed actors are a type of nominal type in Swift. Similarly to actors, they are introduced using the `distributed actor` pair of keywords.

@@ -41,7 +40,7 @@ distributed actor Greeter {
}
```

### Module-wide default actor system typealias
#### Module-wide default actor system typealias

Instead of declaring the `typealias ActorSystem = ClusterSystem` in every actor you declare, you can instead declare a module-wide `DefaultDistributedActorSystem` typealias instead. Generally it is recommended to keep that type-alias at the default (module wide) access control level, like this:
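
Since the original snippet is collapsed in this view, a minimal sketch of the idea:

```swift
import Distributed
import DistributedCluster

// Declared once per module: distributed actors in this module then default to ClusterSystem
// and no longer need their own `typealias ActorSystem = ClusterSystem`.
typealias DefaultDistributedActorSystem = ClusterSystem
```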

@@ -77,13 +76,13 @@ distributed actor WebSocketWorker {
}
```

### Location Transparency
#### Location Transparency

Distributed actors gain most of their features from the fact that they can be interacted with "the same way" (only by asynchronous and throwing `distributed func` calls), regardless where they are "located". This means that if one is passed an instance of a `distributed actor Greeter` we do not know (by design and on purpose!) if it is really a local instance, or actually only a reference to a remote distributed actor, located on some other host.

This capability along with strong isolation guarantees, enables a concept called [location transparency](https://en.wikipedia.org/wiki/Location_transparency), which is a programming style in which we describe network resources by some form of identity, and not their actual location. In distributed actors, this means that practically speaking, we do not have to know "where" a distributed actor is located. Or in some more advanced patterns, it may actually be "moving" from one host to another, while we still only refer to it using some abstract identifier.

## Distributed actor isolation
### Distributed actor isolation

In order to function properly, distributed actors must impose stronger isolation guarantees than their local-only cousins.

@@ -144,7 +143,7 @@ distributed actor Example {

It is possible to declare nonisolated computed properties as well as methods, and they follow the same rules as such declarations in actor types.
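
For example, a `nonisolated` computed property that only touches the actor's `nonisolated` `id`:

```swift
import DistributedCluster

distributed actor Inspector {
    typealias ActorSystem = ClusterSystem

    // nonisolated members may only access other nonisolated state, such as the actor's `id`.
    nonisolated var description: String {
        "Inspector(\(self.id))"
    }
}
```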

## Distributed actor initialization
### Distributed actor initialization

Distributed actors **must** declare a type of `ActorSystem` they are intended to be used with (which in case of the swift-distributed-actors cluster library is always the ``ClusterSystem``), and initialize the implicit `actorSystem` property that stores the system.

@@ -159,7 +158,7 @@ Worker() // ❌ error: missing argument for 'actorSystem' parameter

Distributed actor initializers are allowed to be throwing, failing, or even `async`. For more details on actor initializer semantics, please refer to [SE-0327: On Actors and Initialization](https://github.com/apple/swift-evolution/blob/main/proposals/0327-actor-initializers.md).
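
A minimal sketch of an initializer that satisfies this requirement:

```swift
import DistributedCluster

distributed actor Worker {
    typealias ActorSystem = ClusterSystem

    // The implicit `actorSystem` property must be initialized; the compiler enforces it.
    init(actorSystem: ActorSystem) {
        self.actorSystem = actorSystem
    }
}

// let worker = Worker(actorSystem: system)  // ✅ ok
// let broken = Worker()                     // ❌ missing 'actorSystem' argument
```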

## Distributed actor methods
### Distributed actor methods

Distributed actors may declare distributed instance methods by prepending the `distributed` keyword in front of a `func` or _computed property_ declaration, like so:
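
Since the original snippet is collapsed in this view, a minimal sketch of such a declaration:

```swift
import DistributedCluster

distributed actor Greeter {
    typealias ActorSystem = ClusterSystem

    // Remote callers invoke this with `try await`; parameters and results must be Codable.
    distributed func greet(name: String) -> String {
        "Hello, \(name)!"
    }
}
```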

@@ -217,11 +216,11 @@ The `self.work(on:)` call still needed to use the `await` keyword, since we were
> try await worker.work(item, settings)
> ```

## Distributed actors conforming to protocols
### Distributed actors conforming to protocols

Distributed actors may conform to `protocol` types, however they face similar restrictions in doing so as local-only `actor` types do.

### Witnessing protocol requirements
#### Witnessing protocol requirements

As distributed actor methods are implicitly asynchronous and throwing when called from the outside of the actor, they can only witness asynchronous and throwing protocol requirements.

@@ -247,7 +246,7 @@ distributed actor Example: SampleProtocol {
}
```

### Synchronous protocol requirements
#### Synchronous protocol requirements

A `distributed actor` may conform to a synchronous protocol requirement **only** with a `nonisolated` computed property or function declaration.

@@ -268,7 +267,7 @@ This is correct, since it is only accessing other `nonisolated` computed propert
>
> This is also how the `Hashable` and `Equatable` protocols are implemented for distributed actors, by delegating to the `self.id` property.

### DistributedActor constrained protocols
#### DistributedActor constrained protocols

Protocols may require that types extending it be distributed actors, this can be expressed using the following:
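
Since the original snippet is collapsed in this view, a minimal sketch (the `where` clause pins down the actor system so that `distributed` requirements are well-formed):

```swift
import Distributed
import DistributedCluster

// Sketch: a protocol that only distributed actors (using the ClusterSystem) can conform to.
protocol Greetable: DistributedActor where ActorSystem == ClusterSystem {
    distributed func greet(name: String) -> String
}
```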

@@ -294,7 +293,7 @@ However, the `DistributedActor` **does not** refine the `Actor` protocol! This i

In practice, we do not see this as a problem, but a natural fallout of the isolation models. If necessary to require a type to be "some actor", please use the `protocol Worker: AnyActor` constraint.

## Where to go from here?
### Where to go from here?

Continue your journey with those articles:

@@ -1,8 +1,6 @@
# ``DistributedCluster/ClusterSystemSettings/LeadershipSelectionSettings``

@Metadata {
@DocumentationExtension(mergeBehavior: append)
}
Configure leadership election using which the cluster leader should be decided.

## Topics

2 changes: 1 addition & 1 deletion Sources/DistributedCluster/Docs.docc/Lifecycle.md
@@ -9,7 +9,7 @@ Monitoring distributed actor lifecycles enables you to react to their terminatio
This is crucial for building robust actor systems which are able to automatically remove e.g. remote worker references as they are confirmed to have terminated.
This can happen if the remote actor is just deinitialized, or if the remote host is determined to be ``Cluster/MemberStatus/down``.

## Lifecycle Watch
### Lifecycle Watch

A distributed actor is able to monitor other distributed actors by making use of the ``LifecycleWatch`` protocol.
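
A hedged sketch of this pattern; `watchTermination(of:)` appears later in this diff, while the `terminated(actor:)` callback name is an assumption based on the ``LifecycleWatch`` documentation:

```swift
import DistributedCluster

distributed actor Worker {
    typealias ActorSystem = ClusterSystem
}

// Hedged sketch: watch hired workers and forget them once they terminate.
distributed actor Boss: LifecycleWatch {
    typealias ActorSystem = ClusterSystem

    var workers: [ClusterSystem.ActorID: Worker] = [:]

    distributed func hire(_ worker: Worker) {
        workers[worker.id] = worker
        watchTermination(of: worker) // start receiving termination signals for this worker
    }

    func terminated(actor id: ClusterSystem.ActorID) {
        workers.removeValue(forKey: id) // drop the reference once termination is confirmed
    }
}
```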

6 changes: 3 additions & 3 deletions Sources/DistributedCluster/Docs.docc/Observability.md
@@ -2,15 +2,15 @@

The cluster system offers a number of built-in observability capabilities about the state of the cluster, as well as distributed actors it manages.

## Logging
### Logging

TODO: Explain `Logger(actor: self)` pattern
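A hedged sketch of that pattern; the `Logger(actor:)` initializer is assumed from the TODO above:

```swift
import Distributed
import DistributedCluster
import Logging

// Hedged sketch: a logger whose metadata identifies this concrete actor instance.
distributed actor Monitored {
    typealias ActorSystem = ClusterSystem

    lazy var log: Logger = Logger(actor: self) // initializer assumed from the pattern named above

    distributed func ping() {
        log.info("pinged")
    }
}
```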

## Metrics
### Metrics

TODO: List the metrics we expose automatically; things like number of active distributed actors, message size, time calls take etc.


## Distributed Tracing
### Distributed Tracing

TODO: Integrate and explain how tracing works in the cluster system.
6 changes: 3 additions & 3 deletions Sources/DistributedCluster/Docs.docc/Receptionist.md
@@ -7,7 +7,7 @@ and communicate with them.
fishy-docs:enable
}

## Receptionist
### Receptionist

Discovering actors is an important aspect of distributed programming, as it is _the_ primary way we can discover actors on other nodes,
and communicate with them.
@@ -55,7 +55,7 @@ Other actors which discover the actor, and want to be informed once the actor ha
> Warning: `DistributedReception.Key`s are likely to be collapsed with ``ClusterSystem/ActorID/Metadata-swift.struct`` during the beta releases.
> See [Make use of ActorTag rather than separate keys infra for reception #950](https://github.com/apple/swift-distributed-actors/issues/950)

### Receptionist: Listings
#### Receptionist: Listings

The opposite of using a receptionist is obtaining a ``DistributedReceptionist/listing(of:file:line:)`` of actors registered with a specific key.
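
A hedged sketch combining a reception key, check-in, and listing; the exact spellings of the key type and the `checkIn`/`listing` calls should be verified against the API:

```swift
import DistributedCluster

distributed actor Worker {
    typealias ActorSystem = ClusterSystem
}

// Hedged sketch: a key under which workers check in and are later discovered.
extension DistributedReception.Key {
    static var workers: DistributedReception.Key<Worker> { "workers" }
}

func example(system: ClusterSystem, worker: Worker) async {
    // Register the worker so other nodes can discover it:
    await system.receptionist.checkIn(worker, with: .workers)

    // Discover workers (including ones checked in later) as an async sequence:
    for await discovered in await system.receptionist.listing(of: .workers) {
        print("discovered worker: \(discovered.id)")
    }
}
```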

@@ -101,7 +101,7 @@ distributed actor Boss: LifecycleWatch {
}
```

### Checking-out from receptionist listings
#### Checking-out from receptionist listings

Checking out of the receptionist is performed automatically when a previously checked-in actor is terminated,
be it by the node that it was located on terminating, or the actor itself being deallocated.
11 changes: 5 additions & 6 deletions Sources/DistributedCluster/Docs.docc/Security.md
@@ -11,7 +11,7 @@ Configuring security aspects of your cluster system.
Securing your cluster system mostly centers around two concepts: making sure you trust your peers and systems which are able to call into the cluster,
and ensuring that messages exchanged are of trusted types.

## Transport Security: TLS
### Transport Security: TLS

> Note: **TODO:** explain configuring TLS

@@ -48,18 +48,18 @@ let tlsExampleSystem = await ClusterSystem("tls-example") { settings in
}
```

## Message Security
### Message Security

The other layer of security is about messages which are allowed to be sent to actors.

In general, you can audit your distributed API surface by searching your codebase for `distributed func` and `distributed var`, and verify the types involved in those calls.

The cluster also requires all types involved in remote calls to conform to `Codable` and will utilize `Encoder` and `Decoder` types to deserialize them. As such, the typical attack of "accidentally deserialize an arbitrary sub-class of a type" is prevented by the `Codable` type itself.

### Trusting message types
#### Trusting message types


### Trusting error types
#### Trusting error types

Error types may be transported back to a remote caller if they are trusted.

@@ -77,8 +77,7 @@ struct GreeterCodableError: Error, Codable {}
struct AnotherGreeterCodableError: Error, Codable {}
```


### Topics
## Topics

- ``Serialization/Settings``
- ``ClusterSystemSettings/tls``
@@ -61,6 +61,8 @@ extension LifecycleWatch {
///
/// - Parameters:
/// - watchee: the actor to watch
/// - file: the file path
/// - line: the line number
/// - Returns: the watched actor
@discardableResult
public func watchTermination<Watchee>(