Merge pull request #1889 from nirrozenbaum/0.21.0-rc1
📖 prep for release 0.21.0-rc1
kcp-ci-bot committed Mar 13, 2024
2 parents a08f159 + 5961f03 commit 5554d0a
Showing 4 changed files with 42 additions and 15 deletions.
2 changes: 1 addition & 1 deletion config/postcreate-hooks/kubestellar.yaml
@@ -184,7 +184,7 @@ spec:
- kubestellar
- oci://ghcr.io/kubestellar/kubestellar/controller-manager-chart
- --version
- "0.20.0"
- "0.21.0-rc1"
- --set
- "ControlPlaneName={{.ControlPlaneName}}"
env:
3 changes: 2 additions & 1 deletion docs/content/direct/README.md
@@ -34,7 +34,8 @@ including OCM Klusterlet for the WECs.
## Latest stable release

We do not yet have one that is proven very good.
The latest release is [0.20.0](../../../../v0.20.0).
The first release using the new architecture is [0.20.0](../../../../v0.20.0); it is feature-incomplete.
The latest release is [0.21.0-rc1](../../../../v0.21.0-rc1); it is also feature-incomplete.
See also [the release notes](release-notes.md).

## Architecture
19 changes: 13 additions & 6 deletions docs/content/direct/examples.md
@@ -14,18 +14,25 @@ See [pre-reqs](pre-reqs.md).

The following steps establish an initial state used in the examples below.

1. Set environment variables to hold the desired KubeStellar and OCM status add-on versions:

```shell
export KUBESTELLAR_VERSION=0.21.0-rc1
export OCM_STATUS_ADDON_VERSION=0.2.0-rc3
```
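
A quick sanity check that both variables are set (both versions should print non-empty):

```shell
# Both versions should appear in the output before proceeding.
echo "KubeStellar ${KUBESTELLAR_VERSION}, OCM status addon ${OCM_STATUS_ADDON_VERSION}"
```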

1. Create a Kind hosting cluster with nginx ingress controller and KubeFlex controller-manager installed:

```shell
kflex init --create-kind
```
If you are installing KubeStellar on an existing Kubernetes or OpenShift cluster, just use the command `kflex init`.
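
One way to confirm the hosting cluster is up (a sketch; after `kflex init --create-kind` the hosting context is named `kind-kubeflex`, as used later in these examples):

```shell
# The kind-kubeflex context should be listed.
kubectl config get-contexts
```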

1. Update the post-create-hooks in KubeFlex to install kubestellar with the v0.20.0 images:
1. Update the post-create-hooks in KubeFlex to install kubestellar with the desired images:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubestellar/kubestellar/v0.20.0/config/postcreate-hooks/kubestellar.yaml
kubectl apply -f https://raw.githubusercontent.com/kubestellar/kubestellar/v0.20.0/config/postcreate-hooks/ocm.yaml
kubectl apply -f https://raw.githubusercontent.com/kubestellar/kubestellar/v${KUBESTELLAR_VERSION}/config/postcreate-hooks/kubestellar.yaml
kubectl apply -f https://raw.githubusercontent.com/kubestellar/kubestellar/v${KUBESTELLAR_VERSION}/config/postcreate-hooks/ocm.yaml
```
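
To confirm the hooks are in place, one can list the PostCreateHook objects (a sketch; assumes the KubeFlex PostCreateHook CRD was installed by `kflex init`):

```shell
# Expect entries named kubestellar and ocm.
kubectl get postcreatehooks
```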

1. Create an inventory & mailbox space of type `vcluster` running *OCM* (Open Cluster Management)
@@ -47,7 +54,7 @@ which installs OCM on it.
and then install the status add-on:

```shell
helm --kube-context imbs1 upgrade --install status-addon -n open-cluster-management oci://ghcr.io/kubestellar/ocm-status-addon-chart --version v0.2.0-rc3
helm --kube-context imbs1 upgrade --install status-addon -n open-cluster-management oci://ghcr.io/kubestellar/ocm-status-addon-chart --version v${OCM_STATUS_ADDON_VERSION}
```
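
One way to verify that the add-on chart deployed (the release name `status-addon` comes from the command above):

```shell
# The status-addon release should report STATUS "deployed".
helm --kube-context imbs1 list -n open-cluster-management
```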

See [here](./architecture.md#ocm-status-add-on-agent) for more details on the add-on.
@@ -69,7 +76,7 @@ manager which connects to the `wds1` front-end and the `imbs1` OCM control plane
The transport controller image argument can be set to a specific image; if omitted, it defaults to the OCM transport plugin release that preceded the KubeStellar release being used.
For example, one can deploy the transport controller using the following command:
```shell
bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/0.21.0-rc1/scripts/deploy-transport-controller.sh) wds1 imbs1
bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/${KUBESTELLAR_VERSION}/scripts/deploy-transport-controller.sh) wds1 imbs1
```
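
To verify the transport controller came up, one can list the Deployments in the WDS namespace of the hosting cluster (a sketch; the `wds1-system` namespace and `kind-kubeflex` context are assumptions based on the setup above):

```shell
# Look for a transport controller Deployment with READY 1/1.
kubectl --context kind-kubeflex get deployments -n wds1-system
```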

1. Follow the steps to [create and register two clusters with OCM](example-wecs.md).
@@ -265,7 +272,7 @@ done
Apply the kubestellar controller-manager Helm chart with the option that allows delivery only of objects with the API group `workload.codeflare.dev`:

```shell
helm --kube-context kind-kubeflex upgrade --install -n wds2-system kubestellar oci://ghcr.io/kubestellar/kubestellar/controller-manager-chart --version 0.20.0 --set ControlPlaneName=wds2 --set APIGroups=workload.codeflare.dev
helm --kube-context kind-kubeflex upgrade --install -n wds2-system kubestellar oci://ghcr.io/kubestellar/kubestellar/controller-manager-chart --version ${KUBESTELLAR_VERSION} --set ControlPlaneName=wds2 --set APIGroups=workload.codeflare.dev
```

Check that the kubestellar controller for wds2 is started:
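
A minimal sketch of such a check, assuming the controller runs as a Deployment in the `wds2-system` namespace of the hosting cluster:

```shell
# The controller-manager Deployment should report READY 1/1.
kubectl --context kind-kubeflex get deployments -n wds2-system
```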
33 changes: 26 additions & 7 deletions docs/content/direct/release-notes.md
@@ -2,14 +2,33 @@

The following sections list the known issues for each release. The issue list is not differential (i.e., compared to previous releases) but a full list representing the overall state of the specific release.

## Every release after 0.15.X
## 0.21.0-rc1

### Major changes for 0.21.0-rc1

* This release introduces pluggable transport. Currently the only plugin is [the OCM transport plugin](https://github.com/kubestellar/ocm-transport-plugin).

### Bug fixes in 0.21.0-rc1

* Dynamic changes to WECs **are supported**. Existing Bindings and ManifestWorks will be updated when WECs are added/updated/deleted or when labels are added/updated/deleted on existing WECs.
* An update to a workload object that removes some BindingPolicies from the matching set _is_ handled correctly.
* Changes that happen while a controller is down are handled correctly:
  * If a workload object is deleted, or changed to remove some BindingPolicies from the matching set;
  * A BindingPolicy update that removes workload objects or clusters from their respective matching sets.

### Remaining limitations in 0.21.0-rc1

* Dynamic changes to WECs are not supported. Existing placements will not be updated when new WECs are added or when labels are added/deleted on existing WECs
* Removal of WorkStatus objects (in the transport namespace) is not supported and may not result in recreation of that object.
* Singleton: It is the user's responsibility to make sure there are no shared objects in two different (singleton) placements that target two different WECs. Currently there is no enforcement of that.
* Singleton status return: It is the user's responsibility to make sure that if a BindingPolicy requesting singleton status return matches a given workload object then no other BindingPolicy matches the same object. Currently there is no enforcement of that.
* Objects on two different WDSs shouldn't have the exact same identifier (same group, version, kind, name and namespace). Such a conflict is currently not identified.
* An update to a workload object that removes some Placements from the matching set is not handled correctly.
* Some operations are not handled correctly while the controller is down:
  * If a workload object is deleted, or changed to remove some Placements from the matching set, it will not be handled correctly.
  * A Placement update that removes workload objects or clusters from their respective matching sets is not handled correctly.

## 0.20.0 and its release candidates

* Dynamic changes to WECs are not supported. Existing ManifestWorks will not be updated when new WECs are added or when labels are added/deleted on existing WECs
* Removal of WorkStatus objects (in the transport namespace) is not supported and may not result in recreation of that object.
* Singleton status return: It is the user's responsibility to make sure that if a BindingPolicy requesting singleton status return matches a given workload object then no other BindingPolicy matches the same object. Currently there is no enforcement of that.
* Objects on two different WDSs shouldn't have the exact same identifier (same group, version, kind, name and namespace). Such a conflict is currently not identified.
* An update to a workload object that removes some BindingPolicies from the matching set is not handled correctly.
* Some operations are not handled correctly while the controller is down:
  * If a workload object is deleted, or changed to remove some BindingPolicies from the matching set, it will not be handled correctly.
  * A BindingPolicy update that removes workload objects or clusters from their respective matching sets is not handled correctly.
